ffc-2019.1.0.post0/.circleci/config.yml

version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    working_directory: ~/ffc-test
    steps:
      - checkout
      - run:
          name: Install dependencies
          # Install with sudo as tests not run as superuser in circleci/python
          command: |
            sudo apt-get update && sudo apt-get install libboost-math-dev
            sudo pip install cmake flake8 numpy pybind11 pytest six --upgrade
      - run:
          name: Install FEniCS dependencies
          command: |
            if [ "${CIRCLE_BRANCH}" == "next" ] ; then export DEP_BRANCH_NAME="next" ; else export DEP_BRANCH_NAME="master" ; fi
            pip install git+https://bitbucket.org/fenics-project/fiat.git@"${DEP_BRANCH_NAME}" --user
            pip install git+https://bitbucket.org/fenics-project/ufl.git@"${DEP_BRANCH_NAME}" --user
            pip install git+https://bitbucket.org/fenics-project/dijitso.git@"${DEP_BRANCH_NAME}" --user
      - run:
          name: Install FFC
          command: pip install -e . --user
      - run:
          name: Install pybind factory
          command: pip install ./libs/ffc-factory/ --user
      - run:
          name: Run unit tests
          command: |
            python -m pytest -v ./test/unit
            python -m pytest -v ./test/uflacs
      - run:
          name: Run regression tests
          command: |
            cd test/regression
            python test.py

ffc-2019.1.0.post0/AUTHORS

Credits for FFC
===============

Main authors:

  Anders Logg <logg@simula.no> http://home.simula.no/~logg/
  Kristian B. Ølgaard <k.b.oelgaard@gmail.com>
  Marie Rognes <meg@simula.no>

Main contributors:

  Garth N. Wells <gnw20@cam.ac.uk> http://www.eng.cam.ac.uk/~gnw20/
  Martin Sandve Alnæs <martinal@simula.no>

Contributors:

  Jan Blechta <blechta@karlin.mff.cuni.cz>
  Peter Brune <brune@uchicago.edu>
  Joachim B Haga <jobh@broadpark.no>
  Johan Jansson <johanjan@math.chalmers.se> http://www.math.chalmers.se/~johanjan/
  Robert C. Kirby <kirby@cs.uchicago.edu> http://people.cs.uchicago.edu/~kirby/
  Matthew G. Knepley <knepley@mcs.anl.gov> http://www-unix.mcs.anl.gov/~knepley/
  Dag Lindbo <dag@f.kth.se> http://www.f.kth.se/~dag/
  Fabian Löschner <fabian.loeschner@rwth-aachen.de> https://w1th0utnam3.github.io/
  Ola Skavhaug <skavhaug@simula.no> http://home.simula.no/~skavhaug/
  Andy R. Terrel <aterrel@uchicago.edu> http://people.cs.uchicago.edu/~aterrel/
  Ivan Yashchuk <ivan.yashchuk@aalto.fi>

Credits for UFC
===============

UFC was merged into FFC 2014-02-18. Below is the list of credits for UFC at
the time of the merge.

Main authors:

  Martin Sandve Alnaes
  Anders Logg
  Kent-Andre Mardal
  Hans Petter Langtangen

Main contributors:

  Asmund Odegard
  Kristian Oelgaard
  Johan Hake
  Garth N. Wells
  Marie E. Rognes
  Johannes Ring

Credits for UFLACS
==================

UFLACS was merged into FFC 2016-02-16.

Author:

  Martin Sandve Alnæs

Contributors:

  Anders Logg
  Garth N. Wells
  Johannes Ring
  Matthias Liertzer
  Steffen Müthing

ffc-2019.1.0.post0/COPYING

                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc.
 Everyone is permitted to copy and distribute verbatim copies of this
 license document, but changing it is not allowed.

 [The remainder of this file is the complete, unmodified text of the GNU
  General Public License, version 3: the preamble, sections 0-17 of the
  terms and conditions, and the "How to Apply These Terms to Your New
  Programs" appendix.]
ffc-2019.1.0.post0/COPYING.LESSER

                   GNU LESSER GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc.
 Everyone is permitted to copy and distribute verbatim copies of this
 license document, but changing it is not allowed.

 [The remainder of this file is the complete, unmodified text of the GNU
  Lesser General Public License, version 3 (sections 0-6), which supplements
  the GNU GPL version 3 above with additional permissions.]
ffc-2019.1.0.post0/ChangeLog.rst

Changelog
=========

2019.1.0 (2019-04-17)
---------------------

- Support ``NodalEnrichedElement``, see ``demo/NodalMini.ufl``

2018.1.0 (2018-06-14)
---------------------

- Remove python2 support.

2017.2.0 (2017-12-05)
---------------------

- Some fixes for ufc::eval for esoteric element combinations
- Reimplement code generation for all ufc classes with new class
  ufc::coordinate_mapping which can map between coordinates, compute
  jacobians, etc. for a coordinate mapping parameterized by a specific
  finite element.
- New functions in ufc::finite_element:

  - evaluate_reference_basis
  - evaluate_reference_basis_derivatives
  - transform_reference_basis_derivatives
  - tabulate_reference_dof_coordinates

- New functions in ufc::dofmap:

  - num_global_support_dofs
  - num_element_support_dofs

- Improved docstrings for parts of ufc.h
- FFC now accepts Q and DQ finite element families defined on quadrilaterals
  and hexahedrons
- Some fixes for ufc_geometry.h for quadrilateral and hexahedron cells

2017.1.0.post2 (2017-09-12)
---------------------------

- Change PyPI package name to fenics-ffc.

2017.1.0 (2017-05-09)
---------------------

- Let ffc -O parameter take an optional integer level like -O2, -O0
- Implement blockwise optimizations in uflacs code generation
- Expose uflacs optimization parameters through parameter system

2016.2.0 (2016-11-30)
---------------------

- Jit compiler now compiles elements separately from forms to avoid
  duplicate work
- Add parameter max_signature_length to optionally shorten signatures in
  the jit cache
- Move uflacs module into ffc.uflacs
- Remove installation of pkg-config and CMake files (UFC path and compiler
  flags are available from ffc module)
- Add dependency on dijitso and remove dependency on instant
- Add experimental Bitbucket pipelines
- Tidy the repo after UFC and UFLACS merge, and general spring cleanup. This
  includes removal of instructions how to merge two repos, commit hash
  c8389032268041fe94682790cb773663bdf27286.

2016.1.0 (2016-06-23)
---------------------

- Add function get_ufc_include to get path to ufc.h (see the sketch below)
- Merge UFLACS into FFC
- Generalize ufc interface to non-affine parameterized coordinates
- Add ufc::coordinate_mapping class
- Make ufc interface depend on C++11 features requiring gcc version >= 4.8
- Add function ufc_signature() to the form compiler interface
- Add function git_commit_hash()
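The 2016.1.0 entries above name three small introspection helpers:
get_ufc_include(), ufc_signature() and git_commit_hash(). A minimal sketch of
how they might be called, assuming they are importable from the top-level
``ffc`` package (the import location is an assumption, not confirmed by the
entries themselves)::

   # Assumed usage of the helpers named in the 2016.1.0 entries; only the
   # function names come from the changelog, the import location is a guess.
   import ffc

   print(ffc.get_ufc_include())   # directory containing ufc.h
   print(ffc.ufc_signature())     # signature of the bundled UFC interface
   print(ffc.git_commit_hash())   # FFC commit recorded at build time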
1.6.0 (2015-07-28)
------------------

- Rename and modify a number of UFC interface functions. See docstrings in
  ufc.h for details.
- Bump required SWIG version to 3.0.3
- Disable dual basis (tabulate_coordinates and evaluate_dofs) for enriched
  elements until a correct implementation is brought up

1.5.0 (2015-01-12)
------------------

- Remove FErari support
- Add support for new integral type custom_integral
- Support for new form compiler backend "uflacs", downloaded separately

1.4.0 (2014-06-02)
------------------

- Add support for integrals that know which coefficients they use
- Many bug fixes for facet integrals over manifolds
- Merge UFC into FFC; ChangeLog for UFC appended below
- Various updates mirroring UFL changes
- Experimental: New custom integral with user-defined quadrature points

1.3.0 (2014-01-07)
------------------

- Fix bug with runtime check of SWIG version
- Move DOLFIN wrappers here from DOLFIN
- Add support for new UFL operators cell_avg and facet_avg
- Add new reference data handling system; data is now kept in an external
  repository
- Fix bugs with ignoring quadrature rule arguments
- Use cpp optimization by default in jit compiler

1.2.0 (2013-03-24)
------------------

- New feature: Add basic support for point integrals on vertices
- New feature: Add general support for m-dimensional cells in n-dimensional
  space (n >= m; n, m = 1, 2, 3)

1.1.0 (2013-01-07)
------------------

- Fix bug for Conditionals related to DG constant Coefficients. Bug #1082048.
- Fix bug for Conditionals, precedence rules for And and Or. Bug #1075149.
- Changed data structure from list to deque when a pop(0) operation is
  needed, speeding up the split_expression operation considerably
- Other minor fixes

1.0.0 (2011-12-07)
------------------

- Issue warning when form integration requires more than 100 points

1.0-rc1 (2011-11-28)
--------------------

- Fix bug with coordinates on facet integrals (intervals). Bug #888682.
- Add support for FacetArea, new geometric quantity in UFL.
- Fix bug in optimised quadrature code, AlgebraOperators demo. Bug #890859.
- Fix bug with undeclared variables in optimised quadrature code. Bug #883202.

1.0-beta2 (2011-10-11)
----------------------

- Added support for bessel functions, bessel_* (I, J, K, Y), in UFL (see the
  sketch below).
- Added support for the error function, erf(), new math function in UFL.
- Fix dof map 'need_entities' for Real spaces
- Improve performance for basis function computation
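The 1.0-beta2 entries above add UFL's Bessel functions and erf() to the set of
operators FFC can compile. A small hypothetical form using them (not one of
the shipped demos); in a real ``.ufl`` input the import line is normally
omitted because FFC pre-populates the UFL namespace::

   from ufl import (Coefficient, FiniteElement, TestFunction, bessel_J, dx,
                    erf, triangle)

   element = FiniteElement("Lagrange", triangle, 1)
   v = TestFunction(element)
   f = Coefficient(element)

   # Linear form mixing the special functions listed in 1.0-beta2
   L = (erf(f) + bessel_J(1, f)) * v * dx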
1.0-beta (2011-08-11)
---------------------

- Improve formatting of floats with up to one non-zero decimal place.
- Fix bug involving zeros in products and sums. Bug #804160.
- Fix bug for new conditions '&&', '||' and '!' in UFL. Bug #802560.
- Fix bug involving VectorElement with dim=1. Bug #798578.
- Fix bug with mixed element of symmetric tensor elements. Bug #745646.
- Fix bug when using geometric coordinates with one quadrature point

0.9.10 (2011-05-16)
-------------------

- Change license from GPL v3 or later to LGPL v3 or later
- Add some schemes for low-order simplices
- Request quadrature schemes by polynomial degree (no longer by number of
  points in each direction)
- Get quadrature schemes via ffc.quadrature_schemes
- Improved lock handling in JIT compiler
- Include common_cell in form signature
- Add possibility to set swig binary and swig path

0.9.9 (2011-02-23)
------------------

- Add support for generating error control forms with option -e
- Updates for UFC 2.0
- Set minimal degree to 1 in automatic degree selection for expressions
- Add command-line option -f no_ferari
- Add support for plotting of elements
- Add utility function compute_tensor_representation

0.9.4 (2010-09-01)
------------------

- Added memory cache in jit(), for preprocessed forms
- Added support for Conditional and added demo/Conditional.ufl.
- Added support for new geometric quantity Circumradius in UFL.
- Added support for new geometric quantity CellVolume in UFL.

0.9.3 (2010-07-01)
------------------

- Make global_dimension for Real return an int instead of a double, bug #592088
- Add support for facet normal in 1D.
- Expose -feliminate_zeros for quadrature optimisations to give the user
  more control
- Remove return of form in compile_form
- Remove object_names argument to compile_element
- Rename ElementUnion -> EnrichedElement
- Add support for tan() and inverse trigonometric functions
- Added support for ElementUnion (i.e. span of combinations of elements)
- Added support for Bubble elements
- Added support for UFL.SpatialCoordinate.

0.9.2 (2010-02-17)
------------------

- Bug fix in removal of unused variables in Piola-mapped terms for tensor
  representation

0.9.1 (2010-02-15)
------------------

- Add back support for FErari optimizations
- Bug fixes in JIT compiler

0.9.0 (2010-02-02)
------------------

- Updates for FIAT 0.9.0
- Updates for UFC 1.4.0 (now supporting the full interface)
- Automatic selection of representation
- Change quadrature_order --> quadrature_degree
- Split compile() --> compile_form(), compile_element() (see the sketch below)
- Major cleanup and reorganization of code (flatter directories)
- Updates for changes in UFL: Argument, Coefficient, FormData
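The 0.9.0 entry above splits the old compile() entry point into compile_form()
and compile_element(). A rough sketch of driving the form compiler from
Python, assuming compile_form accepts a list of forms and a prefix keyword
(the exact signature and return format have varied between releases)::

   from ufl import (FiniteElement, TestFunction, TrialFunction, dx, grad,
                    inner, triangle)
   from ffc import compile_form

   element = FiniteElement("Lagrange", triangle, 1)
   u = TrialFunction(element)
   v = TestFunction(element)
   a = inner(grad(u), grad(v)) * dx   # Poisson stiffness form

   # Generate UFC code for the form; the structure of the returned code
   # object differs between FFC releases, so it is only inspected here.
   code = compile_form([a], prefix="Poisson")
   print(type(code))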
0.7.1
-----

- Handle setting the quadrature degree when it is set to None in a UFL form
- Added demo: HyperElasticity.ufl

0.7.0
-----

- Move contents of TODO to: https://blueprints.launchpad.net/ffc
- Support for restriction of finite elements to only consider facet dofs
- Use quadrature_order from metadata when integrating terms using tensor
  representation
- Use a loop to reset the entries of the local element tensor
- Added new symbolic classes for quadrature optimisation (speed up compilation)
- Added demos: Biharmonic.ufl, div(grad(v)) term; ReactionDiffusion.ufl,
  tuple notation; MetaData.ufl, how to attach metadata to the measure;
  ElementRestriction.ufl, restriction of elements to facets
- Tabulate the coordinates of the integration points in the
  tabulate_tensor() function
- Change command line option '-f split_implementation' -> '-f split'
- Renaming of files and restructuring of the compiler directory
- Added option -q rule (--quadrature-rule rule) to specify which rule to use
  for integration of a given integral. (Can also be set through the metadata
  via "quadrature_rule".) No rules have yet been implemented, so the default
  is the FIAT rule.
- Remove support for old style .form files/format

0.6.2 (2009-04-07)
------------------

- Experimental support for UFL, supporting both .form and .ufl
- Moved configuration and construction of python extension module to ufc_module

0.6.1 (2009-02-18)
------------------

- Initial work on UFL transition
- Minor bug fixes
- The version of ufc and swig is included in the form signature
- Better system configuration for JIT compiled forms
- The JIT compiled python extension module uses shared_ptr for all classes

0.6.0 (2009-01-05)
------------------

- Update DOLFIN output format (-l dolfin) for DOLFIN 0.9.0
- Cross-platform fixes for test scripts
- Minor bug fix for quadrature code generation (forms affected by this bug
  would not be able to compile)
- Fix bug with output of ``*.py``.
- Permit dot product between rectangular matrices (Frobenius norm)

0.5.1 (2008-10-20)
------------------

- New operator skew()
- Allow JIT compilation of elements and dof maps
- Rewrite JIT compiler to rely on Instant for caching
- Display flop count for evaluating the element tensor during compilation
- Add arguments language and representation to options dictionary
- Fix installation on Windows
- Add option -f split_implementation for separate .h and .cpp files

0.5.0 (2008-06-23)
------------------

- Remove default restriction +/- for Constant
- Make JIT optimization (-O0 / -O2) optional
- Add in-memory cache to speed up JIT compiler for repeated assembly
- Allow subdomain integrals without needing the full range of integrals
- Allow simple subdomain integral specification dx(0), dx(1), ds(0) etc.
  (see the sketch below)
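The 0.5.0 entry above introduces the dx(0), dx(1), ds(0) subdomain notation,
and the 0.7.0 entry mentions attaching metadata to a measure
(demo/MetaData.ufl). A small hypothetical form combining the two; the metadata
key shown is an assumption for illustration, not taken from the demo::

   from ufl import (Coefficient, FiniteElement, TestFunction, TrialFunction,
                    ds, dx, triangle)

   element = FiniteElement("Lagrange", triangle, 1)
   u = TrialFunction(element)
   v = TestFunction(element)
   f = Coefficient(element)

   # Separate terms on cell subdomains 0 and 1 plus a boundary term on facet
   # region 0, as allowed since 0.5.0
   a = u * v * dx(0) + f * u * v * dx(1) + u * v * ds(0)

   # Metadata attached to a measure (assumed key; compare demo/MetaData.ufl)
   a_meta = u * v * dx(0, metadata={"quadrature_degree": 2})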
quadrature compilation mode (experimental) - Simplification of forms - Operators sqrt(), abs() and inverse - Improved Python interface - Add flag -f precision=n - Generate code for basis functions and derivatives - Use Set from set module for Python2.3 compatibility 0.3.5 (2006-12-01) ------------------ - Bug fixes - Move from Numeric to numpy 0.3.4 (2006-10-27) ------------------ - Updates for new DOLFIN mesh library - Add support for evaluation of functionals - Add operator outer() for outer product of vector-valued functions - Enable optimization of linear forms (in addition to bilinear forms) - Remove DOLFIN SWIG format - Fix bug in ffc -v/--version (thanks to Ola Skavhaug) - Consolidate DOLFIN and DOLFIN SWIG formats (patch from Johan Jansson) - Fix bug in optimized compilation (-O) for some forms ("too many values to unpack") 0.3.3 (2006-09-05) ------------------ - Fix bug in operator div() - Add operation count (number of multiplications) with -d0 - Add hint for printing more informative error messages (flag -d1) - Modify implementation of vertexeval() - Add support for boundary integrals (Garth N. Wells) 0.3.2 (2006-04-01) ------------------ - Add support for FErari optimizations, new flag -O 0.3.1 (2006-03-28) ------------------ - Remove verbose output: silence means success - Generate empty boundary integral eval() to please Intel C++ compiler - New classes TestFunction and TrialFunction 0.3.0 (2006-03-01) ------------------ - Work on manual, document command-line and user-interfaces - Name change: u --> U - Add compilation of elements without form - Add generation of FiniteElementSpec in DOLFIN formats - Fix bugs in raw and XML formats - Fix bug in LaTeX format - Fix path and predefine tokens to enable import in .form file - Report number of entries in reference tensor during compilation 0.2.5 (2005-12-28) ------------------ - Add demo Stabilization.form - Further speedup computation of reference tensor (use ufunc Numeric.add) 0.2.4 (2005-12-05) ------------------ - Report time taken to compute reference tensor - Restructure computation of reference tensor to use less memory. As a side effect, the speed has also been improved. 
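The 0.3.x entries above mention boundary integrals, evaluation of functionals, the outer() operator, and the TestFunction/TrialFunction classes. The following minimal sketch, which is not part of the FFC sources (the element and coefficient names are made up), shows how these might appear together in the form language.

    # Hypothetical example: a bilinear form using outer() and a functional
    # with a boundary integral over ds
    vector = VectorElement("Lagrange", triangle, 1)
    v = TestFunction(vector)
    u = TrialFunction(vector)
    f = Coefficient(vector)
    a = inner(outer(u, f), outer(v, f))*dx
    M = dot(f, f)*ds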
- Update for DOLFIN name change node --> vertex - Update finite element interface for DOLFIN - Check for FIAT bug in discontinuous vector Lagrange elements - Fix signatures for vector-valued elements 0.2.3 (2005-11-28) ------------------ - New fast Numeric/BLAS based algorithm for computing reference tensor - Bug fix: reassign indices for complete subexpressions - Bug fix: operator Function * Integral - Check tensor notation for completeness - Bug fix: mixed elements with more than two function spaces - Don't declare unused coefficients (or gcc will complain) 0.2.2 (2005-11-14) ------------------ - Add command-line argument -v / --version - Add new operator mean() for projection onto piecewise constants - Add support for projections - Bug fix for higher order mixed elements: declaration of edge/face_ordering - Generate code for sub elements of mixed elements - Add new test form: TensorWeighteLaplacian - Add new test form: EnergyNorm - Fix bugs in mult() and vec() (skavhaug) - Reset correct entries of G for interior in BLAS mode - Only assign to entries of G that meet nonzero entries of A in BLAS mode 0.2.1 (2005-10-11) ------------------ - Only generate declarations that are needed according to format - Check for missing options and add missing default options - Simplify usage of FFC as Python module: from ffc import * - Fix bug in division with constants - Generate output for BLAS (with option -f blas) - Add new XML output format - Remove command-line option --license (collect in compiler options -f) - Modify demo Mass.form to use 3:rd order Lagrange on tets - Fix bug in dofmap() for equal order mixed elements - Add compiler option -d debuglevel - Fix Python Numeric bug: vdot --> dot 0.2.0 (2005-09-23) ------------------ - Generate function vertexeval() for evaluation at vertices - Add support for arbitrary mixed elements - Add man page - Work on manual, chapters on form language, quickstart and installation - Handle exceptions gracefully in command-line interface - Use new template fenicsmanual.cls for manual - Add new operators grad, div, rot (curl), D, rank, trace, dot, cross - Factorize common reference tensors from terms with equal signatures - Collect small building blocks for form algebra in common module tokens.py 0.1.9 (2005-07-05) ------------------ - Complete support for general order Lagrange elements on triangles and tetrahedra - Compute reordering of dofs on tets correctly - Update manual with ordering of dofs - Break compilation into two phases: build() and write() - Add new output format ASE (Matt Knepley) - Improve python interface to FFC - Remove excessive logging at compilation - Fix bug in raw output format 0.1.8 (2005-05-17) ------------------ - Access data through map in DOLFIN format - Experimental support for computation of coordinate maps - Add first draft of manual - Experimental support for computation of dof maps - Allow specification of the number of components for vector Lagrange - Count the number of zeros dropped - Fix bug in handling command-line arguments - Use module sets instead of built-in set (fix for Python 2.3) - Handle constant indices correctly (bug reported by Garth N. 
Wells) 0.1.7 (2005-05-02) ------------------ - Write version number to output - Add command-line option for choosing license - Display usage if no input is given - Bug fix for finding correct prefix of file name - Automatically choose name of output file (if not supplied) - Use FIAT tabulation mode for vector-valued elements (speedup a factor 5) - Use FIAT tabulation mode for scalar elements (speedup a factor 1000) - Fix bug in demo elasticity.form (change order of u and v) - Make references to constants const in DOLFIN format - Don't generate code for unused entries of geometry tensor - Update formats to write numeric constants with full precision 0.1.6 (2005-03-17) ------------------ - Add support for mixing multiple different finite elements - Add support for division with constants - Fix index bug (reverse order of multi-indices) 0.1.5 (2005-03-14) ------------------ - Automatically choose the correct quadrature rule for precomputation - Add test program for verification of FIAT quadrature rules - Fix bug for derivative of sum - Improve common interface for debugging: add indentation - Add support for constants - Fix bug for sums of more than one term (make copies of references in lists) - Add '_' in naming of geometry tensor (needed for large dimensions) - Add example elasticity.form - Cleanup build_indices() 0.1.4-1 (2005-02-07) -------------------- - Fix version number and remove build directory from tarball 0.1.4 (2005-02-04) ------------------ - Fix bug for systems, seems to work now - Add common interface for debugging - Modify DOLFIN output to initialize functions - Create unique numbers for each function - Use namespaces for DOLFIN output instead of class names - Temporary implementation of dof mapping for vector-valued elements - Make DOLFIN output format put entries into PETSc block - Change name of coefficient data: c%d[%d] -> c[%d][%d] - Change ordering of basis functions (one component at a time) - Add example poissonsystem.form - Modifications for new version of FIAT (FIAT-L) FIAT version 0.1 a factor 5 slower (no memoization) FIAT version 0.1.1 a little faster, only a factor 2 slower - Add setup.py script 0.1.3 (2004-12-06) ------------------ - Fix bug in DOLFIN format (missing value when zero) - Add output of reference tensor to LaTeX format - Make raw output format print data with full precision - Add component diagram - Change order of declaration of basis functions - Add new output format raw 0.1.2 (2004-11-17) ------------------ - Add command-line interface ffc - Add support for functions (coefficients) - Add support for constants - Allow multiple forms (left- and right-hand side) in same file - Add test examples: poisson.form, mass.form, navierstokes.form - Wrap FIAT to create vector-valued finite element spaces - Check ranks of operands - Clean up algebra, add base class Element - Add some documentation (class diagram) - Add support for LaTeX output 0.1.1-1 (2004-11-10) -------------------- - Add missing file declaration.py 0.1.1 (2004-11-10) ------------------ - Make output variable names configurable - Clean up DOLFIN code generation - Post-process form to create reference, geometry, and element tensors - Experimental support for general tensor-valued elements - Clean up and improve index reassignment - Use string formatting for generation of output - Change index ordering to access row-wise 0.1.0 (2004-10-22) ------------------ - First iteration of the FEniCS Form Compiler - Change boost::shared_ptr --> std::shared_ptr ChangeLog for UFC ================= UFC
was merged into FFC 2014-02-18. Below is the ChangeLog for UFC at the time of the merge. From this point onward, UFC version numbering restarts at the same version number as FFC and the rest of FEniCS. 2.3.0 (2014-01-07) ------------------ - Use std::vector > for topology data - Remove vertex coordinates from ufc::cell - Improve detection of compatible Python libraries - Add current swigversion to the JIT compiled extension module - Remove dofmap::max_local_dimension() - Remove cell argument from dofmap::local_dimension() 2.2.0 (2013-03-24) ------------------ - Add new class ufc::point_integral - Use CMake to configure JIT compilation of forms - Generate UseUFC.cmake during configuration - Remove init_mesh(), init_cell(), init_mesh_finalize() - Remove ufc::mesh and add a vector of num_mesh_entities to global_dimension() and tabulate_dofs(). 2.1.0 (2013-01-07) ------------------ - Fix bug introduced by SWIG 2.0.5, which treated uint as Python long - Add optimization SWIG flags, fixing bug lp:987657 2.0.5 (2011-12-07) ------------------ - Improve configuration of libboost-math 2.0.4 (2011-11-28) ------------------ - Add boost_math_tr1 to library flags when JIT compiling an extension module 2.0.3 (2011-10-26) ------------------ - CMake config improvements 2.0.2 (2011-08-11) ------------------ - Some tweaks of installation 2.0.1 (2011-05-16) ------------------ - Make SWIG version >= 2.0 a requirement - Add possibility to set swig binary and swig path - Add missing const for map_{from,to}_reference_cell 2.0.0 (2011-02-23) ------------------ - Add quadrature version of tabulate_tensor - Add finite_element::map_{from,to}_reference_cell - Add finite_element::{topological,geometric}_dimension - Add dofmap::topological_dimension - Rename num_foo_integrals --> num_foo_domains - Rename dof_map --> dofmap - Add finite_element::create - Add dofmap::create 1.4.2 (2010-09-01) ------------------ - Move to CMake build system 1.4.1 (2010-07-01) ------------------ - Make functions introduced in UFC 1.1 mandatory (now pure virtual) - Update templates to allow constructor arguments in form classes 1.4.0 (2010-02-01) ------------------ - Changed behavior of create_foo_integral (returning 0 when integral is 0) - Bug fixes in installation 1.2.0 (2009-09-23) ------------------ - Add new function ufc::dof_map::max_local_dimension() - Change ufc::dof_map::local_dimension() to ufc::dof_map::local_dimension(const ufc::cell c) 1.1.2 (2009-04-07) ------------------ - Added configuration and building of python extension module to ufc_utils.build_ufc_module 1.1.1 (2009-02-20) ------------------ - The extension module is now not built if the conditions for shared_ptr are not met - Added SCons build system - The swig generated extension module will be compiled with shared_ptr support if boost is found on the system and swig is of version 1.3.35 or higher - The swig generated extension module is named ufc.py and exposes all ufc base classes to python - Added a swig generated extension module to ufc.
UFC now depends on swig - Changed name of the python utility module from "ufc" to "ufc_utils" 1.1.0 (2008-02-18) ------------------ - Add new function ufc::finite_element::evaluate_dofs - Add new function ufc::finite_element::evaluate_basis_all - Add new function ufc::finite_element::evaluate_basis_derivatives_all - Add new function ufc::dof_map::geometric_dimension - Add new function ufc::dof_map::num_entity_dofs - Add new function ufc::dof_map::tabulate_entity_dofs 1.0.0 (2007-06-17) ------------------ - Release of UFC 1.0 ffc-2019.1.0.post0/ChangeLog.uflacs000066400000000000000000000005741346026536000166070ustar00rootroot000000000000001.7.0dev - Support for Piola mappings 1.6.0 [2015-07-28] - Minor updates and cleanups 1.5.0 [2015-01-12] - New factorization algorithm - Interior facet integrals - Most UFL features in place - Preparations for Python 3 support 1.3.0 [2014-02-19] - Unofficial non-release - Last version working with FEniCS 1.3 - Tagged uflacs-1.3 in the git repository but not officially released ffc-2019.1.0.post0/INSTALL000066400000000000000000000013061346026536000146040ustar00rootroot00000000000000To install FFC, type pip install --prefix=/path/to/install/ . This will install FFC in the default Python path of your system, something like /path/to/install/lib/python3.6/site-packages/. To specify C++ compiler and/or compiler flags used for compiling UFC and JITing, set environment variables CXX, CXXFLAGS respectively before invoking setup.py. The installation script requires the Python module distutils, which for Debian users is available with the python-dev package. Other dependencies are listed in the file README. For detailed installation instructions, see the FFC user manual which is available on http://fenicsproject.org/ and also in the subdirectory doc/manual/ of this source tree. ffc-2019.1.0.post0/LICENSE000066400000000000000000000005211346026536000145560ustar00rootroot00000000000000The header files ufc.h and ufc_geometry.h are released into the public domain. ------------------------------------------------------------------------------ Other files, unless stated otherwise in their head, are licensed by GNU Lesser General Public License, version 3, or later. See COPYING and COPYING.LESSER for the license text. ffc-2019.1.0.post0/MANIFEST.in000066400000000000000000000006661346026536000153210ustar00rootroot00000000000000include AUTHORS include COPYING include COPYING.LESSER include ChangeLog include ChangeLog.uflacs include INSTALL include LICENSE include requirements.txt recursive-include bench * recursive-include demo * recursive-include doc * recursive-include ffc *.in recursive-include libs * recursive-include test * recursive-exclude test/regression/ffc-reference-data * recursive-exclude test/regression/output * global-exclude __pycache__ *.pyc ffc-2019.1.0.post0/README.rst000066400000000000000000000047071346026536000152520ustar00rootroot00000000000000============================= FFC: The FEniCS Form Compiler ============================= FFC is a compiler for finite element variational forms. From a high-level description of the form, it generates efficient low-level C++ code that can be used to assemble the corresponding discrete operator (tensor). In particular, a bilinear form may be assembled into a matrix and a linear form may be assembled into a vector. FFC may be used either from the command line (by invoking the ``ffc`` command) or as a Python module (``import ffc``). FFC is part of the FEniCS Project. 
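As a quick illustration (this example is not shipped with FFC, and the exact Python API varies between releases), a small input file such as the one below could be compiled on the command line with ``ffc Poisson.ufl``, or from Python via functions like ``ffc.compile_form``::

    # Poisson.ufl (hypothetical example input)
    element = FiniteElement("Lagrange", triangle, 1)
    v = TestFunction(element)
    u = TrialFunction(element)
    f = Coefficient(element)
    a = inner(grad(v), grad(u))*dx
    L = f*v*dx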
For more information, visit http://www.fenicsproject.org We are currently working on a highly experimental version of FFC at: https://github.com/FEniCS/ffcx Documentation ============= Documentation can be viewed at http://fenics-ffc.readthedocs.org/. .. image:: https://readthedocs.org/projects/fenics-ffc/badge/?version=latest :target: http://fenics.readthedocs.io/projects/ffc/en/latest/?badge=latest :alt: Documentation Status Automated Testing ================= We use Bitbucket Pipelines and Atlassian Bamboo to perform automated testing. .. image:: https://bitbucket-badges.useast.atlassian.io/badge/fenics-project/ffc.svg :target: https://bitbucket.org/fenics-project/ffc/addon/pipelines/home :alt: Pipelines Build Status .. image:: http://fenics-bamboo.simula.no:8085/plugins/servlet/wittified/build-status/FFC-FD :target: http://fenics-bamboo.simula.no:8085/browse/FFC-FD/latest :alt: Bamboo Build Status Code Coverage ============= Code coverage reports can be viewed at https://coveralls.io/bitbucket/fenics-project/ffc. .. image:: https://coveralls.io/repos/bitbucket/fenics-project/ffc/badge.svg?branch=master :target: https://coveralls.io/bitbucket/fenics-project/ffc?branch=master :alt: Coverage Status License ======= This program is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this program. If not, see . ffc-2019.1.0.post0/bench/000077500000000000000000000000001346026536000146325ustar00rootroot00000000000000ffc-2019.1.0.post0/bench/Convection_3D_1.ufl000066400000000000000000000017311346026536000202210ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2016 Martin Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # Form for convection matrix in a segregated Navier-Stokes solver degree = 1 U = FiniteElement("Lagrange", tetrahedron, degree) V = VectorElement("Lagrange", tetrahedron, degree) v = TestFunction(U) u = TrialFunction(U) w = Coefficient(V) a = dot(grad(u), w)*v*dx ffc-2019.1.0.post0/bench/Convection_3D_2.ufl000066400000000000000000000017311346026536000202220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2016 Martin Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # Form for convection matrix in a segregated Navier-Stokes solver degree = 2 U = FiniteElement("Lagrange", tetrahedron, degree) V = VectorElement("Lagrange", tetrahedron, degree) v = TestFunction(U) u = TrialFunction(U) w = Coefficient(V) a = dot(grad(u), w)*v*dx ffc-2019.1.0.post0/bench/HyperElasticity.ufl000066400000000000000000000040231346026536000204630ustar00rootroot00000000000000# Copyright (C) 2009 Harish Narayanan # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-09-29 # Last changed: 2011-07-01 # # The bilinear form a(u, v) and linear form L(v) for # a hyperelastic model. (Copied from dolfin/demo/pde/hyperelasticity/cpp) # # Compile this form with FFC: ffc HyperElasticity.ufl. # Coefficient spaces element = VectorElement("Lagrange", tetrahedron, 1) # Coefficients v = TestFunction(element) # Test function du = TrialFunction(element) # Incremental displacement u = Coefficient(element) # Displacement from previous iteration B = Coefficient(element) # Body force per unit mass T = Coefficient(element) # Traction force on the boundary # Kinematics d = len(u) I = Identity(d) # Identity tensor F = I + grad(u) # Deformation gradient C = F.T*F # Right Cauchy-Green tensor E = (C - I)/2 # Euler-Lagrange strain tensor E = variable(E) # Material constants mu = Constant(tetrahedron) # Lame's constants lmbda = Constant(tetrahedron) # Strain energy function (material model) psi = lmbda/2*(tr(E)**2) + mu*tr(E*E) S = diff(psi, E) # Second Piola-Kirchhoff stress tensor P = F*S # First Piola-Kirchoff stress tensor # The variational problem corresponding to hyperelasticity L = inner(P, grad(v))*dx - inner(B, v)*dx - inner(T, v)*ds a = derivative(L, u, du) ffc-2019.1.0.post0/bench/MassH1_2D_1.ufl000066400000000000000000000014471346026536000172110ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
element = FiniteElement("Lagrange", triangle, 1) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_2D_2.ufl000066400000000000000000000014471346026536000172120ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 2) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_2D_3.ufl000066400000000000000000000014471346026536000172130ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 3) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_2D_4.ufl000066400000000000000000000014471346026536000172140ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 4) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_2D_5.ufl000066400000000000000000000014471346026536000172150ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 5) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_3D_1.ufl000066400000000000000000000014521346026536000172060ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 1) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassH1_3D_2.ufl000066400000000000000000000014521346026536000172070ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 2) v = TestFunction(element) u = TrialFunction(element) a = v*u*dx ffc-2019.1.0.post0/bench/MassHcurl_2D_1.ufl000066400000000000000000000014621346026536000200130ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("N1curl", triangle, 1) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHcurl_2D_2.ufl000066400000000000000000000014621346026536000200140ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("N1curl", triangle, 2) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHcurl_2D_3.ufl000066400000000000000000000014621346026536000200150ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("N1curl", triangle, 3) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHcurl_2D_4.ufl000066400000000000000000000014621346026536000200160ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("N1curl", triangle, 4) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHcurl_2D_5.ufl000066400000000000000000000014621346026536000200170ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("N1curl", triangle, 5) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHdiv_2D_1.ufl000066400000000000000000000014521346026536000176270ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("BDM", triangle, 1) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHdiv_2D_2.ufl000066400000000000000000000014521346026536000176300ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("BDM", triangle, 2) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHdiv_2D_3.ufl000066400000000000000000000014521346026536000176310ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("BDM", triangle, 3) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHdiv_2D_4.ufl000066400000000000000000000014521346026536000176320ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("BDM", triangle, 4) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/MassHdiv_2D_5.ufl000066400000000000000000000014521346026536000176330ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("BDM", triangle, 5) v = TestFunction(element) u = TrialFunction(element) a = inner(v, u)*dx ffc-2019.1.0.post0/bench/NavierStokes_2D_1.ufl000066400000000000000000000016321346026536000205260ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . scalar = FiniteElement("Lagrange", triangle, 1) vector = VectorElement("Lagrange", triangle, 1) v = TestFunction(vector) u = TrialFunction(vector) w = Coefficient(vector) rho = Coefficient(scalar) a = rho*inner(v, grad(w)*u)*dx ffc-2019.1.0.post0/bench/NavierStokes_2D_2.ufl000066400000000000000000000016321346026536000205270ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . scalar = FiniteElement("Lagrange", triangle, 2) vector = VectorElement("Lagrange", triangle, 2) v = TestFunction(vector) u = TrialFunction(vector) w = Coefficient(vector) rho = Coefficient(scalar) a = rho*inner(v, grad(w)*u)*dx ffc-2019.1.0.post0/bench/NavierStokes_2D_3.ufl000066400000000000000000000016321346026536000205300ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
scalar = FiniteElement("Lagrange", triangle, 3) vector = VectorElement("Lagrange", triangle, 3) v = TestFunction(vector) u = TrialFunction(vector) w = Coefficient(vector) rho = Coefficient(scalar) a = rho*inner(v, grad(w)*u)*dx ffc-2019.1.0.post0/bench/Poisson_2D_1.ufl000066400000000000000000000015001346026536000175350ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 1) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_2D_2.ufl000066400000000000000000000015001346026536000175360ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 2) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_2D_3.ufl000066400000000000000000000015001346026536000175370ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 3) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_2D_4.ufl000066400000000000000000000015001346026536000175400ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 4) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_2D_5.ufl000066400000000000000000000015001346026536000175410ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 5) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_3D_1.ufl000066400000000000000000000015031346026536000175410ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 1) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/Poisson_3D_2.ufl000066400000000000000000000015031346026536000175420ustar00rootroot00000000000000# Copyright (C) 2004-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 2) v = TestFunction(element) u = TrialFunction(element) a = inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_2D_1.ufl000066400000000000000000000015261346026536000212260ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 1) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_2D_2.ufl000066400000000000000000000015261346026536000212270ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 2) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_2D_3.ufl000066400000000000000000000015261346026536000212300ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 3) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_2D_4.ufl000066400000000000000000000015261346026536000212310ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
element = FiniteElement("Lagrange", triangle, 4) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_2D_5.ufl000066400000000000000000000015261346026536000212320ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 5) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_3D_1.ufl000066400000000000000000000015311346026536000212230ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 1) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/WeightedPoisson_3D_2.ufl000066400000000000000000000015311346026536000212240ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", tetrahedron, 2) v = TestFunction(element) u = TrialFunction(element) c = Coefficient(element) a = c*inner(grad(v), grad(u))*dx ffc-2019.1.0.post0/bench/bench.py000066400000000000000000000040451346026536000162660ustar00rootroot00000000000000# -*- coding: utf-8 -*- """This script runs a benchmark study on the form files found in the current directory. It relies on the regression test script for timings.""" # Copyright (C) 2010 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import os import glob import sys from utils import print_table # Test options test_options = ["-r tensor", #"-r tensor -O", "-r quadrature", "-r quadrature -O", "-r uflacs"] # Get list of test cases test_cases = sorted([f.split(".")[0] for f in glob.glob("*.ufl")]) # Open logfile logfile = open("bench.log", "w") # Iterate over options os.chdir("../test/regression") table = {} for (j, test_option) in enumerate(test_options): # Run benchmark print("\nUsing options %s\n" % test_option) os.system(sys.executable + " test.py --bench %s" % test_option) # Collect results for (i, test_case) in enumerate(test_cases): output = open("output/%s.out" % test_case).read() lines = [line for line in output.split("\n") if "bench" in line] if not len(lines) == 1: raise RuntimeError("Unable to extract benchmark data for test case %s" % test_case) timing = float(lines[0].split(":")[-1]) table[(i, j)] = (test_case, test_option, timing) logfile.write("%s, %s, %g\n" % (test_case, test_option, timing)) # Close logfile logfile.close() # Print results print_table(table, "FFC bench") ffc-2019.1.0.post0/bench/plot.py000066400000000000000000000050071346026536000161640ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This script plots the results found in bench.log." # Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # First added: 2010-05-13 # Last changed: 2010-05-13 from pylab import * # Read logfile results = {} try: output = open("bench.log").read() except Exception: output = open("results/bench.log").read() for line in output.split("\n"): if "," not in line: continue test_case, test_option, timing = [w.strip() for w in line.split(",")] try: form, degree = test_case.split("_") except Exception: form, dim, degree = test_case.split("_") form = form + "_" + dim if form not in results: results[form] = {} if test_option not in results[form]: results[form][test_option] = ([], []) results[form][test_option][0].append(int(degree)) results[form][test_option][1].append(float(timing)) # Plot results forms = sorted([form for form in results]) test_options = ["-r quadrature", "-r quadrature -O", "-r tensor", "-r tensor -O"] bullets = ["x-", "o-", "*-", "s-"] for (i, form) in enumerate(forms): figure(i) # Plot timings subplot(121) for (j, test_option) in enumerate(test_options): q, t = results[form][test_option] semilogy(q, t, bullets[j]) hold(True) a = list(axis()); a[-1] = 50.0*a[-1]; axis(a); legend(test_options, loc="upper left") grid(True) xlabel('degree') ylabel(form) title('CPU time') # Plot speedups subplot(122) q0, t0 = results[form]["-r quadrature"] for (j, test_option) in enumerate(test_options): q, t = results[form][test_option] t = [t0[k] / t[k] for k in range(len(t))] semilogy(q, t, bullets[j]) hold(True) a = list(axis()); a[-1] = 50.0*a[-1]; axis(a); legend(test_options, loc="upper left") grid(True) xlabel('degree') title("Speedup vs '-r quadrature'") show() ffc-2019.1.0.post0/bench/results/000077500000000000000000000000001346026536000163335ustar00rootroot00000000000000ffc-2019.1.0.post0/bench/results/bench.log000066400000000000000000000105341346026536000201200ustar00rootroot00000000000000MassH1_1, -r tensor, 4.55379e-08 MassH1_2, -r tensor, 5.8651e-08 MassH1_3, -r tensor, 1.62125e-07 MassH1_4, -r tensor, 3.8147e-07 MassH1_5, -r tensor, 5.6076e-07 MassHcurl_1, -r tensor, 9.01222e-08 MassHcurl_2, -r tensor, 2.46048e-07 MassHcurl_3, -r tensor, 7.93457e-07 MassHcurl_4, -r tensor, 5.58472e-06 MassHcurl_5, -r tensor, 1.17188e-05 MassHdiv_1, -r tensor, 1.65939e-07 MassHdiv_2, -r tensor, 5.26428e-07 MassHdiv_3, -r tensor, 3.75366e-06 MassHdiv_4, -r tensor, 8.48389e-06 MassHdiv_5, -r tensor, 1.61133e-05 NavierStokes_1, -r tensor, 5.37872e-07 NavierStokes_2, -r tensor, 2.02637e-05 NavierStokes_3, -r tensor, 0.000178711 Poisson_1, -r tensor, 8.2016e-08 Poisson_2, -r tensor, 1.19209e-07 Poisson_3, -r tensor, 2.82288e-07 Poisson_4, -r tensor, 6.48499e-07 Poisson_5, -r tensor, 3.72314e-06 WeightedPoisson_1, -r tensor, 1.23978e-07 WeightedPoisson_2, -r tensor, 5.45502e-07 WeightedPoisson_3, -r tensor, 7.50732e-06 WeightedPoisson_4, -r tensor, 2.80762e-05 WeightedPoisson_5, -r tensor, 8.05664e-05 MassH1_1, -r tensor -O, 3.71933e-08 MassH1_2, -r tensor -O, 5.57899e-08 MassH1_3, -r tensor -O, 1.24931e-07 MassH1_4, -r tensor -O, 2.82288e-07 MassH1_5, -r tensor -O, 5.34058e-07 MassHcurl_1, -r tensor -O, 8.91685e-08 MassHcurl_2, -r tensor -O, 2.46048e-07 MassHcurl_3, -r tensor -O, 7.47681e-07 MassHcurl_4, -r tensor -O, 5.00488e-06 MassHcurl_5, -r tensor -O, 1.01929e-05 MassHdiv_1, -r tensor -O, 1.57356e-07 MassHdiv_2, -r tensor -O, 4.50134e-07 MassHdiv_3, -r tensor -O, 1.31226e-06 MassHdiv_4, -r tensor -O, 7.20215e-06 MassHdiv_5, -r tensor -O, 1.36719e-05 NavierStokes_1, -r tensor -O, 3.43323e-07 NavierStokes_2, -r tensor -O, 1.16577e-05 NavierStokes_3, -r tensor -O, 8.93555e-05 Poisson_1, -r tensor -O, 8.4877e-08 
Poisson_2, -r tensor -O, 1.13487e-07 Poisson_3, -r tensor -O, 2.32697e-07 Poisson_4, -r tensor -O, 4.80652e-07 Poisson_5, -r tensor -O, 9.38416e-07 WeightedPoisson_1, -r tensor -O, 1.13487e-07 WeightedPoisson_2, -r tensor -O, 4.27246e-07 WeightedPoisson_3, -r tensor -O, 5.31006e-06 WeightedPoisson_4, -r tensor -O, 1.96533e-05 WeightedPoisson_5, -r tensor -O, 5.51758e-05 MassH1_1, -r quadrature, 3.7384e-07 MassH1_2, -r quadrature, 2.94495e-06 MassH1_3, -r quadrature, 1.42822e-05 MassH1_4, -r quadrature, 5.07812e-05 MassH1_5, -r quadrature, 0.000148437 MassHcurl_1, -r quadrature, 8.39233e-07 MassHcurl_2, -r quadrature, 1.14136e-05 MassHcurl_3, -r quadrature, 7.8125e-05 MassHcurl_4, -r quadrature, 0.000316406 MassHcurl_5, -r quadrature, 0.000992188 MassHdiv_1, -r quadrature, 8.54492e-06 MassHdiv_2, -r quadrature, 7.51953e-05 MassHdiv_3, -r quadrature, 0.000367188 MassHdiv_4, -r quadrature, 0.00128125 MassHdiv_5, -r quadrature, 0.00378125 NavierStokes_1, -r quadrature, 4.02832e-06 NavierStokes_2, -r quadrature, 5.9082e-05 NavierStokes_3, -r quadrature, 0.000355469 Poisson_1, -r quadrature, 2.07901e-07 Poisson_2, -r quadrature, 3.38745e-06 Poisson_3, -r quadrature, 2.03857e-05 Poisson_4, -r quadrature, 7.91016e-05 Poisson_5, -r quadrature, 0.000255859 WeightedPoisson_1, -r quadrature, 2.57492e-07 WeightedPoisson_2, -r quadrature, 8.11768e-06 WeightedPoisson_3, -r quadrature, 4.05273e-05 WeightedPoisson_4, -r quadrature, 0.000183594 WeightedPoisson_5, -r quadrature, 0.000535156 MassH1_1, -r quadrature -O, 3.64304e-07 MassH1_2, -r quadrature -O, 3.17383e-06 MassH1_3, -r quadrature -O, 1.3916e-05 MassH1_4, -r quadrature -O, 4.8584e-05 MassH1_5, -r quadrature -O, 0.000136719 MassHcurl_1, -r quadrature -O, 6.79016e-07 MassHcurl_2, -r quadrature -O, 8.42285e-06 MassHcurl_3, -r quadrature -O, 6.00586e-05 MassHcurl_4, -r quadrature -O, 0.000248047 MassHcurl_5, -r quadrature -O, 0.000777344 MassHdiv_1, -r quadrature -O, 2.62451e-06 MassHdiv_2, -r quadrature -O, 2.28271e-05 MassHdiv_3, -r quadrature -O, 0.000111328 MassHdiv_4, -r quadrature -O, 0.00040625 MassHdiv_5, -r quadrature -O, 0.00122656 NavierStokes_1, -r quadrature -O, 1.60217e-06 NavierStokes_2, -r quadrature -O, 2.19727e-05 NavierStokes_3, -r quadrature -O, 0.000132813 Poisson_1, -r quadrature -O, 2.02179e-07 Poisson_2, -r quadrature -O, 3.479e-06 Poisson_3, -r quadrature -O, 2.49023e-05 Poisson_4, -r quadrature -O, 0.000107422 Poisson_5, -r quadrature -O, 0.000349609 WeightedPoisson_1, -r quadrature -O, 2.26974e-07 WeightedPoisson_2, -r quadrature -O, 7.93457e-06 WeightedPoisson_3, -r quadrature -O, 4.41895e-05 WeightedPoisson_4, -r quadrature -O, 0.000224609 WeightedPoisson_5, -r quadrature -O, 0.000703125 ffc-2019.1.0.post0/bench/results/results.log000066400000000000000000000115111346026536000205360ustar00rootroot00000000000000Linux aule 2.6.32-21-generic #32-Ubuntu SMP Fri Apr 16 08:09:38 UTC 2010 x86_64 GNU/Linux Thu May 13 21:39:15 CEST 2010 ---------------------------------------------------------------------------------- | FFC bench | -r tensor | -r tensor -O | -r quadrature | -r quadrature -O | ---------------------------------------------------------------------------------- | MassH1_1 | 4.5538e-08 | 3.7193e-08 | 3.7384e-07 | 3.643e-07 | ---------------------------------------------------------------------------------- | MassH1_2 | 5.8651e-08 | 5.579e-08 | 2.9449e-06 | 3.1738e-06 | ---------------------------------------------------------------------------------- | MassH1_3 | 1.6212e-07 | 1.2493e-07 | 1.4282e-05 | 1.3916e-05 | 
---------------------------------------------------------------------------------- | MassH1_4 | 3.8147e-07 | 2.8229e-07 | 5.0781e-05 | 4.8584e-05 | ---------------------------------------------------------------------------------- | MassH1_5 | 5.6076e-07 | 5.3406e-07 | 0.00014844 | 0.00013672 | ---------------------------------------------------------------------------------- | MassHcurl_1 | 9.0122e-08 | 8.9169e-08 | 8.3923e-07 | 6.7902e-07 | ---------------------------------------------------------------------------------- | MassHcurl_2 | 2.4605e-07 | 2.4605e-07 | 1.1414e-05 | 8.4229e-06 | ---------------------------------------------------------------------------------- | MassHcurl_3 | 7.9346e-07 | 7.4768e-07 | 7.8125e-05 | 6.0059e-05 | ---------------------------------------------------------------------------------- | MassHcurl_4 | 5.5847e-06 | 5.0049e-06 | 0.00031641 | 0.00024805 | ---------------------------------------------------------------------------------- | MassHcurl_5 | 1.1719e-05 | 1.0193e-05 | 0.00099219 | 0.00077734 | ---------------------------------------------------------------------------------- | MassHdiv_1 | 1.6594e-07 | 1.5736e-07 | 8.5449e-06 | 2.6245e-06 | ---------------------------------------------------------------------------------- | MassHdiv_2 | 5.2643e-07 | 4.5013e-07 | 7.5195e-05 | 2.2827e-05 | ---------------------------------------------------------------------------------- | MassHdiv_3 | 3.7537e-06 | 1.3123e-06 | 0.00036719 | 0.00011133 | ---------------------------------------------------------------------------------- | MassHdiv_4 | 8.4839e-06 | 7.2021e-06 | 0.0012813 | 0.00040625 | ---------------------------------------------------------------------------------- | MassHdiv_5 | 1.6113e-05 | 1.3672e-05 | 0.0037812 | 0.0012266 | ---------------------------------------------------------------------------------- | NavierStokes_1 | 5.3787e-07 | 3.4332e-07 | 4.0283e-06 | 1.6022e-06 | ---------------------------------------------------------------------------------- | NavierStokes_2 | 2.0264e-05 | 1.1658e-05 | 5.9082e-05 | 2.1973e-05 | ---------------------------------------------------------------------------------- | NavierStokes_3 | 0.00017871 | 8.9355e-05 | 0.00035547 | 0.00013281 | ---------------------------------------------------------------------------------- | Poisson_1 | 8.2016e-08 | 8.4877e-08 | 2.079e-07 | 2.0218e-07 | ---------------------------------------------------------------------------------- | Poisson_2 | 1.1921e-07 | 1.1349e-07 | 3.3875e-06 | 3.479e-06 | ---------------------------------------------------------------------------------- | Poisson_3 | 2.8229e-07 | 2.327e-07 | 2.0386e-05 | 2.4902e-05 | ---------------------------------------------------------------------------------- | Poisson_4 | 6.485e-07 | 4.8065e-07 | 7.9102e-05 | 0.00010742 | ---------------------------------------------------------------------------------- | Poisson_5 | 3.7231e-06 | 9.3842e-07 | 0.00025586 | 0.00034961 | ---------------------------------------------------------------------------------- | WeightedPoisson_1 | 1.2398e-07 | 1.1349e-07 | 2.5749e-07 | 2.2697e-07 | ---------------------------------------------------------------------------------- | WeightedPoisson_2 | 5.455e-07 | 4.2725e-07 | 8.1177e-06 | 7.9346e-06 | ---------------------------------------------------------------------------------- | WeightedPoisson_3 | 7.5073e-06 | 5.3101e-06 | 4.0527e-05 | 4.4189e-05 | ---------------------------------------------------------------------------------- 
| WeightedPoisson_4 | 2.8076e-05 | 1.9653e-05 | 0.00018359 | 0.00022461 | ---------------------------------------------------------------------------------- ffc-2019.1.0.post0/bench/utils.py000066400000000000000000000035011346026536000163430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2010-05-11 # Last changed: 2010-05-11 def print_table(values, title): "Print nicely formatted table." m = max([key[0] for key in values]) + 2 n = max([key[1] for key in values]) + 2 table = [] for i in range(m): table.append(["" for j in range(n)]) for i in range(m - 1): table[i + 1][0] = str(values[(i, 0)][0]) for j in range(n - 1): table[0][j + 1] = str(values[(0, j)][1]) for i in range(m - 1): for j in range(n - 1): value = values[(i, j)][2] if isinstance(value, float): value = "%.5g" % value table[i + 1][j + 1] = value table[0][0] = title column_sizes = [max([len(table[i][j]) for i in range(m)]) for j in range(n)] row_size = sum(column_sizes) + 3*(len(column_sizes) - 1) + 2 print("") for i in range(m): print(" " + "-"*row_size) print("|", end="") for j in range(n): print(table[i][j] + " "*(column_sizes[j] - len(table[i][j])), end="") print("|", end="") print("") print(" " + "-"*row_size) print("") ffc-2019.1.0.post0/demo/000077500000000000000000000000001346026536000144775ustar00rootroot00000000000000ffc-2019.1.0.post0/demo/AdaptivePoisson.ufl000066400000000000000000000017121346026536000203200ustar00rootroot00000000000000# Copyright (C) 2010 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("Lagrange", triangle, 1) element2 = FiniteElement("Lagrange", triangle, 3) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element2) g = Coefficient(element) a = inner(grad(u), grad(v))*dx() L = f*v*dx() + g*v*ds() M = v*dx() ffc-2019.1.0.post0/demo/AlgebraOperators.ufl000066400000000000000000000025321346026536000204450ustar00rootroot00000000000000# Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Test all algebra operators on Coefficients. # # Compile this form with FFC: ffc AlgebraOperators.ufl element = FiniteElement("Lagrange", triangle, 1) c0 = Coefficient(element) c1 = Coefficient(element) s0 = 3*c0 - c1 p0 = c0*c1 f0 = c0/c1 integrand = 5*c0 + 5*p0 + 5*f0\ + s0*c0 + s0*p0 + s0*f0\ + 5/c0 + 5/p0 + 5/f0\ + s0/c0 + s0/p0 + s0/f0\ + s0/5 + s0/5 + s0/5\ + c0**2 + s0**2 + p0**2 + f0**2\ + c1**2.2 + s0**2.2 + p0**2.2 + f0**2.2\ + c0**c1 + s0**c0 + p0**c0 + f0**c0\ + c0**s0 + s0**p0 + p0**f0 + f0**p0\ + abs(c0) + abs(s0) + abs(p0) + abs(f0) a = integrand*dx ffc-2019.1.0.post0/demo/Biharmonic.ufl000066400000000000000000000030751346026536000172670ustar00rootroot00000000000000# Copyright (C) 2009 Kristian B. Oelgaard, Garth N. Wells and Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-06-26 # Last changed: 2011-03-08 # # The bilinear form a(u, v) and linear form L(v) for # Biharmonic equation in a discontinuous Galerkin (DG) # formulation. # # Compile this form with FFC: ffc -l dolfin Biharmonic.ufl # Elements element = FiniteElement("Lagrange", triangle, 2) # Trial and test functions u = TrialFunction(element) v = TestFunction(element) # Facet normal, mesh size and right-hand side n = FacetNormal(triangle) h = 2.0*Circumradius(triangle) f = Coefficient(element) # Compute average of mesh size h_avg = (h('+') + h('-'))/2.0 # Parameters alpha = Constant(triangle) # Bilinear form a = inner(div(grad(u)), div(grad(v)))*dx \ - inner(jump(grad(u), n), avg(div(grad(v))))*dS \ - inner(avg(div(grad(u))), jump(grad(v), n))*dS \ + alpha/h_avg*inner(jump(grad(u),n), jump(grad(v),n))*dS # Linear form L = f*v*dx ffc-2019.1.0.post0/demo/BiharmonicHHJ.ufl000066400000000000000000000025371346026536000176230ustar00rootroot00000000000000# Copyright (C) 2016 Lizao Li # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Biharmonic equation in Hellan-Herrmann-Johnson (HHJ) # formulation. 
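#
# Spelled out in conventional notation (sigma_nn is shorthand introduced
# here as a reading aid; it is not a name used in the code), the mixed
# problem below seeks the moment tensor sigma and the deflection u such that
#
#   (sigma, tau) - b(tau, u) + b(sigma, v) = (f, v)   for all (tau, v),
#
# where, as defined in the code that follows,
#
#   b(sigma, v) = (sigma, grad(grad(v)))
#                 - <sigma_nn, jump(grad(v), n)>  over interior facets
#                 - <sigma_nn, dot(grad(v), n)>   over exterior facets,
#
# with sigma_nn = dot(dot(sigma, n), n) the normal-normal component that
# the HHJ element keeps continuous across facets.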
# # Compile this form with FFC: ffc -l dolfin BiharmonicHHJ.ufl HHJ = FiniteElement('HHJ', triangle, 2) CG = FiniteElement('CG', triangle, 3) mixed_element = HHJ * CG (sigma, u) = TrialFunctions(mixed_element) (tau, v) = TestFunctions(mixed_element) f = Coefficient(CG) def b(sigma, v): n = FacetNormal(triangle) return inner(sigma, grad(grad(v))) * dx \ - dot(dot(sigma('+'), n('+')), n('+')) * jump(grad(v), n) * dS \ - dot(dot(sigma, n), n) * dot(grad(v), n) * ds a = inner(sigma, tau) * dx - b(tau, u) + b(sigma, v) L = f * v * dx ffc-2019.1.0.post0/demo/BiharmonicRegge.ufl000066400000000000000000000026141346026536000202370ustar00rootroot00000000000000# Copyright (C) 2016 Lizao Li # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Biharmonic equation in Regge formulation. # # Compile this form with FFC: ffc -l dolfin BiharmonicRegge.ufl REG = FiniteElement('Regge', tetrahedron, 1) CG = FiniteElement('Lagrange', tetrahedron, 2) mixed_element = REG * CG (sigma, u) = TrialFunctions(mixed_element) (tau, v) = TestFunctions(mixed_element) f = Coefficient(CG) def S(mu): return mu - Identity(3) * tr(mu) def b(mu, v): n = FacetNormal(tetrahedron) return inner(S(mu), grad(grad(v))) * dx \ - dot(dot(S(mu('+')), n('+')), n('+')) * jump(grad(v), n) * dS \ - dot(dot(S(mu), n), n) * dot(grad(v), n) * ds a = inner(S(sigma), S(tau)) * dx - b(tau, u) + b(sigma, v) L = f * v * dx ffc-2019.1.0.post0/demo/CellGeometry.ufl000066400000000000000000000022671346026536000176110ustar00rootroot00000000000000# Copyright (C) 2013 Martin S. Alnaes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # A functional M involving a bunch of cell geometry quantities in ufl. # # Compile this form with FFC: ffc CellGeometry.ufl cell = tetrahedron V = FiniteElement("CG", cell, 1) u = Coefficient(V) # TODO: Add all geometry for all cell types to this and other demo files, need for regression test. x = SpatialCoordinate(cell) n = FacetNormal(cell) vol = CellVolume(cell) rad = Circumradius(cell) area = FacetArea(cell) M = u*(x[0]*vol*rad)*dx + u*(x[0]*vol*rad*area)*ds # + u*area*avg(n[0]*x[0]*vol*rad)*dS ffc-2019.1.0.post0/demo/CoefficientOperators.ufl000066400000000000000000000020061346026536000213220ustar00rootroot00000000000000# Copyright (C) 2007 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Test form for operators on Coefficients. # # Compile this form with FFC: ffc CoefficientOperators.ufl element = FiniteElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) g = Coefficient(element) a = sqrt(1/abs(1/f))*sqrt(g)*inner(grad(u), grad(v))*dx + sqrt(f*g)*g*u*v*dx ffc-2019.1.0.post0/demo/Components.ufl000066400000000000000000000020001346026536000173240ustar00rootroot00000000000000# Copyright (C) 2011 Garth N. Wells # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # This example demonstrates how to create vectors component-wise # # Compile this form with FFC: ffc Component.ufl element = VectorElement("Lagrange", tetrahedron, 1) v = TestFunction(element) f = Coefficient(element) # Create vector v0 = as_vector([v[0], v[1], 0.0]) # Use created vector in linear form L = dot(f, v0)*dx ffc-2019.1.0.post0/demo/Conditional.ufl000066400000000000000000000025031346026536000174520ustar00rootroot00000000000000# Copyright (C) 2010-2011 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # Illustration on how to use Conditional to define a source term # # Compile this form with FFC: ffc Conditional.ufl element = FiniteElement("Lagrange", triangle, 2) v = TestFunction(element) g = Constant(triangle) x = SpatialCoordinate(triangle) c0 = conditional(le( (x[0]-0.33)**2 + (x[1]-0.67)**2, 0.015), -1.0, 5.0) c = conditional( le( (x[0]-0.33)**2 + (x[1]-0.67)**2, 0.025), c0, 0.0 ) t0 = And(ge( x[0], 0.55), le(x[0], 0.95)) t1 = Or( lt( x[1], 0.05), gt(x[1], 0.45)) t2 = And(t0, Not(t1)) t = conditional(And(ge( x[1] - x[0] - 0.05 + 0.55, 0.0), t2), -1.0, 0.0) k = conditional(gt(1,0),g,g+1) f = c + t + k L = v*f*dx ffc-2019.1.0.post0/demo/Constant.ufl000066400000000000000000000020041346026536000167740ustar00rootroot00000000000000# Copyright (C) 2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Test form for scalar and vector constants. # # Compile this form with FFC: ffc Constant.ufl element = FiniteElement("Lagrange", triangle, 1) v = TestFunction(element) u = TrialFunction(element) f = Coefficient(element) c = Constant(triangle) d = VectorConstant(triangle) a = c*inner(grad(u), grad(v))*dx L = inner(d, grad(v))*dx ffc-2019.1.0.post0/demo/CustomIntegral.ufl000066400000000000000000000033021346026536000201450ustar00rootroot00000000000000# Copyright (C) 2014 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2014-03-04 # Last changed: 2014-06-03 # # This demo illustrates the use of custom integrals. 
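#
# A note on the measures used below: dc(i) is the custom measure, for which
# the quadrature points and weights are expected to be supplied by the
# assembler rather than generated by FFC; the "num_cells" metadata records
# on how many cells each quadrature point is evaluated -- one for the bulk
# part dc(0), two for the interface dc(1) and overlap dc(2) parts. Read this
# way, the bilinear form is a Nitsche-type coupling across the interface
# (symmetric flux terms plus the alpha/h penalty on the jump of u) together
# with a gradient-jump stabilisation on the overlap region.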
# # Compile this form with FFC: ffc CustomIntegral.ufl # Define element element = FiniteElement("Lagrange", triangle, 1) # Define trial and test functions and right-hand side u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) # Define facet normal and mesh size n = FacetNormal(triangle) h = 2.0*Circumradius(triangle) h = (h('+') + h('-')) / 2 # Define custom measures (FIXME: prettify this) dc0 = dc(0, metadata={"num_cells": 1}) dc1 = dc(1, metadata={"num_cells": 2}) dc2 = dc(2, metadata={"num_cells": 2}) # Define measures for integration dx = dx + dc0 # domain integral di = dc1 # interface integral do = dc2 # overlap integral # Parameters alpha = 4.0 # Bilinear form a = dot(grad(u), grad(v))*dx \ - dot(avg(grad(u)), jump(v, n))*di \ - dot(avg(grad(v)), jump(u, n))*di \ + alpha/h*jump(u)*jump(v)*di \ + dot(jump(grad(u)), jump(grad(v)))*do # Linear form L = f*v*dx ffc-2019.1.0.post0/demo/CustomMixedIntegral.ufl000066400000000000000000000040031346026536000211330ustar00rootroot00000000000000# Copyright (C) 2014 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2014-06-10 # Last changed: 2014-06-17 # # This demo illustrates the use of custom integrals with mixed elements. # # Compile this form with FFC: ffc CustomMixedIntegral.ufl # Define element P2 = VectorElement("Lagrange", triangle, 2) P1 = FiniteElement("Lagrange", triangle, 1) TH = P2 * P1 # Define trial and test functions and right-hand side (u, p) = TrialFunctions(TH) (v, q) = TestFunctions(TH) f = Coefficient(P2) # Define facet normal and mesh size n = FacetNormal(triangle) h = 2.0*Circumradius(triangle) h = (h('+') + h('-')) / 2 # Define custom measures (FIXME: prettify this) dc0 = dc(0, metadata={"num_cells": 1}) dc1 = dc(1, metadata={"num_cells": 2}) dc2 = dc(2, metadata={"num_cells": 2}) # Define measures for integration dx = dx + dc0 # domain integral di = dc1 # interface integral do = dc2 # overlap integral # Parameters alpha = 4.0 def tensor_jump(v, n): return outer(v('+'), n('+')) + outer(v('-'), n('-')) def a_h(v, w): return inner(grad(v), grad(w))*dx \ - inner(avg(grad(v)), tensor_jump(w, n))*di \ - inner(avg(grad(w)), tensor_jump(v, n))*di def b_h(v, q): return -div(v)*q*dx + jump(v, n)*avg(q)*di def s_h(v, w): return inner(jump(grad(v)), jump(grad(w)))*do # Bilinear form a = a_h(u, v) + b_h(v, p) + b_h(u, q) + s_h(u, v) # Linear form L = dot(f, v)*dx ffc-2019.1.0.post0/demo/CustomVectorIntegral.ufl000066400000000000000000000035201346026536000213320ustar00rootroot00000000000000# Copyright (C) 2014 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2014-03-04 # Last changed: 2014-06-10 # # This demo illustrates the use of custom integrals with vector elements. # # Compile this form with FFC: ffc CustomVectorIntegral.ufl # Define element element = VectorElement("Lagrange", triangle, 1) # Define trial and test functions and right-hand side u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) # Define facet normal and mesh size n = FacetNormal(triangle) h = 2.0*Circumradius(triangle) h = (h('+') + h('-')) / 2 # Define custom measures (FIXME: prettify this) dc0 = dc(0, metadata={"num_cells": 1}) dc1 = dc(1, metadata={"num_cells": 2}) dc2 = dc(2, metadata={"num_cells": 2}) # Define measures for integration dx = dx + dc0 # domain integral di = dc1 # interface integral do = dc2 # overlap integral # Parameters alpha = 4.0 def tensor_jump(v, n): return outer(v('+'), n('+')) + outer(v('-'), n('-')) # Bilinear form a = inner(grad(u), grad(v))*dx \ - inner(avg(grad(u)), tensor_jump(v, n))*di \ - inner(avg(grad(v)), tensor_jump(u, n))*di \ + alpha/h*dot(jump(u), jump(v))*di \ + inner(jump(grad(u)), jump(grad(v)))*do # Linear form L = dot(f, v)*dx ffc-2019.1.0.post0/demo/Elasticity.ufl000066400000000000000000000020771346026536000173270ustar00rootroot00000000000000# Copyright (C) 2005 Johan Jansson # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Anders Logg 2005-2007 # Modified by Marie E. Rognes 2012 # # The bilinear form e(v) : e(u) for linear # elasticity with e(v) = 1/2 (grad(v) + grad(v)^T) # # Compile this form with FFC: ffc Elasticity.ufl element = VectorElement("Lagrange", tetrahedron, 1) u = TrialFunction(element) v = TestFunction(element) def eps(v): return sym(grad(v)) a = inner(eps(u), eps(v))*dx ffc-2019.1.0.post0/demo/EnergyNorm.ufl000066400000000000000000000017361346026536000173030ustar00rootroot00000000000000# Copyright (C) 2005-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # This example demonstrates how to define a functional, here # the energy norm (squared) for a reaction-diffusion problem. # # Compile this form with FFC: ffc EnergyNorm.ufl element = FiniteElement("Lagrange", tetrahedron, 1) v = Coefficient(element) a = (v*v + inner(grad(v), grad(v)))*dx ffc-2019.1.0.post0/demo/Equation.ufl000066400000000000000000000031361346026536000167770ustar00rootroot00000000000000# Copyright (C) 2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Specification of a system F(u, v) = 0 and extraction of # the bilinear and linear forms a and L for the left- and # right-hand sides: # # F(u, v) = a(u, v) - L(v) = 0 # # The example below demonstrates the specification of the # linear system for a cG(1)/Crank-Nicholson time step for # the heat equation. # # The below formulation is equivalent to writing # # a = u*v*dx + 0.5*k*inner(grad(u), grad(v))*dx # L = u0*v*dx - 0.5*k*inner(grad(u0), grad(v))*dx # # but instead of manually shuffling terms not including # the unknown u to the right-hand side, all terms may # be listed on one line and left- and right-hand sides # extracted by lhs() and rhs(). # # Compile this form with FFC: ffc Equation.ufl element = FiniteElement("Lagrange", triangle, 1) k = 0.1 u = TrialFunction(element) v = TestFunction(element) u0 = Coefficient(element) eq = (u - u0)*v*dx + k*inner(grad(0.5*(u0 + u)), grad(v))*dx a = lhs(eq) L = rhs(eq) ffc-2019.1.0.post0/demo/FacetIntegrals.ufl000066400000000000000000000021651346026536000201060ustar00rootroot00000000000000# Copyright (C) 2009-2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-03-20 # Last changed: 2011-03-08 # # Simple example of a form defined over exterior and interior facets. # # Compile this form with FFC: ffc FacetIntegrals.ufl element = FiniteElement("Discontinuous Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) n = FacetNormal(triangle) a = u*v*ds \ + u('+')*v('-')*dS \ + inner(jump(u, n), avg(grad(v)))*dS \ + inner(avg(grad(u)), jump(v, n))*dS ffc-2019.1.0.post0/demo/FacetRestrictionAD.ufl000066400000000000000000000017341346026536000206710ustar00rootroot00000000000000# Copyright (C) 2010 Garth N. Wells # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2010-06-07 # Last changed: 2011-07-01 # element = FiniteElement("Discontinuous Lagrange", triangle, 1) v = TestFunction(element) w = Coefficient(element) L = inner(grad(w), grad(v))*dx - dot(avg(grad(w)), avg(grad(v)))*dS u = TrialFunction(element) a = derivative(L, w, u) ffc-2019.1.0.post0/demo/Heat.ufl000066400000000000000000000023401346026536000160670ustar00rootroot00000000000000# Copyright (C) 2005-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(v, u1) and linear form L(v) for # one backward Euler step with the heat equation. # # Compile this form with FFC: ffc Heat.ufl element = FiniteElement("Lagrange", triangle, 1) u1 = TrialFunction(element) # Value at t_n u0 = Coefficient(element) # Value at t_n-1 v = TestFunction(element) # Test function c = Coefficient(element) # Heat conductivity f = Coefficient(element) # Heat source k = Constant(triangle) # Time step a = u1*v*dx + k*c*inner(grad(u1), grad(v))*dx L = u0*v*dx + k*f*v*dx ffc-2019.1.0.post0/demo/HyperElasticity.ufl000066400000000000000000000040231346026536000203300ustar00rootroot00000000000000# Copyright (C) 2009 Harish Narayanan # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-09-29 # Last changed: 2011-07-01 # # The bilinear form a(u, v) and linear form L(v) for # a hyperelastic model. (Copied from dolfin/demo/pde/hyperelasticity/cpp) # # Compile this form with FFC: ffc HyperElasticity.ufl. 
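#
# The material law defined below is the St. Venant-Kirchhoff model: with the
# Green-Lagrange strain E = (C - I)/2, the stored energy is
#
#   psi(E) = (lmbda/2)*tr(E)**2 + mu*tr(E*E),
#
# so that S = diff(psi, E) is the second Piola-Kirchhoff stress, P = F*S the
# first Piola-Kirchhoff stress, L the residual of the equilibrium equation,
# and a = derivative(L, u, du) its Gateaux derivative (the tangent bilinear
# form).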
# Coefficient spaces element = VectorElement("Lagrange", tetrahedron, 1) # Coefficients v = TestFunction(element) # Test function du = TrialFunction(element) # Incremental displacement u = Coefficient(element) # Displacement from previous iteration B = Coefficient(element) # Body force per unit mass T = Coefficient(element) # Traction force on the boundary # Kinematics d = len(u) I = Identity(d) # Identity tensor F = I + grad(u) # Deformation gradient C = F.T*F # Right Cauchy-Green tensor E = (C - I)/2 # Green-Lagrange strain tensor E = variable(E) # Material constants mu = Constant(tetrahedron) # Lame's constants lmbda = Constant(tetrahedron) # Strain energy function (material model) psi = lmbda/2*(tr(E)**2) + mu*tr(E*E) S = diff(psi, E) # Second Piola-Kirchhoff stress tensor P = F*S # First Piola-Kirchhoff stress tensor # The variational problem corresponding to hyperelasticity L = inner(P, grad(v))*dx - inner(B, v)*dx - inner(T, v)*ds a = derivative(L, u, du) ffc-2019.1.0.post0/demo/Mass.ufl000066400000000000000000000016051346026536000161140ustar00rootroot00000000000000# Copyright (C) 2004-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see <http://www.gnu.org/licenses/>. # # The bilinear form for a mass matrix. # # Compile this form with FFC: ffc Mass.ufl element = FiniteElement("Lagrange", tetrahedron, 3) v = TestFunction(element) u = TrialFunction(element) a = u*v*dx ffc-2019.1.0.post0/demo/MathFunctions.ufl000066400000000000000000000034301346026536000177710ustar00rootroot00000000000000# Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see <http://www.gnu.org/licenses/>. # # Test all math functions on Coefficients.
# # Compile this form with FFC: ffc MathFunctions.ufl element = FiniteElement("Lagrange", triangle, 1) c0 = Coefficient(element) c1 = Coefficient(element) s0 = 3*c0 - c1 p0 = c0*c1 f0 = c0/c1 integrand = sqrt(c0) + sqrt(s0) + sqrt(p0) + sqrt(f0)\ + exp(c0) + exp(s0) + exp(p0) + exp(f0)\ + ln(c0) + ln(s0) + ln(p0) + ln(f0)\ + cos(c0) + cos(s0) + cos(p0) + cos(f0)\ + sin(c0) + sin(s0) + sin(p0) + sin(f0)\ + tan(c0) + tan(s0) + tan(p0) + tan(f0)\ + acos(c0) + acos(s0) + acos(p0) + acos(f0)\ + asin(c0) + asin(s0) + asin(p0) + asin(f0)\ + atan(c0) + atan(s0) + atan(p0) + atan(f0)\ + erf(c0) + erf(s0) + erf(p0) + erf(f0)\ + bessel_I(1, c0) + bessel_I(1, s0) + bessel_I(0, p0) + bessel_I(0, f0)\ + bessel_J(1, c0) + bessel_J(1, s0) + bessel_J(0, p0) + bessel_J(0, f0)\ + bessel_K(1, c0) + bessel_K(1, s0) + bessel_K(0, p0) + bessel_K(0, f0)\ + bessel_Y(1, c0) + bessel_Y(1, s0) + bessel_Y(0, p0) + bessel_Y(0, f0) a = integrand*dx ffc-2019.1.0.post0/demo/MetaData.ufl000066400000000000000000000030451346026536000166710ustar00rootroot00000000000000# Copyright (C) 2009 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Test form for metadata # # Compile this form with FFC: ffc MetaData.ufl element = FiniteElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) # Three terms on the same subdomain passing representation differently # but consistently, should end up with quadrature for all three integrals # (mixing 'quadrature' and 'tensor' on same subdomain is not allowed) a_0 = u*v*dx(0, {"representation":"quadrature"})\ + inner(grad(u), grad(v))*dx(0, {"representation": "auto"})\ + inner(grad(u), grad(v))*dx(0) # Three terms on the same subdomain using different representations and order a_1 = inner(grad(u), grad(v))*dx(0, {"representation":"auto"}, degree=8)\ + inner(grad(u), grad(v))*dx(1, {"representation":"quadrature"}, degree=4)\ + inner(grad(u), grad(v))*dx(1, {"representation":"auto"}, degree="auto") a = a_0 + a_1 ffc-2019.1.0.post0/demo/Mini.ufl000066400000000000000000000023751346026536000161120ustar00rootroot00000000000000# Copyright (C) 2010 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Illustration of vector sum of elements (EnrichedElement): The # bilinear form a(u, v) for the Stokes equations using a mixed # formulation involving the Mini element. 
The velocity element is # composed of a P1 element augmented by the cubic bubble function. # Compile this form with FFC: ffc Mini.ufl P1 = FiniteElement("Lagrange", triangle, 1) B = FiniteElement("Bubble", triangle, 3) V = VectorElement(P1 + B) Q = FiniteElement("CG", triangle, 1) Mini = V*Q (u, p) = TrialFunctions(Mini) (v, q) = TestFunctions(Mini) a = (inner(grad(u), grad(v)) - div(v)*p + div(u)*q)*dx ffc-2019.1.0.post0/demo/MixedCoefficient.ufl000066400000000000000000000020131346026536000204100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2016 Miklós Homolya # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Mixed coefficient # # Compile this form with FFC: ffc MixedCoefficient.ufl cell = triangle DG = VectorElement("DG", cell, 0) CG = FiniteElement("Lagrange", cell, 2) RT = FiniteElement("RT", cell, 3) element = MixedElement(DG, CG, RT) f, g, h = Coefficients(element) forms = [dot(f('+'), h('-'))*dS + g*dx] ffc-2019.1.0.post0/demo/MixedMixedElement.ufl000066400000000000000000000017201346026536000205560ustar00rootroot00000000000000# Copyright (C) 2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # A mixed element of mixed elements # # Compile this form with FFC: ffc MixedMixedElement.ufl cell = triangle DG = VectorElement("DG", cell, 0) CG = FiniteElement("Lagrange", cell, 2) RT = FiniteElement("RT", cell, 3) element = DG * CG * RT v = TestFunction(element) a = v[0]*dx ffc-2019.1.0.post0/demo/MixedPoisson.ufl000066400000000000000000000023141346026536000176300ustar00rootroot00000000000000# Copyright (C) 2006-2007 Anders Logg and Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a and linear form L for a mixed formulation of # Poisson's equation with BDM (Brezzi-Douglas-Marini) elements. 
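#
# In conventional notation the mixed problem below seeks (sigma, u) with
#
#   (sigma, tau) - (u, div(tau)) + (w, div(sigma)) = (f, w)   for all (tau, w);
#
# the first two terms enforce sigma = -grad(u) weakly (up to the boundary
# term dropped in the integration by parts), and the last two enforce
# div(sigma) = f, so that u solves -div(grad(u)) = f.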
# Compile this form with FFC: ffc MixedPoisson.ufl q = 1 BDM = FiniteElement("Brezzi-Douglas-Marini", triangle, q) DG = FiniteElement("Discontinuous Lagrange", triangle, q - 1) mixed_element = BDM * DG (sigma, u) = TrialFunctions(mixed_element) (tau, w) = TestFunctions(mixed_element) f = Coefficient(DG) a = (inner(sigma, tau) - div(tau)*u + div(sigma)*w)*dx L = f*w*dx ffc-2019.1.0.post0/demo/MixedPoissonDual.ufl000066400000000000000000000022521346026536000204370ustar00rootroot00000000000000# Copyright (C) 2014 Jan Blechta # # This file is part of FFC. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # First added: 2014-01-29 # Last changed: 2014-01-29 # # The bilinear form a(u, v) and linear form L(v) for a two-field # (mixed) formulation of Poisson's equation DRT = FiniteElement("DRT", triangle, 2) CG = FiniteElement("CG", triangle, 3) W = DRT * CG (sigma, u) = TrialFunctions(W) (tau, v) = TestFunctions(W) CG1 = FiniteElement("CG", triangle, 1) f = Coefficient(CG1) g = Coefficient(CG1) a = (dot(sigma, tau) + dot(grad(u), tau) + dot(sigma, grad(v)))*dx L = - f*v*dx - g*v*ds ffc-2019.1.0.post0/demo/NavierStokes.ufl000066400000000000000000000017761346026536000176370ustar00rootroot00000000000000# Copyright (C) 2004-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form for the nonlinear term in the # Navier-Stokes equations with fixed convective velocity. # # Compile this form with FFC: ffc NavierStokes.ufl element = VectorElement("Lagrange", tetrahedron, 1) u = TrialFunction(element) v = TestFunction(element) w = Coefficient(element) a = w[j]*Dx(u[i], j)*v[i]*dx ffc-2019.1.0.post0/demo/NeumannProblem.ufl000066400000000000000000000020731346026536000201330ustar00rootroot00000000000000# Copyright (C) 2006-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation with Neumann boundary conditions. # # Compile this form with FFC: ffc NeumannProblem.ufl element = VectorElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) g = Coefficient(element) a = inner(grad(u), grad(v))*dx L = inner(f, v)*dx + inner(g, v)*ds ffc-2019.1.0.post0/demo/NodalMini.ufl000066400000000000000000000017731346026536000170710ustar00rootroot00000000000000# Copyright (C) 2018 Jan Blechta # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # Compile this form with FFC: ffc NodalMini.ufl cell = triangle P1 = FiniteElement("P", cell, 1) B3 = FiniteElement("B", cell, 3) V = VectorElement(NodalEnrichedElement(P1, B3)) element = MixedElement(V, P1) u, p = TrialFunctions(element) v, q = TestFunctions(element) a = (inner(grad(u), grad(v)) - div(v)*p - div(u)*q)*dx ffc-2019.1.0.post0/demo/Normals.ufl000066400000000000000000000017771346026536000166360ustar00rootroot00000000000000# Copyright (C) 2009 Peter Brune # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # This example demonstrates how to use the facet normals # Merely project the normal onto a vector section # # Compile this form with FFC: ffc Normals.ufl cell = triangle element = VectorElement("Lagrange", cell, 1) n = FacetNormal(cell) v = TrialFunction(element) u = TestFunction(element) a = dot(v, u)*ds L = dot(n, u)*ds ffc-2019.1.0.post0/demo/Optimization.ufl000066400000000000000000000017511346026536000177010ustar00rootroot00000000000000# Copyright (C) 2004-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation. 
# # Compile this form with FFC: ffc -O Optimization.ufl element = FiniteElement("Lagrange", triangle, 3) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) a = inner(grad(u), grad(v))*dx L = f*v*dx ffc-2019.1.0.post0/demo/P5tet.ufl000066400000000000000000000015271346026536000162150ustar00rootroot00000000000000# Copyright (C) 2006-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # A fifth degree Lagrange finite element on a tetrahedron # # Compile this form with FFC: ffc P5tet.ufl element = FiniteElement("Lagrange", tetrahedron, 5) ffc-2019.1.0.post0/demo/P5tri.ufl000066400000000000000000000015211346026536000162110ustar00rootroot00000000000000# Copyright (C) 2006-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # A fifth degree Lagrange finite element on a triangle # # Compile this form with FFC: ffc P5tri.ufl element = FiniteElement("Lagrange", triangle, 5) ffc-2019.1.0.post0/demo/PointMeasure.ufl000066400000000000000000000022131346026536000176200ustar00rootroot00000000000000# Copyright (C) 2013 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # This demo illustrates how to use a point measure: dP # # Compile this form with FFC: ffc PointMeasure.ufl element = FiniteElement("CG", triangle, 1) V = FiniteElement("CG", triangle, 2) u = TrialFunction(element) v = TestFunction(element) g = Coefficient(element) f = Coefficient(V) a = u*v*dP + g*g*u*v*dP(1) + u*v*dx element = FiniteElement("DG", tetrahedron, 1) V = FiniteElement("DG", tetrahedron, 2) v = TestFunction(element) f = Coefficient(V) L = v*f*dP ffc-2019.1.0.post0/demo/Poisson.ufl000066400000000000000000000017411346026536000166440ustar00rootroot00000000000000# Copyright (C) 2004-2007 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation. # # Compile this form with FFC: ffc Poisson.ufl element = FiniteElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) a = inner(grad(u), grad(v))*dx L = f*v*dx ffc-2019.1.0.post0/demo/Poisson1D.ufl000066400000000000000000000017411346026536000170310ustar00rootroot00000000000000# Copyright (C) 2004-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation. # # Compile this form with FFC: ffc Poisson.ufl element = FiniteElement("Lagrange", interval, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) a = inner(grad(u), grad(v))*dx L = f*v*dx ffc-2019.1.0.post0/demo/PoissonDG.ufl000066400000000000000000000032371346026536000170610ustar00rootroot00000000000000# Copyright (C) 2006-2007 Kristian B. Oelgaard and Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2006-12-05 # Last changed: 2011-03-08 # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation in a discontinuous Galerkin (DG) # formulation. 
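#
# Aside on the facet terms used below: for a scalar function u and the facet
# normal n, the UFL operators satisfy
#
#     jump(u, n) = u('+')*n('+') + u('-')*n('-')
#     avg(w)     = (w('+') + w('-'))/2
#
# so the dS terms below are the consistency, symmetry and penalty terms of a
# symmetric interior penalty (SIPG) method, with penalty parameters alpha
# (interior facets, scaled by 1/h_avg) and gamma (boundary facets, scaled
# by 1/h).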
# # Compile this form with FFC: ffc PoissonDG.ufl # Elements element = FiniteElement("Discontinuous Lagrange", triangle, 1) # Trial and test functions u = TrialFunction(element) v = TestFunction(element) # Facet normal, mesh size and right-hand side n = FacetNormal(triangle) h = 2.0*Circumradius(triangle) f = Coefficient(element) # Compute average of mesh size h_avg = (h('+') + h('-'))/2.0 # Neumann boundary conditions gN = Coefficient(element) # Parameters alpha = 4.0 gamma = 8.0 # Bilinear form a = inner(grad(u), grad(v))*dx \ - inner(jump(u, n), avg(grad(v)))*dS \ - inner(avg(grad(u)), jump(v, n))*dS \ + alpha/h_avg*inner(jump(u, n), jump(v, n))*dS \ - inner(u*n, grad(v))*ds \ - inner(grad(u), v*n)*ds \ + gamma/h*u*v*ds # Linear form L = f*v*dx + gN*v*ds ffc-2019.1.0.post0/demo/PoissonQuad.ufl000066400000000000000000000021751346026536000174610ustar00rootroot00000000000000# Copyright (C) 2016 Jan Blechta # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation using bilinear elements on bilinear mesh geometry. # # Compile this form with FFC: ffc PoissonQuad.ufl coords = VectorElement("P", triangle, 2) mesh = Mesh(coords) dx = dx(mesh) element = FiniteElement("P", mesh.ufl_cell(), 2) space = FunctionSpace(mesh, element) u = TrialFunction(space) v = TestFunction(space) f = Coefficient(space) a = inner(grad(u), grad(v))*dx L = f*v*dx ffc-2019.1.0.post0/demo/ProjectionManifold.ufl000066400000000000000000000022501346026536000207740ustar00rootroot00000000000000# Copyright (C) 2012 Marie E. Rognes and David Ham # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # This demo illustrates use of finite element spaces defined over # simplicies embedded in higher dimensions # # Compile this form with FFC: ffc ProjectionManifold.ufl # Define interval embedded in 3D: domain = Cell("triangle", geometric_dimension=3) # Define element over this domain V = FiniteElement("RT", domain, 1) Q = FiniteElement("DG", domain, 0) element = V*Q (u, p) = TrialFunctions(element) (v, q) = TestFunctions(element) a = (inner(u, v) + div(u)*q + div(v)*p)*dx ffc-2019.1.0.post0/demo/QuadratureElement.ufl000066400000000000000000000030121346026536000206320ustar00rootroot00000000000000# Copyright (C) 2008-2016 Kristian B. Oelgaard # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The linearised bilinear form a(u, v) and linear form L(v) for # the nonlinear equation -div (1 + u) grad u = f (nonlinear Poisson) # # Compile this form with FFC: ffc QuadratureElement.ufl # Configure measure with specific quadrature rule scheme = "default" degree = 3 dx = Measure("dx") dx = dx(degree=degree, scheme=scheme) # Configure quadrature elements with compatible rule element = FiniteElement("Lagrange", triangle, 2) QE = FiniteElement("Quadrature", triangle, degree, quad_scheme=scheme) sig = VectorElement("Quadrature", triangle, degree, quad_scheme=scheme) u = TrialFunction(element) v = TestFunction(element) u0 = Coefficient(element) C = Coefficient(QE) sig0 = Coefficient(sig) f = Coefficient(element) a = C*u.dx(i)*v.dx(i)*dx + 2*u0*u0.dx(i)*u*v.dx(i)*dx L = f*v*dx - inner(sig0, grad(v))*dx ffc-2019.1.0.post0/demo/README000066400000000000000000000004461346026536000153630ustar00rootroot00000000000000To compile a form in this directory, just type ffc .ufl for example ffc Poisson.ufl To run these demos from within the source tree without needing to install FFC system-wide, update your paths according to export PATH="../scripts:$PATH" export PYTHONPATH="..:$PYTHONPATH" ffc-2019.1.0.post0/demo/ReactionDiffusion.ufl000066400000000000000000000020401346026536000206160ustar00rootroot00000000000000# Copyright (C) 2009 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for a simple # reaction-diffusion equation using simplified tuple notation. # # Compile this form with FFC: ffc ReactionDiffusion.ufl element = FiniteElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) a = (inner(grad(u), grad(v)) + u*v)*dx L = f*v*dx ffc-2019.1.0.post0/demo/RestrictedElement.ufl000066400000000000000000000027571346026536000206440ustar00rootroot00000000000000# Copyright (C) 2009 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Restriction of a finite element. # The below syntax show how one can restrict a higher order Lagrange element # to only take into account those DOFs that live on the facets. # # Compile this form with FFC: ffc RestrictedElement.ufl # Restricted element CG_R = FiniteElement("Lagrange", triangle, 4)["facet"] u_r = TrialFunction(CG_R) v_r = TestFunction(CG_R) a = avg(v_r)*avg(u_r)*dS + v_r*u_r*ds #CG = FiniteElement("Lagrange", triangle, 4) #CG_R = CG["facet"] #u_r = TrialFunction(CG_R) #v_r = TestFunction(CG_R) #a = v_r('+')*u_r('+')*dS + v_r('-')*u_r('-')*dS + v_r*u_r*ds # Mixed element #CG = FiniteElement("Lagrange", triangle, 4) #CG_R = CG["facet"] #ME = CG * CG_R #u, u_r = TrialFunctions(ME) #v, v_r = TestFunctions(ME) #a = v*u*dx + v_r('+')*u_r('+')*dS + v_r('+')*u_r('+')*dS + v_r*u_r*ds ffc-2019.1.0.post0/demo/SpatialCoordinates.ufl000066400000000000000000000022611346026536000210000ustar00rootroot00000000000000# Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # Poisson's equation where spatial coordinates are used to define the source # and boundary flux terms. # # Compile this form with FFC: ffc SpatialCoordinates.ufl element = FiniteElement("Lagrange", triangle, 2) u = TrialFunction(element) v = TestFunction(element) x = SpatialCoordinate(triangle) d_x = x[0] - 0.5 d_y = x[1] - 0.5 f = 10.0*exp(-(d_x*d_x + d_y*d_y) / 0.02) g = sin(5.0*x[0]) a = inner(grad(u), grad(v))*dx L = f*v*dx + g*v*ds ffc-2019.1.0.post0/demo/StabilisedStokes.ufl000066400000000000000000000023511346026536000204640ustar00rootroot00000000000000# Copyright (c) 2005-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and Linear form L(v) for the Stokes # equations using a mixed formulation (equal-order stabilized). 
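#
# Aside: equal-order P1-P1 velocity/pressure pairs are not inf-sup stable,
# so the form below augments the standard Stokes form with the pressure
# stabilization term delta*dot(grad(p), grad(q))*dx, where delta = beta*h*h
# and h is a user-supplied mesh-size coefficient.  The right-hand side is
# modified consistently, L = dot(f, v + delta*grad(q))*dx.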
# # Compile this form with FFC: ffc Stokes.ufl vector = VectorElement("Lagrange", triangle, 1) scalar = FiniteElement("Lagrange", triangle, 1) system = vector * scalar (u, p) = TrialFunctions(system) (v, q) = TestFunctions(system) f = Coefficient(vector) h = Coefficient(scalar) beta = 0.2 delta = beta*h*h a = (inner(grad(u), grad(v)) - div(v)*p + div(u)*q + delta*dot(grad(p), grad(q)))*dx L = dot(f, v + delta*grad(q))*dx ffc-2019.1.0.post0/demo/Stokes.ufl000066400000000000000000000021441346026536000164600ustar00rootroot00000000000000# Copyright (C) 2005-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and Linear form L(v) for the Stokes # equations using a mixed formulation (Taylor-Hood elements). # Compile this form with FFC: ffc Stokes.ufl P2 = VectorElement("Lagrange", triangle, 2) P1 = FiniteElement("Lagrange", triangle, 1) TH = P2 * P1 (u, p) = TrialFunctions(TH) (v, q) = TestFunctions(TH) f = Coefficient(P2) a = (inner(grad(u), grad(v)) - div(v)*p + div(u)*q)*dx L = inner(f, v)*dx ffc-2019.1.0.post0/demo/SubDomain.ufl000066400000000000000000000017531346026536000170760ustar00rootroot00000000000000# Copyright (C) 2008 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # This example illustrates how to define a form over a # given subdomain of a mesh, in this case a functional. # # Compile this form with FFC: ffc SubDomain.ufl element = FiniteElement("CG", tetrahedron, 1) v = TestFunction(element) u = TrialFunction(element) f = Coefficient(element) M = f*dx(2) + f*ds(5) ffc-2019.1.0.post0/demo/SubDomains.ufl000066400000000000000000000021321346026536000172510ustar00rootroot00000000000000# Copyright (C) 2008 Anders Logg and Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
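#
# Aside on the subdomain markers used below: dx(i) integrates over the cells
# marked i, ds(i) over the exterior facets marked i, and dS(i) over the
# interior facets marked i.  The markers themselves (mesh functions or
# similar) are attached at assembly time, not in this form file.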
# # This simple example illustrates how forms can be defined on # different sub domains. It is supported for all three integral # types. # # Compile this form with FFC: ffc SubDomains.ufl element = FiniteElement("CG", tetrahedron, 1) u = TrialFunction(element) v = TestFunction(element) a = u*v*dx(0) + 10.0*u*v*dx(1) + u*v*ds(0) + 2.0*u*v*ds(1) + u('+')*v('+')*dS(0) + 4.3*u('+')*v('+')*dS(1) ffc-2019.1.0.post0/demo/TensorWeightedPoisson.ufl000066400000000000000000000020421346026536000215130ustar00rootroot00000000000000# Copyright (C) 2005-2007 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) for tensor-weighted Poisson's equation. # # Compile this form with FFC: ffc TensorWeightedPoisson.ufl P1 = FiniteElement("Lagrange", triangle, 1) P0 = TensorElement("Discontinuous Lagrange", triangle, 0, (2, 2)) u = TrialFunction(P1) v = TestFunction(P1) f = Coefficient(P1) C = Coefficient(P0) a = inner(C*grad(u), grad(v))*dx ffc-2019.1.0.post0/demo/TraceElement.ufl000066400000000000000000000014341346026536000175610ustar00rootroot00000000000000# Copyright (C) 2015 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . element = FiniteElement("HDiv Trace", "triangle", 0) v = TestFunction(element) L = v*ds + avg(v)*dS ffc-2019.1.0.post0/demo/VectorLaplaceGradCurl.ufl000066400000000000000000000032531346026536000213620ustar00rootroot00000000000000# Copyright (C) 2007 Marie Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for the Hodge Laplace # problem using 0- and 1-forms. Intended to demonstrate use of N1curl # elements. 
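#
# Aside: written out, the mixed weak problem assembled by the helper below
# is: find (sigma, u) such that
#
#     <sigma, tau> - <u, grad(tau)>          = 0        for all tau
#     <grad(sigma), v> + <curl(u), curl(v)>  = <f, v>   for all v
#
# which is exactly the bilinear form a and linear form L returned by
# HodgeLaplaceGradCurl.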
# Compile this form with FFC: ffc VectorLaplaceGradCurl.ufl def HodgeLaplaceGradCurl(element, felement): """This is a formulation of the Hodge Laplacian using k=1 and n=3, i.e 0-forms and 1-forms in 3D. Appropriate elements are GRAD \times CURL = Lagrange_r \ times Ned^1_{r} Lagrange_{r+1} \ times Ned^2_{r} """ (sigma, u) = TrialFunctions(element) (tau, v) = TestFunctions(element) f = Coefficient(felement) a = (inner(sigma, tau) - inner(grad(tau), u) + inner(grad(sigma), v) + inner(curl(u), curl(v)))*dx L = inner(f, v)*dx return [a, L] shape = tetrahedron order = 1 GRAD = FiniteElement("Lagrange", shape, order) CURL = FiniteElement("N1curl", shape, order) VectorLagrange = VectorElement("Lagrange", shape, order+1) [a, L] = HodgeLaplaceGradCurl(GRAD * CURL, VectorLagrange) ffc-2019.1.0.post0/demo/VectorPoisson.ufl000066400000000000000000000017741346026536000200350ustar00rootroot00000000000000# Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # The bilinear form a(u, v) and linear form L(v) for # the vector-valued Poisson's equation. # # Compile this form with FFC: ffc VectorPoisson.ufl element = VectorElement("Lagrange", triangle, 1) u = TrialFunction(element) v = TestFunction(element) f = Coefficient(element) a = inner(grad(u), grad(v))*dx L = inner(f, v)*dx ffc-2019.1.0.post0/demo/plotelements.py000066400000000000000000000040521346026536000175650ustar00rootroot00000000000000"This program demonstrates how to plot finite elements with FFC." # Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
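#
# Aside: this script can be run directly, e.g.
#
#     python plotelements.py
#
# (with FFC installed and its plotting dependencies available), keeping
# exactly one of the "element = FiniteElement(...)" lines below uncommented;
# the chosen element is then rendered by the plot() function imported from
# ffc.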
# # First added: 2010-12-07 # Last changed: 2010-12-10 from ufl import * from ffc import * #element = FiniteElement("Argyris", triangle, 5) #element = FiniteElement("Arnold-Winther", triangle) #element = FiniteElement("Brezzi-Douglas-Marini", triangle, 3) element = FiniteElement("Brezzi-Douglas-Marini", tetrahedron, 3) #element = FiniteElement("Crouzeix-Raviart", triangle, 1) #element = FiniteElement("Crouzeix-Raviart", tetrahedron, 1) #element = FiniteElement("Discontinuous Lagrange", triangle, 3) #element = FiniteElement("Discontinuous Lagrange", tetrahedron, 3) #element = FiniteElement("Hermite", triangle) #element = FiniteElement("Hermite", tetrahedron) #element = FiniteElement("Lagrange", triangle, 3) #element = FiniteElement("Lagrange", tetrahedron, 3) #element = FiniteElement("Mardal-Tai-Winther", triangle) #element = FiniteElement("Morley", triangle) #element = FiniteElement("Nedelec 1st kind H(curl)", triangle, 3) #element = FiniteElement("Nedelec 1st kind H(curl)", tetrahedron, 3) #element = FiniteElement("Nedelec 2nd kind H(curl)", triangle, 3) #element = FiniteElement("Nedelec 2nd kind H(curl)", tetrahedron, 1) #element = FiniteElement("Raviart-Thomas", triangle, 3) #element = FiniteElement("Raviart-Thomas", tetrahedron, 3) plot(element) #plot(element, rotate=False) #plot("notation") ffc-2019.1.0.post0/doc/000077500000000000000000000000001346026536000143205ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/man/000077500000000000000000000000001346026536000150735ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/man/man1/000077500000000000000000000000001346026536000157275ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/man/man1/ffc.1.gz000066400000000000000000000056141346026536000171740ustar00rootroot00000000000000[ffc.1.gz: gzip-compressed binary manual page ffc(1); binary data omitted]ffc-2019.1.0.post0/doc/sphinx/000077500000000000000000000000001346026536000156315ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/sphinx/Makefile000066400000000000000000000157741346026536000173050ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. 
PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) source .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest coverage gettext help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " applehelp to make an Apple Help Book" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" @echo " coverage to run coverage check of the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/FEniCSFormCompilerFFC.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/FEniCSFormCompilerFFC.qhc" applehelp: $(SPHINXBUILD) -b applehelp $(ALLSPHINXOPTS) $(BUILDDIR)/applehelp @echo @echo "Build finished. The help book is in $(BUILDDIR)/applehelp." @echo "N.B. You won't be able to view it unless you put it in" \ "~/Library/Documentation/Help or install it in your application" \ "bundle." epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. 
The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." coverage: $(SPHINXBUILD) -b coverage $(ALLSPHINXOPTS) $(BUILDDIR)/coverage @echo "Testing of coverage in the sources finished, look at the " \ "results in $(BUILDDIR)/coverage/python.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." ffc-2019.1.0.post0/doc/sphinx/README000066400000000000000000000011611346026536000165100ustar00rootroot00000000000000==================== Sphinx documentation ==================== FFC is documented using Sphinx and reStructured text. The documentation is hosted at http://fenics-ffc.readthedocs.org/. The online documentation is automatically updated upon pushes to the FFC master branch. Updating the API documentation ============================== If the FFC API is changed, the script:: ./generate-apidoc must be run to update the autodoc file. The script can be run from any directory. 
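For example, a full documentation refresh after an API change is (assuming
both commands are run from this directory; as noted above, the script itself
can be run from anywhere)::

   ./generate-apidoc
   make html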
Building the documentation locally ================================== The HTML documentation can be built locally using:: make html ffc-2019.1.0.post0/doc/sphinx/requirements.txt000066400000000000000000000003141346026536000211130ustar00rootroot00000000000000-e git+https://bitbucket.org/fenics-project/dijitso.git#egg=dijitso -e git+https://bitbucket.org/fenics-project/fiat.git#egg=fiat -e git+https://bitbucket.org/fenics-project/ufl.git#egg=ufl sphinx==1.7.0 ffc-2019.1.0.post0/doc/sphinx/source/000077500000000000000000000000001346026536000171315ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/sphinx/source/conf.py000066400000000000000000000242751346026536000204420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # FEniCS Form Compiler (FFC) documentation build configuration file, created by # sphinx-quickstart on Wed Nov 4 11:52:46 2015. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys import os import shlex import pkg_resources import datetime # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. #sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. #needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.intersphinx', 'sphinx.ext.coverage', 'sphinx.ext.mathjax', 'sphinx.ext.viewcode', 'sphinx.ext.todo', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix(es) of source filenames. # You can specify multiple suffix as a list of string: # source_suffix = ['.rst', '.md'] source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = u'FEniCS Form Compiler (FFC)' this_year = datetime.date.today().year copyright = u'%s, FEniCS Project' % this_year author = u'FEniCS Project' version = pkg_resources.get_distribution("fenics-ffc").version release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # # This is also used if you do content translation via gettext catalogs. # Usually you set "language" from the command line for these cases. language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = [] # The reST default role (used for this markup: `text`) to use for all # documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. 
#add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). #add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. #modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. #keep_warnings = False # If true, `todo` and `todoList` produce output, else they produce nothing. todo_include_todos = True # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. #html_theme = 'alabaster' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. #html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. #html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. #html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. #html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_domain_indices = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. #html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. #html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = None # Language to be used for generating the HTML full-text search index. 
# Sphinx supports the following languages: # 'da', 'de', 'en', 'es', 'fi', 'fr', 'hu', 'it', 'ja' # 'nl', 'no', 'pt', 'ro', 'ru', 'sv', 'tr' #html_search_language = 'en' # A dictionary with options for the search language support, empty by default. # Now only 'ja' uses this config value #html_search_options = {'type': 'default'} # The name of a javascript file (relative to the configuration directory) that # implements a search results scorer. If empty, the default will be used. #html_search_scorer = 'scorer.js' # Output file base name for HTML help builder. htmlhelp_basename = 'FEniCSFormCompilerFFCdoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). #'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). #'pointsize': '10pt', # Additional stuff for the LaTeX preamble. #'preamble': '', # Latex figure (float) alignment #'figure_align': 'htbp', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'FEniCSFormCompilerFFC.tex', u'FEniCS Form Compiler (FFC) Documentation', u'FEniCS Project', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. #latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. #latex_use_parts = False # If true, show page references after internal links. #latex_show_pagerefs = False # If true, show URL addresses after external links. #latex_show_urls = False # Documents to append as an appendix to all manuals. #latex_appendices = [] # If false, no module index is generated. #latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ (master_doc, 'fenicsformcompilerffc', u'FEniCS Form Compiler (FFC) Documentation', [author], 1) ] # If true, show URL addresses after external links. #man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ (master_doc, 'FEniCSFormCompilerFFC', u'FEniCS Form Compiler (FFC) Documentation', author, 'FEniCSFormCompilerFFC', 'One line description of project.', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. #texinfo_appendices = [] # If false, no module index is generated. #texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. #texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. #texinfo_no_detailmenu = False # Example configuration for intersphinx: refer to the Python standard library. 
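# (The keys are base URLs of other Sphinx-documented projects and the values
#  are optional local paths to their objects.inv inventories; None means the
#  inventory is fetched from the given URL.  Further projects can be added as
#  extra key/value pairs in the same style, e.g. an illustrative, unverified
#  entry for UFL: 'https://fenics-ufl.readthedocs.io/en/latest/': None.)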
intersphinx_mapping = {'https://docs.python.org/': None} def run_apidoc(_): from sphinx.apidoc import main # Get location of Sphinx files sphinx_source_dir = os.path.abspath(os.path.dirname(__file__)) repo_dir = os.path.abspath(os.path.join(sphinx_source_dir, os.path.pardir, os.path.pardir, os.path.pardir)) apidoc_dir = os.path.join(sphinx_source_dir, "api-doc") # Include these modules modules = ['ffc'] for module in modules: # Generate .rst files ready for autodoc module_dir = os.path.join(repo_dir, module) main(["-f", # Overwrite existing files "-d", "1", # Maximum depth of submodules to show in the TOC "-o", apidoc_dir, # Directory to place all output module_dir # Module directory ] ) def setup(app): app.connect('builder-inited', run_apidoc) ffc-2019.1.0.post0/doc/sphinx/source/index.rst000066400000000000000000000017021346026536000207720ustar00rootroot00000000000000.. title:: FFC ============================= FFC: The FEniCS Form Compiler ============================= FFC is a compiler for finite element variational forms. From a high-level description of the form, it generates efficient low-level C++ code that can be used to assemble the corresponding discrete operator (tensor). In particular, a bilinear form may be assembled into a matrix and a linear form may be assembled into a vector. FFC may be used either from the command line (by invoking the ``ffc`` command) or as a Python module (``import ffc``). FFC is part of the FEniCS Project. For more information, visit http://www.fenicsproject.org Documentation ============= .. toctree:: :titlesonly: :maxdepth: 1 installation manual API reference (FFC) API reference (UFLACS) releases [FIXME: These links don't belong here, should go under API reference somehow.] * :ref:`genindex` * :ref:`modindex` ffc-2019.1.0.post0/doc/sphinx/source/installation.rst000066400000000000000000000051231346026536000223650ustar00rootroot00000000000000.. title:: Installation ============ Installation ============ FFC is normally installed as part of an installation of FEniCS. If you are using FFC as part of the FEniCS software suite, it is recommended that you follow the `installation instructions for FEniCS `__. To install FFC itself, read on below for a list of requirements and installation instructions. Requirements and dependencies ============================= FFC requires Python version 3.5 or later and depends on the following Python packages: * NumPy * six FFC also depends on the following FEniCS Python packages: * FIAT * UFL * dijitso These packages will be automatically installed as part of the installation of FFC, if not already present on your system. .. _tsfc_requirements: TSFC requirements ----------------- To use experimental ``tsfc`` representation, additional dependencies are needed: * `TSFC `_ [1]_ * `COFFEE `_ [1]_ * `FInAT `_ [1]_ and in turn their additional dependencies: * singledispatch [2]_ * networkx [2]_ * PuLP [2]_ .. 
note:: TSFC requirements are not installed in FEniCS Docker images by default yet but they can be easilly installed on demand:: docker pull quay.io/fenicsproject/stable docker run -ti --rm quay.io/fenicsproject/stable pip3 install --prefix=${FENICS_PREFIX} --no-cache-dir \ git+https://github.com/blechta/tsfc.git@2018.1.0 \ git+https://github.com/blechta/COFFEE.git@2018.1.0 \ git+https://github.com/blechta/FInAT.git@2018.1.0 \ singledispatch networkx pulp && \ sudo rm -rf /tmp/* /var/tmp/* The first two commands (or their modification, or ``fenicsproject`` helper script) are to be run on a host, while the last command, to be run in the container, actually installs all the TSFC requirements. For further reading, see `FEniCS Docker reference `_. .. [1] These are forks of the original packages tested to be compatible with FFC and updated frequently from upstream. .. [2] Pip-installable. Installation instructions ========================= To install FFC, download the source code from the `FFC Bitbucket repository `__, and run the following command: .. code-block:: console pip3 install . To install to a specific location, add the ``--prefix`` flag to the installation command: .. code-block:: console pip3 install --prefix= . ffc-2019.1.0.post0/doc/sphinx/source/manual.rst000066400000000000000000000001471346026536000211420ustar00rootroot00000000000000.. title:: User manual =========== User manual =========== .. note:: This page is work in progress. ffc-2019.1.0.post0/doc/sphinx/source/releases.rst000066400000000000000000000003771346026536000214750ustar00rootroot00000000000000Release notes ============= .. toctree:: :maxdepth: 2 releases/next releases/v2019.1.0 releases/v2018.1.0 releases/v2017.2.0 releases/v2017.1.0.post2 releases/v2017.1.0 releases/v2016.2.0 releases/v2016.1.0 releases/v1.6.0 ffc-2019.1.0.post0/doc/sphinx/source/releases/000077500000000000000000000000001346026536000207345ustar00rootroot00000000000000ffc-2019.1.0.post0/doc/sphinx/source/releases/next.rst000066400000000000000000000010021346026536000224350ustar00rootroot00000000000000=========================== Changes in the next release =========================== Summary of changes ================== .. note:: Developers should use this page to track and list changes during development. At the time of release, this page should be published (and renamed) to list the most important changes in the new release. Detailed changes ================ .. note:: At the time of release, make a verbatim copy of the ChangeLog here (and remove this note). ffc-2019.1.0.post0/doc/sphinx/source/releases/v1.6.0.rst000066400000000000000000000006021346026536000223140ustar00rootroot00000000000000======================== Changes in version 1.6.0 ======================== FFC 1.6.0 was released on 2015-07-28. - Rename and modify a number of UFC interface functions. See docstrings in ufc.h for details. - Bump required SWIG version to 3.0.3 - Disable dual basis (``tabulate_coordinates`` and ``evaluate_dofs``) for enriched elements until correct implementation is brought up ffc-2019.1.0.post0/doc/sphinx/source/releases/v2016.1.0.rst000066400000000000000000000007341346026536000225450ustar00rootroot00000000000000=========================== Changes in version 2016.1.0 =========================== FFC 2016.1.0 was released on 2016-06-23. 
- Add function get_ufc_include to get path to ufc.h - Merge UFLACS into FFC - Generalize ufc interface to non-affine parameterized coordinates - Add ufc::coordinate_mapping class - Make ufc interface depend on C++11 features requiring gcc version >= 4.8 - Add function ufc_signature() to the form compiler interface - Add function git_commit_hash() ffc-2019.1.0.post0/doc/sphinx/source/releases/v2016.2.0.rst000066400000000000000000000025401346026536000225430ustar00rootroot00000000000000=========================== Changes in version 2016.2.0 =========================== FFC 2016.2.0 was released on 2016-11-30. Summary of changes ================== - Generalize ufc interface to non-affine parameterized coordinates - Add ``ufc::coordinate_mapping`` class - Make ufc interface depend on C++11 features requiring gcc version >= 4.8 - Change the mapping ``pullback as metric`` to ``double covariant piola`` (this preserves tangential-tangential trace). - Added Hellan-Herrmann-Johnson element as supported element - Add mapping ``double contravariant piola`` (this preserves normal-normal trace). - Include comment with effective representation and integral metadata to generated ``tabulate_tensor`` code Detailed changes ================ - Jit compiler now compiles elements separately from forms to avoid duplicate work - Add parameter max_signature_length to optionally shorten signatures in the jit cache - Move uflacs module into ffc.uflacs - Remove installation of pkg-config and CMake files (UFC path and compiler flags are available from ffc module) - Add dependency on dijitso and remove dependency on instant - Add experimental Bitbucket pipelines - Tidy the repo after UFC and UFLACS merge, and general spring cleanup. This includes removal of instructions how to merge two repos, commit hash c8389032268041fe94682790cb773663bdf27286. ffc-2019.1.0.post0/doc/sphinx/source/releases/v2017.1.0.post2.rst000066400000000000000000000003261346026536000236110ustar00rootroot00000000000000=========================== Changes in version 2017.1.0 =========================== FFC 2017.1.0.post2 was released on 2017-09-12. Summary of changes ================== - Change PyPI package name to fenics-ffc. ffc-2019.1.0.post0/doc/sphinx/source/releases/v2017.1.0.rst000066400000000000000000000007521346026536000225460ustar00rootroot00000000000000=========================== Changes in version 2017.1.0 =========================== FFC 2017.1.0 was released on 2017-05-09. Summary of changes ================== - Add experimental ``tsfc`` representation; for installation see :ref:`tsfc_requirements` Detailed changes ================ - Let ffc -O parameter take an optional integer level like -O2, -O0 - Implement blockwise optimizations in uflacs code generation - Expose uflacs optimization parameters through parameter system ffc-2019.1.0.post0/doc/sphinx/source/releases/v2017.2.0.rst000066400000000000000000000016471346026536000225530ustar00rootroot00000000000000=========================== Changes in version 2017.2.0 =========================== FFC 2017.2.0 was released on 2017-12-05. Summary of changes ================== - Some fixes for ufc::eval for esoteric element combinations - Reimplement code generation for all ufc classes with new class ufc::coordinate_mapping which can map between coordinates, compute jacobians, etc. for a coordinate mapping parameterized by a specific finite element. 
- New functions in ufc::finite_element: - evaluate_reference_basis - evaluate_reference_basis_derivatives - transform_reference_basis_derivatives - tabulate_reference_dof_coordinates - New functions in ufc::dofmap: - num_global_support_dofs - num_element_support_dofs - Improved docstrings for parts of ufc.h - FFC now accepts Q and DQ finite element families defined on quadrilaterals and hexahedrons - Some fixes for ufc_geometry.h for quadrilateral and hexahedron cells ffc-2019.1.0.post0/doc/sphinx/source/releases/v2018.1.0.rst000066400000000000000000000003001346026536000225340ustar00rootroot00000000000000=========================== Changes in version 2018.1.0 =========================== FFC 2018.1.0 was released on 2018-06-14. Summary of changes ================== - Remove python2 support. ffc-2019.1.0.post0/doc/sphinx/source/releases/v2019.1.0.rst000066400000000000000000000004041346026536000225420ustar00rootroot00000000000000=========================== Changes in version 2019.1.0 =========================== Summary of changes ================== - Support ``NodalEnrichedElement`` Detailed changes ================ - Support ``NodalEnrichedElement`` see ``demo/NodalMini.ufl``. ffc-2019.1.0.post0/ffc/000077500000000000000000000000001346026536000143115ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/__init__.py000066400000000000000000000031551346026536000164260ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ FEniCS Form Compiler (FFC) -------------------------- FFC compiles finite element variational forms into C++ code. The interface consists of the following functions: compile_form - Compilation of forms compile_element - Compilation of finite elements jit - Just-In-Time compilation of forms and elements default_parameters - Default parameter values for FFC ufc_signature - Signature of UFC interface (SHA-1 hash of ufc.h) """ import pkg_resources __version__ = pkg_resources.get_distribution("fenics-ffc").version from ffc.git_commit_hash import git_commit_hash # Import compiler functions from ffc.compiler import compile_form, compile_element # Import JIT compiler from ffc.jitcompiler import jit # Import UFC config functions from ffc.backends.ufc import get_include_path, get_ufc_cxx_flags, get_ufc_signature, ufc_signature # Import default parameters from ffc.parameters import default_parameters, default_jit_parameters # Import plotting from ffc.plot import * # Duplicate list of supported elements from FIAT from FIAT import supported_elements supported_elements = sorted(supported_elements.keys()) # Append elements that we can plot from ffc.plot import element_colors supported_elements_for_plotting = list(set(supported_elements).union(set(element_colors.keys()))) supported_elements_for_plotting.sort() # Remove elements from list that we don't support or don't trust supported_elements.remove("Argyris") supported_elements.remove("Hermite") supported_elements.remove("Morley") # Import main function, entry point to script from ffc.main import main ffc-2019.1.0.post0/ffc/__main__.py000066400000000000000000000016001346026536000164000ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # Copyright (C) 2017-2017 Martin Sandve Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
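#
# Aside: besides this command line entry point, the interface listed in the
# ffc/__init__.py docstring above can be used directly from Python.  A
# minimal sketch (assuming FFC and UFL are installed; the prefix "Poisson"
# is arbitrary):
#
#     import ffc
#     from ufl import (FiniteElement, TrialFunction, TestFunction,
#                      Coefficient, inner, grad, dx, triangle)
#
#     element = FiniteElement("Lagrange", triangle, 1)
#     u, v = TrialFunction(element), TestFunction(element)
#     f = Coefficient(element)
#     a = inner(grad(u), grad(v))*dx
#     L = f*v*dx
#
#     ffc.compile_form([a, L], prefix="Poisson")  # ahead-of-time code generation
#     # ffc.jit(a)                                # or just-in-time compilation
#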
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . """This is triggered by running 'python -m ffc'.""" from ffc.main import main if __name__ == "__main__": import sys sys.exit(main()) ffc-2019.1.0.post0/ffc/analysis.py000066400000000000000000000571721346026536000165220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2007-2017 Anders Logg, Martin Alnaes, Kristian B. Oelgaard, # and others # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . """ Compiler stage 1: Analysis -------------------------- This module implements the analysis/preprocessing of variational forms, including automatic selection of elements, degrees and form representation type. """ import numpy import os import copy from itertools import chain # UFL modules from ufl.classes import Form, CellVolume, FacetArea from ufl.integral import Integral from ufl.finiteelement import MixedElement, EnrichedElement, VectorElement from ufl.algorithms import sort_elements from ufl.algorithms import compute_form_data from ufl.algorithms.analysis import extract_sub_elements from ufl import custom_integral_types # FFC modules from ffc.log import info, begin, end, warning, debug, error, ffc_assert, warning_blue from ffc.utils import all_equal # Default precision for formatting floats default_precision = numpy.finfo("double").precision + 1 # == 16 def analyze_forms(forms, parameters): """ Analyze form(s), returning form_datas - a tuple of form_data objects unique_elements - a tuple of unique elements across all forms element_numbers - a mapping to unique numbers for all elements """ return analyze_ufl_objects(forms, "form", parameters) def analyze_elements(elements, parameters): return analyze_ufl_objects(elements, "element", parameters) def analyze_coordinate_mappings(coordinate_elements, parameters): return analyze_ufl_objects(coordinate_elements, "coordinate_mapping", parameters) def analyze_ufl_objects(ufl_objects, kind, parameters): """ Analyze ufl object(s), either forms, elements, or coordinate mappings, returning: form_datas - a tuple of form_data objects unique_elements - a tuple of unique elements across all forms element_numbers - a mapping to unique numbers for all elements """ begin("Compiler stage 1: Analyzing %s(s)" % (kind,)) form_datas = () unique_elements = set() unique_coordinate_elements = set() if kind == "form": forms = ufl_objects # Analyze forms form_datas = tuple(_analyze_form(form, parameters) for form in forms) # Extract unique elements accross all forms for form_data in form_datas: unique_elements.update(form_data.unique_sub_elements) # Extract coordinate elements across all forms for form_data in form_datas: 
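            # Mesh geometry enters through each form's coordinate element(s);
            # these are collected separately from the argument/coefficient
            # elements so that coordinate mappings can later be generated.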
unique_coordinate_elements.update(form_data.coordinate_elements) elif kind == "element": elements = ufl_objects # Extract unique (sub)elements unique_elements.update(extract_sub_elements(elements)) elif kind == "coordinate_mapping": meshes = ufl_objects # Extract unique (sub)elements unique_coordinate_elements = [mesh.ufl_coordinate_element() for mesh in meshes] # Make sure coordinate elements and their subelements are included unique_elements.update(extract_sub_elements(unique_coordinate_elements)) # Sort elements unique_elements = sort_elements(unique_elements) #unique_coordinate_elements = sort_elements(unique_coordinate_elements) unique_coordinate_elements = sorted(unique_coordinate_elements, key=lambda x: repr(x)) # Check for schemes for QuadratureElements for element in unique_elements: if element.family() == "Quadrature": qs = element.quadrature_scheme() if qs is None: error("Missing quad_scheme in quadrature element.") # Compute element numbers element_numbers = _compute_element_numbers(unique_elements) end() return form_datas, unique_elements, element_numbers, unique_coordinate_elements def _compute_element_numbers(elements): "Build map from elements to element numbers." element_numbers = {} for (i, element) in enumerate(elements): element_numbers[element] = i return element_numbers def _analyze_form(form, parameters): "Analyze form, returning form data." # Check that form is not empty if form.empty(): error("Form (%s) seems to be zero: cannot compile it." % str(form)) # Hack to override representation with environment variable forced_r = os.environ.get("FFC_FORCE_REPRESENTATION") if forced_r: warning("representation: forced by $FFC_FORCE_REPRESENTATION to '%s'" % forced_r) r = forced_r else: # Check representation parameters to figure out how to # preprocess r = _extract_representation_family(form, parameters) debug("Preprocessing form using '%s' representation family." % r) # Compute form metadata if r == "uflacs": # Temporary workaround to let uflacs have a different # preprocessing pipeline than the legacy quadrature # representation. This approach imposes a limitation that, # e.g. uflacs and qudrature, representations cannot be mixed # in the same form. from ufl.classes import Jacobian form_data = compute_form_data(form, do_apply_function_pullbacks=True, do_apply_integral_scaling=True, do_apply_geometry_lowering=True, preserve_geometry_types=(Jacobian,), do_apply_restrictions=True) elif r == "tsfc": try: # TSFC provides compute_form_data wrapper using correct # kwargs from tsfc.ufl_utils import compute_form_data as tsfc_compute_form_data except ImportError: error("Failed to import tsfc.ufl_utils.compute_form_data when asked " "for tsfc representation.") form_data = tsfc_compute_form_data(form) elif r == "quadrature": # quadrature representation form_data = compute_form_data(form) else: error("Unexpected representation family '%s' for form preprocessing." % r) info("") info(str(form_data)) # Attach integral meta data _attach_integral_metadata(form_data, r, parameters) _validate_representation_choice(form_data, r) return form_data def _extract_representation_family(form, parameters): """Return 'uflacs', 'tsfc' or 'quadrature', or raise error. This takes care of (a) compatibility between representations due to differences in preprocessing, (b) choosing uflacs for higher-order geometries. NOTE: Final representation is picked later by ``_determine_representation``. 
""" # Fetch all representation choice requests from metadata representations = set(integral.metadata().get("representation", "auto") for integral in form.integrals()) # If auto is present, add parameters value (which may still be # auto) and then remove auto so there's no auto in the set if "auto" in representations: representations.add(parameters["representation"]) representations.remove("auto") # Sanity check ffc_assert(len(representations.intersection(('auto', None))) == 0, "Unexpected representation family candidates '%s'." % representations) # No representations requested, find compatible representations compatible = _find_compatible_representations(form.integrals(), []) if len(representations) == 1: r = representations.pop() if r not in compatible: error("Representation family %s is not compatible with this form (try one of %s)" % (r, sorted(compatible))) return r elif len(representations) == 0: if len(compatible) == 1: # If only one compatible, use it return compatible.pop() else: # Default to uflacs # NOTE: Need to pick the same default as in _auto_select_representation return "uflacs" else: # Don't tolerate user requests for mixing old and new # representation families in same form due to restrictions in # preprocessing assert len(representations) > 1 error("Cannot mix quadrature, uflacs, or tsfc " "representation in single form.") def _validate_representation_choice(form_data, preprocessing_representation_family): """Check that effective representations * do not mix quadrature, uflacs and tsfc, * implement higher-order geometry, * match employed preprocessing strategy. This function is final check that everything is compatible due to the mess in this file. Better safe than sorry... """ # Fetch all representations from integral metadata (this should by # now be populated with parameters or defaults instead of auto) representations = set() for ida in form_data.integral_data: representations.add(ida.metadata["representation"]) for integral in ida.integrals: representations.add(integral.metadata()["representation"]) # No integrals if len(representations) == 0: return # Require unique family; allow quadrature only with affine meshes if len(representations) != 1: error("Failed to extract unique representation family. " "Got '%s'." % representations) if _has_higher_order_geometry(form_data.preprocessed_form): ffc_assert('quadrature' not in representations, "Did not expect quadrature representation for higher-order geometry.") # Check preprocessing strategy ffc_assert(preprocessing_representation_family in representations, "Form has been preprocessed using '%s' representaion family, " "while '%s' representations have been set for integrals." 
% (preprocessing_representation_family, representations)) def _has_custom_integrals(o): if isinstance(o, Integral): return o.integral_type() in custom_integral_types elif isinstance(o, Form): return any(_has_custom_integrals(itg) for itg in o.integrals()) elif isinstance(o, (list, tuple)): return any(_has_custom_integrals(itg) for itg in o) else: raise NotImplementedError def _has_higher_order_geometry(o): if isinstance(o, Integral): P1 = VectorElement("P", o.ufl_domain().ufl_cell(), 1) return o.ufl_domain().ufl_coordinate_element() != P1 elif isinstance(o, Form): P1 = VectorElement("P", o.ufl_cell(), 1) return any(d.ufl_coordinate_element() != P1 for d in o.ufl_domains()) elif isinstance(o, (list, tuple)): return any(_has_higher_order_geometry(itg) for itg in o) else: raise NotImplementedError def _extract_common_quadrature_degree(integral_metadatas): # Check that quadrature degree is the same quadrature_degrees = [md["quadrature_degree"] for md in integral_metadatas] for d in quadrature_degrees: if not isinstance(d, int): error("Invalid non-integer quadrature degree %s" % (str(d),)) qd = max(quadrature_degrees) if not all_equal(quadrature_degrees): # FIXME: Shouldn't we raise here? # TODO: This may be loosened up without too much effort, # if the form compiler handles mixed integration degree, # something that most of the pipeline seems to be ready for. info("Quadrature degree must be equal within each sub domain, using degree %d." % qd) return qd def _autoselect_quadrature_degree(integral_metadata, integral, form_data): # Automatic selection of quadrature degree qd = integral_metadata["quadrature_degree"] pd = integral_metadata["estimated_polynomial_degree"] # Special case: handling -1 as "auto" for quadrature_degree if qd in [-1, None]: qd = "auto" # TODO: Add other options here if qd == "auto": qd = pd info("quadrature_degree: auto --> %d" % qd) if isinstance(qd, int): if qd >= 0: info("quadrature_degree: %d" % qd) else: error("Illegal negative quadrature degree %s " % (qd,)) else: error("Invalid quadrature_degree %s." % (qd,)) tdim = integral.ufl_domain().topological_dimension() _check_quadrature_degree(qd, tdim) return qd def _check_quadrature_degree(degree, top_dim): """Check that quadrature degree does not result in a unreasonable high number of integration points.""" num_points = ((degree + 1 + 1) // 2)**top_dim if num_points >= 100: warning_blue("WARNING: The number of integration points for each cell will be: %d" % num_points) warning_blue(" Consider using the option 'quadrature_degree' to reduce the number of points") def _extract_common_quadrature_rule(integral_metadatas): # Check that quadrature rule is the same (To support mixed rules # would be some work since num_points is used to identify # quadrature rules in large parts of the pipeline) quadrature_rules = [md["quadrature_rule"] for md in integral_metadatas] if all_equal(quadrature_rules): qr = quadrature_rules[0] else: qr = "canonical" # FIXME: Shouldn't we raise here? info("Quadrature rule must be equal within each sub domain, using %s rule." % qr) return qr def _autoselect_quadrature_rule(integral_metadata, integral, form_data): # Automatic selection of quadrature rule qr = integral_metadata["quadrature_rule"] if qr in ["auto", None]: # Just use default for now. 
qr = "default" info("quadrature_rule: auto --> %s" % qr) elif qr in ("default", "canonical", "vertex"): info("quadrature_rule: %s" % qr) else: info("Valid choices are 'default', 'canonical', 'vertex', and 'auto'.") error("Illegal choice of quadrature rule for integral: " + str(qr)) # Return automatically determined quadrature rule return qr def _determine_representation(integral_metadatas, ida, form_data, form_r_family, parameters): "Determine one unique representation considering all integrals together." # Extract unique representation among these single-domain # integrals (Generating code with different representations within # a single tabulate_tensor is considered not worth the effort) representations = set(md["representation"] for md in integral_metadatas if md["representation"] != "auto") optimize_values = set(md["optimize"] for md in integral_metadatas) precision_values = set(md["precision"] for md in integral_metadatas) if len(representations) > 1: error("Integral representation must be equal within each sub domain or 'auto', got %s." % (str(sorted(str(v) for v in representations)),)) if len(optimize_values) > 1: error("Integral 'optimize' metadata must be equal within each sub domain or not set, got %s." % (str(sorted(str(v) for v in optimize_values)),)) if len(precision_values) > 1: error("Integral 'precision' metadata must be equal within each sub domain or not set, got %s." % (str(sorted(str(v) for v in precision_values)),)) # The one and only non-auto representation found, or get from parameters r, = representations or (parameters["representation"],) o, = optimize_values or (parameters["optimize"],) # FIXME: Default param value is zero which is not interpreted well by tsfc! p, = precision_values or (parameters["precision"],) # If it's still auto, try to determine which representation is # best for these integrals if r == "auto": # Find representations compatible with these integrals compatible = _find_compatible_representations(ida.integrals, form_data.unique_sub_elements) # Pick the one compatible or default to uflacs if len(compatible) == 0: error("Found no representation capable of compiling this form.") elif len(compatible) == 1: r, = compatible else: # NOTE: Need to pick the same default as in # _extract_representation_family if form_r_family == "uflacs": r = "uflacs" elif form_r_family == "tsfc": r = "tsfc" elif form_r_family == "quadrature": r = "quadrature" else: error("Invalid form representation family %s." % (form_r_family,)) info("representation: auto --> %s" % r) else: info("representation: %s" % r) if p is None: p = default_precision # Hack to override representation with environment variable forced_r = os.environ.get("FFC_FORCE_REPRESENTATION") if forced_r: r = forced_r warning("representation: forced by $FFC_FORCE_REPRESENTATION to '%s'" % r) return r, o, p return r, o, p def _attach_integral_metadata(form_data, form_r_family, parameters): "Attach integral metadata" # TODO: A nicer data flow would avoid modifying the form_data at all. 
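    # Overview: this routine walks form_data.integral_data and, for each
    # subdomain, merges the FFC parameter defaults with any per-integral
    # metadata, fixes a single representation/optimize/precision triple via
    # _determine_representation, auto-selects the quadrature rule and degree,
    # and finally reconstructs each integral with the completed metadata so
    # that the caller's original integrals are left unmodified.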
# Parameter values which make sense "per integrals" or "per integral" metadata_keys = ( "representation", "optimize", # TODO: Could have finer optimize (sub)parameters here later "precision", # NOTE: We don't pass precision to quadrature, it's not # worth resolving set_float_formatting hack for # deprecated backends "quadrature_degree", "quadrature_rule", ) # Get defaults from parameters metadata_parameters = {key: parameters[key] for key in metadata_keys if key in parameters} # Iterate over integral collections quad_schemes = [] for ida in form_data.integral_data: # Iterate over integrals # Start with default values of integral metadata # (these will be either the FFC defaults, globally modified defaults, # or overrides explicitly passed by the user to e.g. assemble()) integral_metadatas = [copy.deepcopy(metadata_parameters) for integral in ida.integrals] # Update with integral specific overrides for i, integral in enumerate(ida.integrals): integral_metadatas[i].update(integral.metadata() or {}) # Determine representation, must be equal for all integrals on # same subdomain r, o, p = _determine_representation(integral_metadatas, ida, form_data, form_r_family, parameters) for i, integral in enumerate(ida.integrals): integral_metadatas[i]["representation"] = r integral_metadatas[i]["optimize"] = o integral_metadatas[i]["precision"] = p ida.metadata["representation"] = r ida.metadata["optimize"] = o ida.metadata["precision"] = p # Determine automated updates to metadata values for i, integral in enumerate(ida.integrals): qr = _autoselect_quadrature_rule(integral_metadatas[i], integral, form_data) qd = _autoselect_quadrature_degree(integral_metadatas[i], integral, form_data) integral_metadatas[i]["quadrature_rule"] = qr integral_metadatas[i]["quadrature_degree"] = qd # Extract common metadata for integral collection qr = _extract_common_quadrature_rule(integral_metadatas) qd = _extract_common_quadrature_degree(integral_metadatas) ida.metadata["quadrature_rule"] = qr ida.metadata["quadrature_degree"] = qd # Add common num_cells (I believe this must be equal but I'm # not that into this work) num_cells = set(md.get("num_cells") for md in integral_metadatas) if len(num_cells) != 1: error("Found integrals with different num_cells metadata on same subdomain: %s" % (str(list(num_cells)),)) num_cells, = num_cells ida.metadata["num_cells"] = num_cells # Reconstruct integrals to avoid modifying the input integral, # which would affect the signature computation if the integral # was used again in the user program. Modifying attributes of # form_data.integral_data is less problematic since it's # lifetime is internal to the form compiler pipeline. for i, integral in enumerate(ida.integrals): ida.integrals[i] = integral.reconstruct(metadata=integral_metadatas[i]) # Collect all quad schemes quad_schemes.extend([md["quadrature_rule"] for md in integral_metadatas]) # Validate consistency of schemes for QuadratureElements # TODO: Can loosen up this a bit, only needs to be consistent # with the integrals that the elements are used in _validate_quadrature_schemes_of_elements(quad_schemes, form_data.unique_sub_elements) def _validate_quadrature_schemes_of_elements(quad_schemes, elements): # Update scheme for QuadratureElements if quad_schemes and all_equal(quad_schemes): scheme = quad_schemes[0] else: scheme = "canonical" info("Quadrature rule must be equal within each sub domain, using %s rule." 
% scheme) for element in elements: if element.family() == "Quadrature": qs = element.quadrature_scheme() if qs != scheme: error("Quadrature element must have specified quadrature scheme (%s) equal to the integral (%s)." % (qs, scheme)) def _get_sub_elements(element): "Get sub elements." sub_elements = [element] if isinstance(element, MixedElement): for e in element.sub_elements(): sub_elements += _get_sub_elements(e) elif isinstance(element, EnrichedElement): for e in element._elements: sub_elements += _get_sub_elements(e) return sub_elements def _find_compatible_representations(integrals, elements): """Automatically select a suitable representation for integral. Note that the selection is made for each integral, not for each term. This means that terms which are grouped by UFL into the same integral (if their measures are equal) will necessarily get the same representation. """ # All representations compatible = set(("uflacs", "quadrature", "tsfc")) # Check for non-affine meshes if _has_higher_order_geometry(integrals): compatible &= set(("uflacs", "tsfc")) # Custom integrals if _has_custom_integrals(integrals): compatible &= set(("quadrature",)) # Use quadrature for vertex integrals if any(integral.integral_type() == "vertex" for integral in integrals): # TODO: Test with uflacs, I think this works fine now: compatible &= set(("quadrature", "uflacs", "tsfc")) # Get ALL sub elements, needed to check for restrictions of # EnrichedElements. sub_elements = [] for e in elements: sub_elements += _get_sub_elements(e) # Use quadrature representation if we have a quadrature element if any(e.family() == "Quadrature" for e in sub_elements): # TODO: Test with uflacs, might need a little adjustment: compatible &= set(("quadrature", "uflacs", "tsfc")) return compatible ffc-2019.1.0.post0/ffc/backends/000077500000000000000000000000001346026536000160635ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/backends/__init__.py000066400000000000000000000000001346026536000201620ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/backends/dolfin/000077500000000000000000000000001346026536000173365ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/backends/dolfin/__init__.py000066400000000000000000000000001346026536000214350ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/backends/dolfin/capsules.py000066400000000000000000000072351346026536000215360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2008-2017 Martin Sandve Alnes # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Modified by Marie E. Rognes class UFCFormNames: "Encapsulation of the names related to a generated UFC form." def __init__(self, name, coefficient_names, ufc_form_classname, ufc_finite_element_classnames, ufc_dofmap_classnames, superclassname='Form'): """Arguments: @param name: Name of form (e.g. 'a', 'L', 'M'). @param coefficient_names: List of names of form coefficients (e.g. 'f', 'g'). 
@param ufc_form_classname: Name of ufc::form subclass. @param ufc_finite_element_classnames: List of names of ufc::finite_element subclasses (length rank + num_coefficients). @param ufc_dofmap_classnames: List of names of ufc::dofmap subclasses (length rank + num_coefficients). @param superclassname (optional): Name of dolfin super class (defaults to 'Form') """ assert len(coefficient_names) <= len(ufc_dofmap_classnames) assert len(ufc_finite_element_classnames) == len(ufc_dofmap_classnames) self.num_coefficients = len(coefficient_names) self.rank = len(ufc_finite_element_classnames) - self.num_coefficients self.name = name self.coefficient_names = coefficient_names self.ufc_form_classname = ufc_form_classname self.ufc_finite_element_classnames = ufc_finite_element_classnames self.ufc_dofmap_classnames = ufc_dofmap_classnames self.superclassname = superclassname def __str__(self): s = "UFCFormNames instance:\n" s += "rank: %d\n" % self.rank s += "num_coefficients: %d\n" % self.num_coefficients s += "name: %s\n" % self.name s += "coefficient_names: %s\n" % str(self.coefficient_names) s += "ufc_form_classname: %s\n" % str(self.ufc_form_classname) s += "finite_element_classnames: %s\n" % str(self.ufc_finite_element_classnames) s += "ufc_dofmap_classnames: %s\n" % str(self.ufc_dofmap_classnames) return s class UFCElementNames: "Encapsulation of the names related to a generated UFC element." def __init__(self, name, ufc_finite_element_classnames, ufc_dofmap_classnames): """Arguments: """ assert len(ufc_finite_element_classnames) == len(ufc_dofmap_classnames) self.name = name self.ufc_finite_element_classnames = ufc_finite_element_classnames self.ufc_dofmap_classnames = ufc_dofmap_classnames def __str__(self): s = "UFCFiniteElementNames instance:\n" s += "name: %s\n" \ % self.name s += "finite_element_classnames: %s\n" \ % str(self.ufc_finite_element_classnames) s += "ufc_dofmap_classnames: %s\n" \ % str(self.ufc_dofmap_classnames) return s ffc-2019.1.0.post0/ffc/backends/dolfin/form.py000066400000000000000000000340701346026536000206570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011 Marie E. Rognes # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Based on original implementation by Martin Alnes and Anders Logg # # Modified by Anders Logg 2015 # # Last changed: 2016-03-02 from .includes import snippets from .functionspace import * from .goalfunctional import generate_update_ec __all__ = ["generate_form"] def generate_form(form, classname, error_control): """Generate dolfin wrapper code associated with a form including code for function spaces used in form and typedefs @param form: A UFCFormNames instance @param classname Name of Form class. 
""" blocks = [] # Generate code for Form_x_FunctionSpace_y subclasses wrap = apply_function_space_template blocks += [wrap("%s_FunctionSpace_%d" % (classname, i), form.ufc_finite_element_classnames[i], form.ufc_dofmap_classnames[i]) for i in range(form.rank)] # Generate code for Form_x_MultiMeshFunctionSpace_y subclasses wrap = apply_multimesh_function_space_template if not error_control: # FIXME: Issue #91 blocks += [wrap("%s_MultiMeshFunctionSpace_%d" % (classname, i), "%s_FunctionSpace_%d" % (classname, i), form.ufc_finite_element_classnames[i], form.ufc_dofmap_classnames[i]) for i in range(form.rank)] # Add typedefs CoefficientSpace_z -> Form_x_FunctionSpace_y blocks += ["typedef CoefficientSpace_%s %s_FunctionSpace_%d;\n" % (form.coefficient_names[i], classname, form.rank + i) for i in range(form.num_coefficients)] # Generate Form subclass blocks += [generate_form_class(form, classname, error_control)] # Return code return "\n".join(blocks) def generate_form_class(form, classname, error_control): "Generate dolfin wrapper code for a single Form class." # Generate constructors constructors = generate_form_constructors(form, classname) multimesh_constructors = generate_multimesh_form_constructors(form, classname) # Generate data for coefficient assignments (number, name) = generate_coefficient_map_data(form) # Generate typedefs for FunctionSpace subclasses for Coefficients typedefs = [" // Typedefs", generate_typedefs(form, classname, error_control), ""] # Member variables for coefficients members = [" dolfin::CoefficientAssigner %s;" % coefficient for coefficient in form.coefficient_names] if form.superclassname == "GoalFunctional": members += [generate_update_ec(form)] # Group typedefs and members together for inserting into template additionals = "\n".join(typedefs + [" // Coefficients"] + members) # Wrap functions in class body code = "" code += apply_form_template(classname, constructors, number, name, additionals, form.superclassname) code += "\n" if not error_control: code += apply_multimesh_form_template(classname, multimesh_constructors, number, name, additionals, form.superclassname) # Return code return code def generate_coefficient_map_data(form): """Generate data for code for the functions Form::coefficient_number and Form::coefficient_name.""" # Write error if no coefficients if form.num_coefficients == 0: message = '''\ dolfin::dolfin_error("generated code for class %s", "access coefficient data", "There are no coefficients");''' % form.superclassname num = "\n %s\n return 0;" % message name = '\n %s\n return "unnamed";' % message return (num, name) # Otherwise create switch ifstr = "if " num = "" name = ' switch (i)\n {\n' for i, coeff in enumerate(form.coefficient_names): num += ' %s(name == "%s")\n return %d;\n' % (ifstr, coeff, i) name += ' case %d:\n return "%s";\n' % (i, coeff) ifstr = 'else if ' # Create final return message = '''\ dolfin::dolfin_error("generated code for class %s", "access coefficient data", "Invalid coefficient");''' % form.superclassname num += "\n %s\n return 0;" % message name += ' }\n\n %s\n return "unnamed";' % message return (num, name) def generate_form_constructors(form, classname): """Generate the dolfin::Form constructors for different combinations of references/shared pointers etc.""" # FIXME: This can be simplied now that we don't create reference version coeffs = ("shared_ptr_coefficient",) spaces = ("shared_ptr_space",) # Treat functionals a little special if form.rank == 0: spaces = ("shared_ptr_mesh",) # Generate permutations of 
constructors constructors = [] for space in spaces: constructors += [generate_constructor(form, classname, space)] if form.num_coefficients > 0: constructors += [generate_constructor(form, classname, space, coeff) for coeff in coeffs] # Return joint constructor code return "\n\n".join(constructors) def generate_multimesh_form_constructors(form, classname): """Generate the dolfin::MultiMeshForm constructors for different combinations of references/shared pointers etc.""" # FIXME: This can be simplied now that we don't create reference version coeffs = ("shared_ptr_coefficient",) spaces = ("multimesh_shared_ptr_space",) # Treat functionals a little special if form.rank == 0: spaces = ("shared_ptr_multimesh",) # Generate permutations of constructors constructors = [] for space in spaces: constructors += [generate_multimesh_constructor(form, classname, space)] if form.num_coefficients > 0: constructors += [generate_multimesh_constructor(form, classname, space, coeff) for coeff in coeffs] # Return joint constructor code return "\n\n".join(constructors) def generate_constructor(form, classname, space_tag, coefficient_tag=None): "Generate a single Form constructor according to the given parameters." # Extract correct code snippets (argument, assign) = snippets[space_tag] # Construct list of arguments and function space assignments name = "V%d" if form.rank > 0: arguments = [argument % (name % i) for i in reversed(range(form.rank))] assignments = [assign % (i, name % i) for i in range(form.rank)] else: arguments = [argument] assignments = [assign] # Add coefficients to argument/assignment lists if specified if coefficient_tag is not None: (argument, assign) = snippets[coefficient_tag] arguments += [argument % name for name in form.coefficient_names] if form.rank > 0: # FIXME: To match old generated code only assignments += [""] assignments += [assign % (name, name) for name in form.coefficient_names] # Add assignment of _ufc_form variable line = "\n _ufc_form = std::make_shared();" # FIXME: To match old generated code only if form.rank == 0 and coefficient_tag is None: line = " _ufc_form = std::make_shared();" assignments += [line % form.ufc_form_classname] # Construct list for initialization of Coefficient references initializers = ["%s(*this, %d)" % (name, number) for (number, name) in enumerate(form.coefficient_names)] # Join lists together arguments = ", ".join(arguments) initializers = ", " + ", ".join(initializers) if initializers else "" body = "\n".join(assignments) # Wrap code into template args = {"classname": classname, "rank": form.rank, "num_coefficients": form.num_coefficients, "arguments": arguments, "initializers": initializers, "body": body, "superclass": form.superclassname } code = form_constructor_template % args return code def generate_multimesh_constructor(form, classname, space_tag, coefficient_tag=None): "Generate a single Form constructor according to the given parameters." 
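    # Note: in contrast to generate_constructor above, this emits a
    # constructor for the MultiMesh variant of the form. The generated C++
    # body loops over the parts of the (multimesh) function space or mesh,
    # creates one standard form per part, add()s it to the multimesh form,
    # calls build(), and then assigns any coefficients.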
# Extract correct code snippets (argument, dummy) = snippets[space_tag] # Construct list of arguments name = "V%d" if form.rank > 0: arguments = [argument % (name % i) for i in reversed(range(form.rank))] spaces = [name % i for i in reversed(range(form.rank))] else: arguments = [argument] spaces = "mesh" # Add coefficients to argument/assignment lists if specified assignments = [] if coefficient_tag is not None: (argument, assign) = snippets[coefficient_tag] arguments += [argument % name for name in form.coefficient_names] if form.rank > 0: # FIXME: To match old generated code only assignments += [""] assignments += [assign % (name, name) for name in form.coefficient_names] # Construct list for initialization of Coefficient references initializers = ["%s(*this, %d)" % (name, number) for (number, name) in enumerate(form.coefficient_names)] # Join lists together arguments = ", ".join(arguments) initializers = ", " + ", ".join(initializers) if initializers else "" # Ignore if functional if form.rank != 0: spaces = ", ".join(spaces) # Set access method if space_tag == "multimesh_shared_ptr_space": access = "->" else: access = "." # Create body for building multimesh function space body = "" body += " // Create and add standard forms\n" body += " std::size_t num_parts = V0%snum_parts(); // assume all equal and pick first\n" % access body += " for (std::size_t part = 0; part < num_parts; part++)\n" body += " {\n" body += " std::shared_ptr a(new %s(%s));\n" % (classname, ", ".join("V%d%spart(part)" % (i, access) for i in reversed(range(form.rank)))) body += " add(a);\n\n" body += " }\n" body += " // Build multimesh form\n" body += " build();\n" # FIXME: Issue #91 if form.rank == 0: body = "" body += " // Creating a form for each part of the mesh\n" body += " for (std::size_t i=0; i< mesh->num_parts(); i++)\n" body += " {\n" body += " std::shared_ptr a(new %s(mesh->part(i))); " % (classname) body += " add(a);" body += " }\n" body += " // Build multimesh form\n" body += " build();\n" # Create body for assigning coefficients body += "\n /// Assign coefficients" body += "\n".join(assignments) + "\n" # Wrap code into template args = {"classname": classname, "rank": form.rank, "num_coefficients": form.num_coefficients, "arguments": arguments, "spaces": spaces, "initializers": initializers, "body": body, "superclass": form.superclassname } code = multimesh_form_constructor_template % args return code form_class_template = """\ class %(classname)s: public dolfin::%(superclass)s { public: %(constructors)s // Destructor ~%(classname)s() {} /// Return the number of the coefficient with this name virtual std::size_t coefficient_number(const std::string& name) const { %(coefficient_number)s } /// Return the name of the coefficient with this number virtual std::string coefficient_name(std::size_t i) const { %(coefficient_name)s } %(members)s }; """ multimesh_form_class_template = """\ class MultiMesh%(classname)s: public dolfin::MultiMesh%(superclass)s { public: %(constructors)s // Destructor ~MultiMesh%(classname)s() {} /// Return the number of the coefficient with this name virtual std::size_t coefficient_number(const std::string& name) const { %(coefficient_number)s } /// Return the name of the coefficient with this number virtual std::string coefficient_name(std::size_t i) const { %(coefficient_name)s } %(members)s }; """ # Template code for Form constructor form_constructor_template = """\ // Constructor %(classname)s(%(arguments)s): dolfin::%(superclass)s(%(rank)d, %(num_coefficients)d)%(initializers)s { 
%(body)s }""" multimesh_form_constructor_template = """\ // Constructor MultiMesh%(classname)s(%(arguments)s): dolfin::MultiMesh%(superclass)s(%(spaces)s)%(initializers)s { %(body)s }""" def apply_form_template(classname, constructors, number, name, members, superclass): args = {"classname": classname, "superclass": superclass, "constructors": constructors, "coefficient_number": number, "coefficient_name": name, "members": members} return form_class_template % args def apply_multimesh_form_template(classname, constructors, number, name, members, superclass): members = members.replace("CoefficientAssigner", "MultiMeshCoefficientAssigner") # hack args = {"classname": classname, "superclass": superclass, "constructors": constructors, "coefficient_number": number, "coefficient_name": name, "members": members} return multimesh_form_class_template % args ffc-2019.1.0.post0/ffc/backends/dolfin/functionspace.py000066400000000000000000000120031346026536000225450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011 Marie E. Rognes # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Based on original implementation by Martin Alnes and Anders Logg # # Modified by Anders Logg 2015 # # Last changed: 2016-03-02 from .includes import snippets __all__ = ["apply_function_space_template", "apply_multimesh_function_space_template", "extract_coefficient_spaces", "generate_typedefs"] def extract_coefficient_spaces(forms): """Extract a list of tuples (classname, finite_element_classname, dofmap_classname) for the coefficient spaces in the set of given forms. 
This can then be used for input to the function space template.""" # Extract data for each coefficient space spaces = {} for form in forms: for (i, name) in enumerate(form.coefficient_names): # Skip if already considered if name in spaces: continue # Map element name, dof map name etc to this coefficient spaces[name] = ("CoefficientSpace_%s" % name, form.ufc_finite_element_classnames[form.rank + i], form.ufc_dofmap_classnames[form.rank + i]) # Return coefficient spaces sorted alphabetically by coefficient # name names = sorted(spaces.keys()) return [spaces[name] for name in names] def generate_typedefs(form, classname, error_control): """Generate typedefs for test, trial and coefficient spaces relative to a function space.""" pairs = [] # Generate typedef data for test/trial spaces pairs += [("%s_FunctionSpace_%d" % (classname, i), snippets["functionspace"][i]) for i in range(form.rank)] if not error_control: # FIXME: Issue #91 pairs += [("%s_MultiMeshFunctionSpace_%d" % (classname, i), snippets["multimeshfunctionspace"][i]) for i in range(form.rank)] # Generate typedefs for coefficient spaces pairs += [("%s_FunctionSpace_%d" % (classname, form.rank + i), "CoefficientSpace_%s" % form.coefficient_names[i]) for i in range(form.num_coefficients)] # Combine data to typedef code code = "\n".join(" typedef %s %s;" % (to, fro) for (to, fro) in pairs) return code function_space_template = """\ class %(classname)s: public dolfin::FunctionSpace { public: // Constructor for standard function space %(classname)s(std::shared_ptr mesh): dolfin::FunctionSpace(mesh, std::make_shared(std::make_shared<%(ufc_finite_element_classname)s>()), std::make_shared(std::make_shared<%(ufc_dofmap_classname)s>(), *mesh)) { // Do nothing } // Constructor for constrained function space %(classname)s(std::shared_ptr mesh, std::shared_ptr constrained_domain): dolfin::FunctionSpace(mesh, std::make_shared(std::make_shared<%(ufc_finite_element_classname)s>()), std::make_shared(std::make_shared<%(ufc_dofmap_classname)s>(), *mesh, constrained_domain)) { // Do nothing } }; """ multimesh_function_space_template = """\ class %(classname)s: public dolfin::MultiMeshFunctionSpace { public: // Constructor for multimesh function space %(classname)s(std::shared_ptr multimesh): dolfin::MultiMeshFunctionSpace(multimesh) { // Create and add standard function spaces for (std::size_t part = 0; part < multimesh->num_parts(); part++) { std::shared_ptr V(new %(single_name)s(multimesh->part(part))); add(V); } // Build multimesh function space build(); } }; """ def apply_function_space_template(name, element_name, dofmap_name): args = {"classname": name, "ufc_finite_element_classname": element_name, "ufc_dofmap_classname": dofmap_name} return function_space_template % args def apply_multimesh_function_space_template(name, single_name, element_name, dofmap_name): args = {"classname": name, "single_name": single_name, "ufc_finite_element_classname": element_name, "ufc_dofmap_classname": dofmap_name} return multimesh_function_space_template % args ffc-2019.1.0.post0/ffc/backends/dolfin/goalfunctional.py000066400000000000000000000160411346026536000227170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Marie E. Rognes # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
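# --------------------------------------------------------------------------
# Illustrative example: the wrapper generation in functionspace.py above is
# plain string substitution, with apply_function_space_template() filling the
# %()s placeholders of function_space_template. The class and UFC names below
# are hypothetical, chosen only to show the call.
#
#   from ffc.backends.dolfin.functionspace import apply_function_space_template
#
#   cpp_code = apply_function_space_template(
#       "Form_a_FunctionSpace_0",    # name of the generated wrapper class
#       "poisson_finite_element_0",  # hypothetical ufc::finite_element subclass
#       "poisson_dofmap_0")          # hypothetical ufc::dofmap subclass
#
#   # cpp_code now holds a C++ class derived from dolfin::FunctionSpace that
#   # wires the given UFC element and dofmap into DOLFIN.
# --------------------------------------------------------------------------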
# # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Last changed: 2011-07-06 __all__ = ["generate_update_ec"] attach_coefficient_template = """ // Attach coefficients from %(from)s to %(to)s for (std::size_t i = 0; i < %(from)s.num_coefficients(); i++) { name = %(from)s.coefficient_name(i); // Don't attach discrete primal solution here (not computed). if (name == "__discrete_primal_solution") continue; // Test whether %(to)s has coefficient named 'name' try { %(to)s->coefficient_number(name); } catch (...) { std::cout << "Attaching coefficient named: " << name << " to %(to)s"; std::cout << " failed! But this might be expected." << std::endl; continue; } %(to)s->set_coefficient(name, %(from)s.coefficient(i)); } """ attach_domains_template = """ // Attach subdomains from %(from)s to %(to)s %(to)s->dx = %(from)s.cell_domains(); %(to)s->ds = %(from)s.exterior_facet_domains(); %(to)s->dS = %(from)s.interior_facet_domains(); """ update_ec_template = """ /// Initialize all error control forms, attach coefficients and /// (re-)set error control virtual void update_ec(const dolfin::Form& a, const dolfin::Form& L) { // This stuff is created here and shipped elsewhere std::shared_ptr a_star; // Dual lhs std::shared_ptr L_star; // Dual rhs std::shared_ptr V_Ez_h; // Extrapolation space std::shared_ptr Ez_h; // Extrapolated dual std::shared_ptr residual; // Residual (as functional) std::shared_ptr V_R_T; // Trial space for cell residual std::shared_ptr a_R_T; // Cell residual lhs std::shared_ptr L_R_T; // Cell residual rhs std::shared_ptr V_b_T; // Function space for cell bubble std::shared_ptr b_T; // Cell bubble std::shared_ptr V_R_dT; // Trial space for facet residual std::shared_ptr a_R_dT; // Facet residual lhs std::shared_ptr L_R_dT; // Facet residual rhs std::shared_ptr V_b_e; // Function space for cell cone std::shared_ptr b_e; // Cell cone std::shared_ptr V_eta_T; // Function space for indicators std::shared_ptr eta_T; // Indicator form // Some handy views assert(a.function_space(0)); const auto Vhat = a.function_space(0); // Primal test assert(a.function_space(0)); const auto V = a.function_space(1); // Primal trial assert(V->mesh()); const auto mesh = V->mesh(); std::string name; // Initialize dual forms a_star.reset(new %(a_star)s(V, Vhat)); L_star.reset(new %(L_star)s(V)); %(attach_a_star)s %(attach_L_star)s // Initialize residual residual.reset(new %(residual)s(mesh)); %(attach_residual)s // Initialize extrapolation space and (fake) extrapolation V_Ez_h.reset(new %(V_Ez_h)s(mesh)); Ez_h.reset(new dolfin::Function(V_Ez_h)); residual->set_coefficient("__improved_dual", Ez_h); // Create bilinear and linear form for computing cell residual R_T V_R_T.reset(new %(V_R_T)s(mesh)); a_R_T.reset(new %(a_R_T)s(V_R_T, V_R_T)); L_R_T.reset(new %(L_R_T)s(V_R_T)); // Initialize bubble and attach to a_R_T and L_R_T V_b_T.reset(new %(V_b_T)s(mesh)); b_T.reset(new dolfin::Function(V_b_T)); *b_T->vector() = 1.0; %(attach_L_R_T)s // Attach bubble function to _a_R_T and _L_R_T a_R_T->set_coefficient("__cell_bubble", b_T); L_R_T->set_coefficient("__cell_bubble", b_T); // Create bilinear and linear form for computing facet residual R_dT V_R_dT.reset(new %(V_R_dT)s(mesh)); a_R_dT.reset(new %(a_R_dT)s(V_R_dT, 
V_R_dT)); L_R_dT.reset(new %(L_R_dT)s(V_R_dT)); %(attach_L_R_dT)s // Initialize (fake) cone and attach to a_R_dT and L_R_dT V_b_e.reset(new %(V_b_e)s(mesh)); b_e.reset(new dolfin::Function(V_b_e)); a_R_dT->set_coefficient("__cell_cone", b_e); L_R_dT->set_coefficient("__cell_cone", b_e); // Create error indicator form V_eta_T.reset(new %(V_eta_T)s(mesh)); eta_T.reset(new %(eta_T)s(V_eta_T)); // Update error control _ec.reset(new dolfin::ErrorControl(a_star, L_star, residual, a_R_T, L_R_T, a_R_dT, L_R_dT, eta_T, %(linear)s)); } """ def _attach(tos, froms): if not isinstance(froms, tuple): return attach_coefficient_template % {"to": tos, "from": froms} \ + attach_domains_template % {"to": tos, "from": froms} # NB: If multiple forms involved, attach domains from last form. coeffs = "\n".join([attach_coefficient_template % {"to": to, "from": fro} for (to, fro) in zip(tos, froms)]) domains = attach_domains_template % {"to": tos[-1], "from": froms[-1]} return coeffs + domains def generate_maps(linear): """ NB: This depends on the ordering of the forms """ maps = {"a_star": "Form_%d" % 0, "L_star": "Form_%d" % 1, "residual": "Form_%d" % 2, "a_R_T": "Form_%d" % 3, "L_R_T": "Form_%d" % 4, "a_R_dT": "Form_%d" % 5, "L_R_dT": "Form_%d" % 6, "eta_T": "Form_%d" % 7, "V_Ez_h": "CoefficientSpace_%s" % "__improved_dual", "V_R_T": "Form_%d::TestSpace" % 4, "V_b_T": "CoefficientSpace_%s" % "__cell_bubble", "V_R_dT": "Form_%d::TestSpace" % 6, "V_b_e": "CoefficientSpace_%s" % "__cell_cone", "V_eta_T": "Form_%d::TestSpace" % 7, "attach_a_star": _attach("a_star", "a"), "attach_L_star": _attach("L_star", "(*this)"), "attach_residual": _attach(("residual",) * 2, ("a", "L")), "attach_L_R_T": _attach(("L_R_T",) * 2, ("a", "L")), "attach_L_R_dT": _attach(("L_R_dT",) * 2, ("a", "L")), "linear": "true" if linear else "false" } return maps def generate_update_ec(form): linear = "__discrete_primal_solution" in form.coefficient_names maps = generate_maps(linear) code = update_ec_template % maps return code ffc-2019.1.0.post0/ffc/backends/dolfin/includes.py000066400000000000000000000047271346026536000215300ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Based on original implementation by Martin Alnes and Anders Logg # # Modified by Anders Logg 2015 __all__ = ["dolfin_tag", "stl_includes", "dolfin_includes", "snippets"] dolfin_tag = "// DOLFIN wrappers" stl_includes = """\ // Standard library includes #include """ dolfin_includes = """\ // DOLFIN includes #include #include #include #include #include #include #include #include #include #include #include #include #include #include #include """ snippets = {"shared_ptr_space": ("std::shared_ptr %s", " _function_spaces[%d] = %s;"), "referenced_space": ("const dolfin::FunctionSpace& %s", " _function_spaces[%d] = reference_to_no_delete_pointer(%s);"), "multimesh_shared_ptr_space": ("std::shared_ptr %s", None), "multimesh_referenced_space": ("const dolfin::MultiMeshFunctionSpace& %s", None), "shared_ptr_mesh": ("std::shared_ptr mesh", " _mesh = mesh;"), "shared_ptr_multimesh": ("std::shared_ptr mesh", " _multimesh = mesh;"), "referenced_mesh": ("const dolfin::Mesh& mesh", " _mesh = reference_to_no_delete_pointer(mesh);"), "shared_ptr_coefficient": ("std::shared_ptr %s", " this->%s = %s;"), "shared_ptr_ref_coefficient": ("std::shared_ptr %s", " this->%s = *%s;"), "referenced_coefficient": ("const dolfin::GenericFunction& %s", " this->%s = %s;"), "functionspace": ("TestSpace", "TrialSpace"), "multimeshfunctionspace": ("MultiMeshTestSpace", "MultiMeshTrialSpace") } 
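# A minimal, hedged demonstration of how the (argument, assignment) pairs in
# the snippets table above are expanded by the form wrapper generator (see
# generate_constructor in form.py). Running this module directly prints the
# expanded strings for a hypothetical function space named V0.
if __name__ == "__main__":
    argument, assign = snippets["shared_ptr_space"]
    print(argument % "V0")     # C++ argument declaration for a space named V0
    print(assign % (0, "V0"))  # statement storing V0 in _function_spaces[0]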
ffc-2019.1.0.post0/ffc/backends/dolfin/wrappers.py000066400000000000000000000141211346026536000215520ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011 Marie E. Rognes # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Based on original implementation by Martin Alnes and Anders Logg # # Modified by Anders Logg 2015 # # Last changed: 2015-11-05 from . import includes as incl from .functionspace import * from .form import generate_form from .capsules import UFCElementNames __all__ = ["generate_dolfin_code"] # NB: generate_dolfin_namespace(...) assumes that if a coefficient has # the same name in multiple forms, it is indeed the same coefficient: parameters = {"use_common_coefficient_names": True} def generate_dolfin_code(prefix, header, forms, common_function_space=False, add_guards=False, error_control=False): """Generate complete dolfin wrapper code with given generated names. @param prefix: String, prefix for all form names. @param header: Code that will be inserted at the top of the file. @param forms: List of UFCFormNames instances or single UFCElementNames. @param common_function_space: True if common function space, otherwise False @param add_guards: True iff guards (ifdefs) should be added @param error_control: True iff adaptivity typedefs (ifdefs) should be added """ # Generate dolfin namespace namespace = generate_dolfin_namespace(prefix, forms, common_function_space, error_control) # Collect pieces of code code = [incl.dolfin_tag, header, incl.stl_includes, incl.dolfin_includes, namespace] # Add ifdefs/endifs if specified if add_guards: guard_name = ("%s_h" % prefix).upper() preguard = "#ifndef %s\n#define %s\n" % (guard_name, guard_name) postguard = "\n#endif\n\n" code = [preguard] + code + [postguard] # Return code return "\n".join(code) def generate_dolfin_namespace(prefix, forms, common_function_space=False, error_control=False): # Allow forms to represent a single space, and treat separately if isinstance(forms, UFCElementNames): return generate_single_function_space(prefix, forms) # Extract (common) coefficient spaces assert(parameters["use_common_coefficient_names"]) spaces = extract_coefficient_spaces(forms) # Generate code for common coefficient spaces code = [apply_function_space_template(*space) for space in spaces] # Generate code for forms code += [generate_form(form, "Form_%s" % form.name, error_control) for form in forms] # Generate namespace typedefs (Bilinear/Linear & Test/Trial/Function) code += [generate_namespace_typedefs(forms, common_function_space, error_control)] # Wrap code in namespace block code = "\nnamespace %s\n{\n\n%s\n}" % (prefix, "\n".join(code)) # Return code return code def generate_single_function_space(prefix, space): code = apply_function_space_template("FunctionSpace", space.ufc_finite_element_classnames[0], space.ufc_dofmap_classnames[0]) code = "\nnamespace %s\n{\n\n%s\n}" % (prefix, code) return code def 
generate_namespace_typedefs(forms, common_function_space, error_control): # Generate typedefs as (fro, to) pairs of strings pairs = [] # Add typedef for Functional/LinearForm/BilinearForm if only one # is present of each aliases = ["Functional", "LinearForm", "BilinearForm"] extra_aliases = {"LinearForm": "ResidualForm", "BilinearForm": "JacobianForm"} for rank in sorted(range(len(aliases)), reverse=True): forms_of_rank = [form for form in forms if form.rank == rank] if len(forms_of_rank) == 1: pairs += [("Form_%s" % forms_of_rank[0].name, aliases[rank])] if not error_control: # FIXME: Issue #91 pairs += [("MultiMeshForm_%s" % forms_of_rank[0].name, "MultiMesh" + aliases[rank])] if aliases[rank] in extra_aliases: extra_alias = extra_aliases[aliases[rank]] pairs += [("Form_%s" % forms_of_rank[0].name, extra_alias)] if not error_control: # FIXME: Issue #91 pairs += [("MultiMeshForm_%s" % forms_of_rank[0].name, "MultiMesh" + extra_alias)] # Keepin' it simple: Add typedef for FunctionSpace if term applies if common_function_space: for i, form in enumerate(forms): if form.rank: pairs += [("Form_%s::TestSpace" % form.name, "FunctionSpace")] if not error_control: # FIXME: Issue #91 pairs += [("Form_%s::MultiMeshTestSpace" % form.name, "MultiMeshFunctionSpace")] break # Add specialized typedefs when adding error control wrapppers if error_control: pairs += error_control_pairs(forms) # Combine data to typedef code typedefs = "\n".join("typedef %s %s;" % (to, fro) for (to, fro) in pairs) # Return typedefs or "" if not typedefs: return "" return "// Class typedefs\n" + typedefs + "\n" def error_control_pairs(forms): assert (len(forms) == 11), "Expecting 11 error control forms" return [("Form_%s" % forms[8].name, "BilinearForm"), ("Form_%s" % forms[9].name, "LinearForm"), ("Form_%s" % forms[10].name, "GoalFunctional")] ffc-2019.1.0.post0/ffc/backends/ufc/000077500000000000000000000000001346026536000166405ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/backends/ufc/__init__.py000066400000000000000000000144241346026536000207560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """Code generation format strings for UFC (Unified Form-assembly Code) Five format strings are defined for each of the following UFC classes: cell_integral exterior_facet_integral interior_facet_integral custom_integral cutcell_integral interface_integral overlap_integral function finite_element dofmap coordinate_mapping form The strings are named: '_header' '_implementation' '_combined' '_jit_header' '_jit_implementation' The header and implementation contain the definition and declaration of the class respectively, and are meant to be placed in .h and .cpp files, while the combined version is for an implementation within a single .h header. The _jit_ versions are used in the jit compiler and contains some additional factory functions exported as extern "C" to allow construction of compiled objects through ctypes without dealing with C++ ABI and name mangling issues. Each string has at least the following format variables: 'classname', 'members', 'constructor', 'destructor', plus one for each interface function with name equal to the function name. 
For more information about UFC and the FEniCS Project, visit http://www.fenicsproject.org https://bitbucket.org/fenics-project/ffc """ __author__ = "Martin Sandve Alnæs, Anders Logg, Kent-Andre Mardal, Ola Skavhaug, and Hans Petter Langtangen" __date__ = "2017-05-09" __version__ = "2018.1.0" __license__ = "This code is released into the public domain" import os from hashlib import sha1 from ffc.backends.ufc.function import * from ffc.backends.ufc.finite_element import * from ffc.backends.ufc.dofmap import * from ffc.backends.ufc.coordinate_mapping import * from ffc.backends.ufc.integrals import * from ffc.backends.ufc.form import * # Get abspath on import, it can in some cases be # a relative path w.r.t. curdir on startup _include_path = os.path.dirname(os.path.abspath(__file__)) def get_include_path(): "Return location of UFC header files" return _include_path # Platform specific snippets for controlling visilibity of exported symbols in generated shared libraries visibility_snippet = """ // Based on https://gcc.gnu.org/wiki/Visibility #if defined _WIN32 || defined __CYGWIN__ #ifdef __GNUC__ #define DLL_EXPORT __attribute__ ((dllexport)) #else #define DLL_EXPORT __declspec(dllexport) #endif #else #define DLL_EXPORT __attribute__ ((visibility ("default"))) #endif """ # Generic factory function signature factory_decl = """ extern "C" %(basename)s * create_%(publicname)s(); """ # Generic factory function implementation. Set basename to the base class, # and note that publicname and privatename does not need to match, allowing # multiple factory functions to return the same object. factory_impl = """ extern "C" DLL_EXPORT %(basename)s * create_%(publicname)s() { return new %(privatename)s(); } """ def all_ufc_classnames(): "Build list of all classnames." integral_names = ["cell", "exterior_facet", "interior_facet", "vertex", "custom", "cutcell", "interface", "overlap"] integral_classnames = [integral_name + "_integral" for integral_name in integral_names] jitable_classnames = ["finite_element", "dofmap", "coordinate_mapping", "form"] classnames = ["function"] + jitable_classnames + integral_classnames return classnames def _build_templates(): "Build collection of all templates to store in the templates dict." 
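    # For each UFC class name this collects the plain header, implementation
    # and combined templates defined in the modules imported above, and also
    # derives *_jit_header and *_jit_implementation variants by appending the
    # extern "C" factory function declaration/implementation snippets, so that
    # JIT-compiled objects can be constructed through ctypes.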
templates = {} classnames = all_ufc_classnames() for classname in classnames: # Expect all classes to have header, implementation, and combined versions header = globals()[classname + "_header"] implementation = globals()[classname + "_implementation"] combined = globals()[classname + "_combined"] # Construct jit header with class and factory function signature _fac_decl = factory_decl % { "basename": "ufc::" + classname, "publicname": "%(classname)s", "privatename": "%(classname)s", } jit_header = header + _fac_decl # Construct jit implementation template with class declaration, # factory function implementation, and class definition _fac_impl = factory_impl % { "basename": "ufc::" + classname, "publicname": "%(classname)s", "privatename": "%(classname)s", } jit_implementation = implementation + _fac_impl # Store all in templates dict templates[classname + "_header"] = header templates[classname + "_implementation"] = implementation templates[classname + "_combined"] = combined templates[classname + "_jit_header"] = jit_header templates[classname + "_jit_implementation"] = jit_implementation return templates def _compute_ufc_templates_signature(templates): # Compute signature of jit templates h = sha1() for k in sorted(templates): h.update(k.encode("utf-8")) h.update(templates[k].encode("utf-8")) return h.hexdigest() def _compute_ufc_signature(): # Compute signature of ufc header files h = sha1() for fn in ("ufc.h", "ufc_geometry.h"): with open(os.path.join(get_include_path(), fn)) as f: h.update(f.read().encode("utf-8")) return h.hexdigest() # Build these on import templates = _build_templates() _ufc_signature = _compute_ufc_signature() _ufc_templates_signature = _compute_ufc_templates_signature(templates) def get_ufc_signature(): """Return SHA-1 hash of the contents of ufc.h and ufc_geometry.h. In this implementation, the value is computed on import. """ return _ufc_signature def get_ufc_templates_signature(): """Return SHA-1 hash of the ufc code templates. In this implementation, the value is computed on import. """ return _ufc_templates_signature def get_ufc_cxx_flags(): """Return C++ flags for compiling UFC C++11 code. Return type is a list of strings. Used internally in some tests. """ return ["-std=c++11"] # ufc_signature() already introduced to FFC standard in 1.7.0dev, # called by the dolfin cmake build system to compare against # future imported ffc versions for compatibility. ufc_signature = get_ufc_signature ffc-2019.1.0.post0/ffc/backends/ufc/coordinate_mapping.py000066400000000000000000000157501346026536000230640ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. 
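# --------------------------------------------------------------------------
# Illustrative example: the helpers defined in ffc/backends/ufc/__init__.py
# above are what build systems and the JIT layer use when configuring
# compilation of generated UFC C++ code. A small sketch:
#
#   from ffc.backends.ufc import (get_include_path, get_ufc_signature,
#                                 get_ufc_cxx_flags)
#
#   include_dir = get_include_path()  # directory holding ufc.h / ufc_geometry.h
#   signature = get_ufc_signature()   # SHA-1 hash of the bundled UFC headers
#   cxx_flags = get_ufc_cxx_flags()   # e.g. ["-std=c++11"]
# --------------------------------------------------------------------------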
# # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017 coordinate_mapping_combined = """ class %(classname)s: public ufc::coordinate_mapping {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::coordinate_mapping()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const char * signature() const final override { %(signature)s } ufc::coordinate_mapping * create() const final override { %(create)s } std::size_t geometric_dimension() const final override { %(geometric_dimension)s } std::size_t topological_dimension() const final override { %(topological_dimension)s } ufc::shape cell_shape() const final override { %(cell_shape)s } ufc::finite_element * create_coordinate_finite_element() const final override { %(create_coordinate_finite_element)s } ufc::dofmap * create_coordinate_dofmap() const final override { %(create_coordinate_dofmap)s } void compute_physical_coordinates( double * x, std::size_t num_points, const double * X, const double * coordinate_dofs) const final override { %(compute_physical_coordinates)s } void compute_reference_coordinates( double * X, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const final override { %(compute_reference_coordinates)s } void compute_reference_geometry( double * X, double * J, double * detJ, double * K, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const final override { %(compute_reference_geometry)s } void compute_jacobians( double * J, std::size_t num_points, const double * X, const double * coordinate_dofs) const final override { %(compute_jacobians)s } void compute_jacobian_determinants( double * detJ, std::size_t num_points, const double * J, int cell_orientation) const final override { %(compute_jacobian_determinants)s } void compute_jacobian_inverses( double * K, std::size_t num_points, const double * J, const double * detJ) const final override { %(compute_jacobian_inverses)s } void compute_geometry( double * x, double * J, double * detJ, double * K, std::size_t num_points, const double * X, const double * coordinate_dofs, int cell_orientation) const final override { %(compute_geometry)s } void compute_midpoint_geometry( double * x, double * J, const double * coordinate_dofs) const final override { %(compute_midpoint_geometry)s } }; """ coordinate_mapping_header = """ class %(classname)s: public ufc::coordinate_mapping {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const char * signature() const final override; ufc::coordinate_mapping * create() const final override; std::size_t geometric_dimension() const final override; std::size_t topological_dimension() const final override; ufc::shape cell_shape() const final override; ufc::finite_element * create_coordinate_finite_element() const final override; ufc::dofmap * create_coordinate_dofmap() const final override; void compute_physical_coordinates( double * x, std::size_t num_points, const double * X, const double * coordinate_dofs) const final override; void compute_reference_coordinates( double * X, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const final override; void compute_reference_geometry( double * X, double * J, double * detJ, double * K, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const final override; void compute_jacobians( double * J, std::size_t num_points, const double * X, 
const double * coordinate_dofs) const final override; void compute_jacobian_determinants( double * detJ, std::size_t num_points, const double * J, int cell_orientation) const final override; void compute_jacobian_inverses( double * K, std::size_t num_points, const double * J, const double * detJ) const final override; void compute_geometry( double * x, double * J, double * detJ, double * K, std::size_t num_points, const double * X, const double * coordinate_dofs, int cell_orientation) const final override; void compute_midpoint_geometry( double * x, double * J, const double * coordinate_dofs) const final override; }; """ coordinate_mapping_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::coordinate_mapping()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const char * %(classname)s::signature() const { %(signature)s } ufc::coordinate_mapping * %(classname)s::create() const { %(create)s } std::size_t %(classname)s::geometric_dimension() const { %(geometric_dimension)s } std::size_t %(classname)s::topological_dimension() const { %(topological_dimension)s } ufc::shape %(classname)s::cell_shape() const { %(cell_shape)s } ufc::finite_element * %(classname)s::create_coordinate_finite_element() const { %(create_coordinate_finite_element)s } ufc::dofmap * %(classname)s::create_coordinate_dofmap() const { %(create_coordinate_dofmap)s } void %(classname)s::compute_physical_coordinates( double * x, std::size_t num_points, const double * X, const double * coordinate_dofs) const { %(compute_physical_coordinates)s } void %(classname)s::compute_reference_coordinates( double * X, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const { %(compute_reference_coordinates)s } void %(classname)s::compute_reference_geometry( double * X, double * J, double * detJ, double * K, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const { %(compute_reference_geometry)s } void %(classname)s::compute_jacobians( double * J, std::size_t num_points, const double * X, const double * coordinate_dofs) const { %(compute_jacobians)s } void %(classname)s::compute_jacobian_determinants( double * detJ, std::size_t num_points, const double * J, int cell_orientation) const { %(compute_jacobian_determinants)s } void %(classname)s::compute_jacobian_inverses( double * K, std::size_t num_points, const double * J, const double * detJ) const { %(compute_jacobian_inverses)s } void %(classname)s::compute_geometry( double * x, double * J, double * detJ, double * K, std::size_t num_points, const double * X, const double * coordinate_dofs, int cell_orientation) const { %(compute_geometry)s } void %(classname)s::compute_midpoint_geometry( double * x, double * J, const double * coordinate_dofs) const { %(compute_midpoint_geometry)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/dofmap.py000066400000000000000000000141231346026536000204610ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. # # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017. 
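# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the original FFC sources): the
# *_header templates defined below only need the class name, members, and
# constructor-argument fields, while the *_implementation and *_combined
# variants additionally take one code snippet per virtual method. The class
# name here is a hypothetical placeholder; this is a standalone usage example,
# not code belonging to this module.
from ffc.backends.ufc.dofmap import dofmap_header

header_code = dofmap_header % {
    "classname": "example_dofmap",
    "members": "",
    "constructor_arguments": "",
}
# header_code is the matching declaration for a class whose definitions would
# be rendered from dofmap_implementation with the same "classname" value.
# ---------------------------------------------------------------------------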
dofmap_combined = """ class %(classname)s: public ufc::dofmap {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::dofmap()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const char * signature() const final override { %(signature)s } bool needs_mesh_entities(std::size_t d) const final override { %(needs_mesh_entities)s } std::size_t topological_dimension() const final override { %(topological_dimension)s } std::size_t global_dimension(const std::vector& num_global_entities) const final override { %(global_dimension)s } std::size_t num_global_support_dofs() const final override { %(num_global_support_dofs)s } std::size_t num_element_support_dofs() const final override { %(num_element_support_dofs)s } std::size_t num_element_dofs() const final override { %(num_element_dofs)s } std::size_t num_facet_dofs() const final override { %(num_facet_dofs)s } std::size_t num_entity_dofs(std::size_t d) const final override { %(num_entity_dofs)s } std::size_t num_entity_closure_dofs(std::size_t d) const final override { %(num_entity_closure_dofs)s } void tabulate_dofs(std::size_t * dofs, const std::vector& num_global_entities, const std::vector>& entity_indices) const final override { %(tabulate_dofs)s } void tabulate_facet_dofs(std::size_t * dofs, std::size_t facet) const final override { %(tabulate_facet_dofs)s } void tabulate_entity_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const final override { %(tabulate_entity_dofs)s } void tabulate_entity_closure_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const final override { %(tabulate_entity_closure_dofs)s } std::size_t num_sub_dofmaps() const final override { %(num_sub_dofmaps)s } ufc::dofmap * create_sub_dofmap(std::size_t i) const final override { %(create_sub_dofmap)s } ufc::dofmap * create() const final override { %(create)s } }; """ dofmap_header = """ class %(classname)s: public ufc::dofmap {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const char * signature() const final override; bool needs_mesh_entities(std::size_t d) const final override; std::size_t topological_dimension() const final override; std::size_t global_dimension(const std::vector& num_global_entities) const final override; std::size_t num_global_support_dofs() const final override; std::size_t num_element_support_dofs() const final override; std::size_t num_element_dofs() const final override; std::size_t num_facet_dofs() const final override; std::size_t num_entity_dofs(std::size_t d) const final override; std::size_t num_entity_closure_dofs(std::size_t d) const final override; void tabulate_dofs(std::size_t * dofs, const std::vector& num_global_entities, const std::vector>& entity_indices) const final override; void tabulate_facet_dofs(std::size_t * dofs, std::size_t facet) const final override; void tabulate_entity_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const final override; void tabulate_entity_closure_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const final override; std::size_t num_sub_dofmaps() const final override; ufc::dofmap * create_sub_dofmap(std::size_t i) const final override; ufc::dofmap * create() const final override; }; """ dofmap_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::dofmap()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const char * %(classname)s::signature() const { %(signature)s } bool 
%(classname)s::needs_mesh_entities(std::size_t d) const { %(needs_mesh_entities)s } std::size_t %(classname)s::topological_dimension() const { %(topological_dimension)s } std::size_t %(classname)s::global_dimension(const std::vector& num_global_entities) const { %(global_dimension)s } std::size_t %(classname)s::num_global_support_dofs() const { %(num_global_support_dofs)s } std::size_t %(classname)s::num_element_support_dofs() const { %(num_element_support_dofs)s } std::size_t %(classname)s::num_element_dofs() const { %(num_element_dofs)s } std::size_t %(classname)s::num_facet_dofs() const { %(num_facet_dofs)s } std::size_t %(classname)s::num_entity_dofs(std::size_t d) const { %(num_entity_dofs)s } std::size_t %(classname)s::num_entity_closure_dofs(std::size_t d) const { %(num_entity_closure_dofs)s } void %(classname)s::tabulate_dofs(std::size_t * dofs, const std::vector& num_global_entities, const std::vector>& entity_indices) const { %(tabulate_dofs)s } void %(classname)s::tabulate_facet_dofs(std::size_t * dofs, std::size_t facet) const { %(tabulate_facet_dofs)s } void %(classname)s::tabulate_entity_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const { %(tabulate_entity_dofs)s } void %(classname)s::tabulate_entity_closure_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const { %(tabulate_entity_closure_dofs)s } std::size_t %(classname)s::num_sub_dofmaps() const { %(num_sub_dofmaps)s } ufc::dofmap * %(classname)s::create_sub_dofmap(std::size_t i) const { %(create_sub_dofmap)s } ufc::dofmap * %(classname)s::create() const { %(create)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/finite_element.py000066400000000000000000000430731346026536000222100ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. # # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017. 
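# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the original FFC sources): the
# "_jit_implementation" variants assembled in ffc/backends/ufc/__init__.py
# append an extern "C" factory, create_<classname>(), to each class template
# so that a compiled shared library can later be loaded and the object
# instantiated by name. This standalone check assumes the ffc package is
# importable; it only inspects the template text.
from ffc.backends.ufc import templates

jit_src = templates["finite_element_jit_implementation"]
assert 'extern "C"' in jit_src
assert "create_%(classname)s" in jit_src
# ---------------------------------------------------------------------------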
finite_element_combined = """ class %(classname)s: public ufc::finite_element {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::finite_element()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const char * signature() const final override { %(signature)s } ufc::shape cell_shape() const final override { %(cell_shape)s } std::size_t topological_dimension() const final override { %(topological_dimension)s } std::size_t geometric_dimension() const final override { %(geometric_dimension)s } std::size_t space_dimension() const final override { %(space_dimension)s } std::size_t value_rank() const final override { %(value_rank)s } std::size_t value_dimension(std::size_t i) const final override { %(value_dimension)s } std::size_t value_size() const final override { %(value_size)s } std::size_t reference_value_rank() const final override { %(reference_value_rank)s } std::size_t reference_value_dimension(std::size_t i) const final override { %(reference_value_dimension)s } std::size_t reference_value_size() const final override { %(reference_value_size)s } std::size_t degree() const final override { %(degree)s } const char * family() const final override { %(family)s } void evaluate_reference_basis(double * reference_values, std::size_t num_points, const double * X) const final override { %(evaluate_reference_basis)s } void evaluate_reference_basis_derivatives(double * reference_values, std::size_t order, std::size_t num_points, const double * X) const final override { %(evaluate_reference_basis_derivatives)s } void transform_reference_basis_derivatives(double * values, std::size_t order, std::size_t num_points, const double * reference_values, const double * X, const double * J, const double * detJ, const double * K, int cell_orientation) const final override { %(transform_reference_basis_derivatives)s } void evaluate_basis(std::size_t i, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_basis)s } void evaluate_basis_all(double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_basis_all)s } void evaluate_basis_derivatives(std::size_t i, std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_basis_derivatives)s } void evaluate_basis_derivatives_all(std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_basis_derivatives_all)s } double evaluate_dof(std::size_t i, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_dof)s } void evaluate_dofs(double * values, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(evaluate_dofs)s } void interpolate_vertex_values(double * vertex_values, const double * dof_values, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(interpolate_vertex_values)s } void tabulate_dof_coordinates(double * dof_coordinates, const double * 
coordinate_dofs, const ufc::coordinate_mapping * cm=nullptr ) const final override { %(tabulate_dof_coordinates)s } void tabulate_reference_dof_coordinates(double * reference_dof_coordinates) const final override { %(tabulate_reference_dof_coordinates)s } std::size_t num_sub_elements() const final override { %(num_sub_elements)s } ufc::finite_element * create_sub_element(std::size_t i) const final override { %(create_sub_element)s } ufc::finite_element * create() const final override { %(create)s } }; """ finite_element_header = """ class %(classname)s: public ufc::finite_element {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const char * signature() const final override; ufc::shape cell_shape() const final override; std::size_t topological_dimension() const final override; std::size_t geometric_dimension() const final override; std::size_t space_dimension() const final override; std::size_t value_rank() const final override; std::size_t value_dimension(std::size_t i) const final override; std::size_t value_size() const final override; std::size_t reference_value_rank() const final override; std::size_t reference_value_dimension(std::size_t i) const final override; std::size_t reference_value_size() const final override; std::size_t degree() const final override; const char * family() const final override; void evaluate_reference_basis(double * reference_values, std::size_t num_points, const double * X) const final override; void evaluate_reference_basis_derivatives(double * reference_values, std::size_t order, std::size_t num_points, const double * X) const final override; void transform_reference_basis_derivatives(double * values, std::size_t order, std::size_t num_points, const double * reference_values, const double * X, const double * J, const double * detJ, const double * K, int cell_orientation) const final override; void evaluate_basis(std::size_t i, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override; void evaluate_basis_all(double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override; void evaluate_basis_derivatives(std::size_t i, std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override; void evaluate_basis_derivatives_all(std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override; double evaluate_dof(std::size_t i, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm=nullptr ) const final override; void evaluate_dofs(double * values, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm=nullptr ) const final override; void interpolate_vertex_values(double * vertex_values, const double * dof_values, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const final override; void tabulate_dof_coordinates(double * dof_coordinates, const double * coordinate_dofs, const ufc::coordinate_mapping * cm=nullptr ) const final override; void tabulate_reference_dof_coordinates(double * reference_dof_coordinates) const final override; 
std::size_t num_sub_elements() const final override; ufc::finite_element * create_sub_element(std::size_t i) const final override; ufc::finite_element * create() const final override; }; """ finite_element_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::finite_element()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const char * %(classname)s::signature() const { %(signature)s } ufc::shape %(classname)s::cell_shape() const { %(cell_shape)s } std::size_t %(classname)s::topological_dimension() const { %(topological_dimension)s } std::size_t %(classname)s::geometric_dimension() const { %(geometric_dimension)s } std::size_t %(classname)s::space_dimension() const { %(space_dimension)s } std::size_t %(classname)s::value_rank() const { %(value_rank)s } std::size_t %(classname)s::value_dimension(std::size_t i) const { %(value_dimension)s } std::size_t %(classname)s::value_size() const { %(value_size)s } std::size_t %(classname)s::reference_value_rank() const { %(reference_value_rank)s } std::size_t %(classname)s::reference_value_dimension(std::size_t i) const { %(reference_value_dimension)s } std::size_t %(classname)s::reference_value_size() const { %(reference_value_size)s } std::size_t %(classname)s::degree() const { %(degree)s } const char * %(classname)s::family() const { %(family)s } void %(classname)s::evaluate_reference_basis(double * reference_values, std::size_t num_points, const double * X) const { %(evaluate_reference_basis)s } void %(classname)s::evaluate_reference_basis_derivatives(double * reference_values, std::size_t order, std::size_t num_points, const double * X) const { %(evaluate_reference_basis_derivatives)s } void %(classname)s::transform_reference_basis_derivatives(double * values, std::size_t order, std::size_t num_points, const double * reference_values, const double * X, const double * J, const double * detJ, const double * K, int cell_orientation) const { %(transform_reference_basis_derivatives)s } void %(classname)s::evaluate_basis(std::size_t i, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm ) const { %(evaluate_basis)s } void %(classname)s::evaluate_basis_all(double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm ) const { %(evaluate_basis_all)s } void %(classname)s::evaluate_basis_derivatives(std::size_t i, std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm ) const { %(evaluate_basis_derivatives)s } void %(classname)s::evaluate_basis_derivatives_all(std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm ) const { %(evaluate_basis_derivatives_all)s } double %(classname)s::evaluate_dof(std::size_t i, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm ) const { %(evaluate_dof)s } void %(classname)s::evaluate_dofs(double * values, const ufc::function& f, const double * coordinate_dofs, int cell_orientation, const ufc::cell& c, const ufc::coordinate_mapping * cm ) const { %(evaluate_dofs)s } void %(classname)s::interpolate_vertex_values(double * vertex_values, const double * dof_values, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm ) const { 
%(interpolate_vertex_values)s } void %(classname)s::tabulate_dof_coordinates(double * dof_coordinates, const double * coordinate_dofs, const ufc::coordinate_mapping * cm ) const { %(tabulate_dof_coordinates)s } void %(classname)s::tabulate_reference_dof_coordinates(double * reference_dof_coordinates) const { %(tabulate_reference_dof_coordinates)s } std::size_t %(classname)s::num_sub_elements() const { %(num_sub_elements)s } ufc::finite_element * %(classname)s::create_sub_element(std::size_t i) const { %(create_sub_element)s } ufc::finite_element * %(classname)s::create() const { %(create)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/form.py000066400000000000000000000306401346026536000201600ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. # # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017. form_combined = """ class %(classname)s: public ufc::form {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::form()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const char * signature() const final override { %(signature)s } std::size_t rank() const final override { %(rank)s } std::size_t num_coefficients() const final override { %(num_coefficients)s } std::size_t original_coefficient_position(std::size_t i) const final override { %(original_coefficient_position)s } ufc::finite_element * create_coordinate_finite_element() const final override { %(create_coordinate_finite_element)s } ufc::dofmap * create_coordinate_dofmap() const final override { %(create_coordinate_dofmap)s } ufc::coordinate_mapping * create_coordinate_mapping() const final override { %(create_coordinate_mapping)s } ufc::finite_element * create_finite_element(std::size_t i) const final override { %(create_finite_element)s } ufc::dofmap * create_dofmap(std::size_t i) const final override { %(create_dofmap)s } std::size_t max_cell_subdomain_id() const final override { %(max_cell_subdomain_id)s } std::size_t max_exterior_facet_subdomain_id() const final override { %(max_exterior_facet_subdomain_id)s } std::size_t max_interior_facet_subdomain_id() const final override { %(max_interior_facet_subdomain_id)s } std::size_t max_vertex_subdomain_id() const final override { %(max_vertex_subdomain_id)s } std::size_t max_custom_subdomain_id() const final override { %(max_custom_subdomain_id)s } std::size_t max_cutcell_subdomain_id() const final override { %(max_cutcell_subdomain_id)s } std::size_t max_interface_subdomain_id() const final override { %(max_interface_subdomain_id)s } std::size_t max_overlap_subdomain_id() const final override { %(max_overlap_subdomain_id)s } bool has_cell_integrals() const final override { %(has_cell_integrals)s } bool has_exterior_facet_integrals() const final override { %(has_exterior_facet_integrals)s } bool has_interior_facet_integrals() const final override { %(has_interior_facet_integrals)s } bool has_vertex_integrals() const final override { %(has_vertex_integrals)s } bool has_custom_integrals() const final override { %(has_custom_integrals)s } bool has_cutcell_integrals() const final override { %(has_cutcell_integrals)s } bool has_interface_integrals() const final override { %(has_interface_integrals)s } bool has_overlap_integrals() const final override { %(has_overlap_integrals)s } ufc::cell_integral * create_cell_integral(std::size_t subdomain_id) const final override { %(create_cell_integral)s } 
ufc::exterior_facet_integral * create_exterior_facet_integral(std::size_t subdomain_id) const final override { %(create_exterior_facet_integral)s } ufc::interior_facet_integral * create_interior_facet_integral(std::size_t subdomain_id) const final override { %(create_interior_facet_integral)s } ufc::vertex_integral * create_vertex_integral(std::size_t subdomain_id) const final override { %(create_vertex_integral)s } ufc::custom_integral * create_custom_integral(std::size_t subdomain_id) const final override { %(create_custom_integral)s } ufc::cutcell_integral * create_cutcell_integral(std::size_t subdomain_id) const final override { %(create_cutcell_integral)s } ufc::interface_integral * create_interface_integral(std::size_t subdomain_id) const final override { %(create_interface_integral)s } ufc::overlap_integral * create_overlap_integral(std::size_t subdomain_id) const final override { %(create_overlap_integral)s } ufc::cell_integral * create_default_cell_integral() const final override { %(create_default_cell_integral)s } ufc::exterior_facet_integral * create_default_exterior_facet_integral() const final override { %(create_default_exterior_facet_integral)s } ufc::interior_facet_integral * create_default_interior_facet_integral() const final override { %(create_default_interior_facet_integral)s } ufc::vertex_integral * create_default_vertex_integral() const final override { %(create_default_vertex_integral)s } ufc::custom_integral * create_default_custom_integral() const final override { %(create_default_custom_integral)s } ufc::cutcell_integral * create_default_cutcell_integral() const final override { %(create_default_cutcell_integral)s } ufc::interface_integral * create_default_interface_integral() const final override { %(create_default_interface_integral)s } ufc::overlap_integral * create_default_overlap_integral() const final override { %(create_default_overlap_integral)s } }; """ form_header = """ class %(classname)s: public ufc::form {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const char * signature() const final override; std::size_t rank() const final override; std::size_t num_coefficients() const final override; std::size_t original_coefficient_position(std::size_t i) const final override; ufc::finite_element * create_coordinate_finite_element() const final override; ufc::dofmap * create_coordinate_dofmap() const final override; ufc::coordinate_mapping * create_coordinate_mapping() const final override; ufc::finite_element * create_finite_element(std::size_t i) const final override; ufc::dofmap * create_dofmap(std::size_t i) const final override; std::size_t max_cell_subdomain_id() const final override; std::size_t max_exterior_facet_subdomain_id() const final override; std::size_t max_interior_facet_subdomain_id() const final override; std::size_t max_vertex_subdomain_id() const final override; std::size_t max_custom_subdomain_id() const final override; std::size_t max_cutcell_subdomain_id() const final override; std::size_t max_interface_subdomain_id() const final override; std::size_t max_overlap_subdomain_id() const final override; bool has_cell_integrals() const final override; bool has_exterior_facet_integrals() const final override; bool has_interior_facet_integrals() const final override; bool has_vertex_integrals() const final override; bool has_custom_integrals() const final override; bool has_cutcell_integrals() const final override; bool has_interface_integrals() const final override; bool has_overlap_integrals() const 
final override; ufc::cell_integral * create_cell_integral(std::size_t i) const final override; ufc::exterior_facet_integral * create_exterior_facet_integral(std::size_t i) const final override; ufc::interior_facet_integral * create_interior_facet_integral(std::size_t i) const final override; ufc::vertex_integral * create_vertex_integral(std::size_t i) const final override; ufc::custom_integral * create_custom_integral(std::size_t i) const final override; ufc::cutcell_integral * create_cutcell_integral(std::size_t i) const final override; ufc::interface_integral * create_interface_integral(std::size_t i) const final override; ufc::overlap_integral * create_overlap_integral(std::size_t i) const final override; ufc::cell_integral * create_default_cell_integral() const final override; ufc::exterior_facet_integral * create_default_exterior_facet_integral() const final override; ufc::interior_facet_integral * create_default_interior_facet_integral() const final override; ufc::vertex_integral * create_default_vertex_integral() const final override; ufc::custom_integral * create_default_custom_integral() const final override; ufc::cutcell_integral * create_default_cutcell_integral() const final override; ufc::interface_integral * create_default_interface_integral() const final override; ufc::overlap_integral * create_default_overlap_integral() const final override; }; """ form_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::form()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const char * %(classname)s::signature() const { %(signature)s } std::size_t %(classname)s::rank() const { %(rank)s } std::size_t %(classname)s::num_coefficients() const { %(num_coefficients)s } std::size_t %(classname)s::original_coefficient_position(std::size_t i) const { %(original_coefficient_position)s } ufc::finite_element * %(classname)s::create_coordinate_finite_element() const { %(create_coordinate_finite_element)s } ufc::dofmap * %(classname)s::create_coordinate_dofmap() const { %(create_coordinate_dofmap)s } ufc::coordinate_mapping * %(classname)s::create_coordinate_mapping() const { %(create_coordinate_mapping)s } ufc::finite_element * %(classname)s::create_finite_element(std::size_t i) const { %(create_finite_element)s } ufc::dofmap * %(classname)s::create_dofmap(std::size_t i) const { %(create_dofmap)s } std::size_t %(classname)s::max_cell_subdomain_id() const { %(max_cell_subdomain_id)s } std::size_t %(classname)s::max_exterior_facet_subdomain_id() const { %(max_exterior_facet_subdomain_id)s } std::size_t %(classname)s::max_interior_facet_subdomain_id() const { %(max_interior_facet_subdomain_id)s } std::size_t %(classname)s::max_vertex_subdomain_id() const { %(max_vertex_subdomain_id)s } std::size_t %(classname)s::max_custom_subdomain_id() const { %(max_custom_subdomain_id)s } std::size_t %(classname)s::max_cutcell_subdomain_id() const { %(max_cutcell_subdomain_id)s } std::size_t %(classname)s::max_interface_subdomain_id() const { %(max_interface_subdomain_id)s } std::size_t %(classname)s::max_overlap_subdomain_id() const { %(max_overlap_subdomain_id)s } bool %(classname)s::has_cell_integrals() const { %(has_cell_integrals)s } bool %(classname)s::has_exterior_facet_integrals() const { %(has_exterior_facet_integrals)s } bool %(classname)s::has_interior_facet_integrals() const { %(has_interior_facet_integrals)s } bool %(classname)s::has_vertex_integrals() const { %(has_vertex_integrals)s } bool %(classname)s::has_custom_integrals() 
const { %(has_custom_integrals)s } bool %(classname)s::has_cutcell_integrals() const { %(has_cutcell_integrals)s } bool %(classname)s::has_interface_integrals() const { %(has_interface_integrals)s } bool %(classname)s::has_overlap_integrals() const { %(has_overlap_integrals)s } ufc::cell_integral * %(classname)s::create_cell_integral(std::size_t subdomain_id) const { %(create_cell_integral)s } ufc::exterior_facet_integral * %(classname)s::create_exterior_facet_integral(std::size_t subdomain_id) const { %(create_exterior_facet_integral)s } ufc::interior_facet_integral * %(classname)s::create_interior_facet_integral(std::size_t subdomain_id) const { %(create_interior_facet_integral)s } ufc::vertex_integral * %(classname)s::create_vertex_integral(std::size_t subdomain_id) const { %(create_vertex_integral)s } ufc::custom_integral * %(classname)s::create_custom_integral(std::size_t subdomain_id) const { %(create_custom_integral)s } ufc::cutcell_integral * %(classname)s::create_cutcell_integral(std::size_t subdomain_id) const { %(create_cutcell_integral)s } ufc::interface_integral * %(classname)s::create_interface_integral(std::size_t subdomain_id) const { %(create_interface_integral)s } ufc::overlap_integral * %(classname)s::create_overlap_integral(std::size_t subdomain_id) const { %(create_overlap_integral)s } ufc::cell_integral * %(classname)s::create_default_cell_integral() const { %(create_default_cell_integral)s } ufc::exterior_facet_integral * %(classname)s::create_default_exterior_facet_integral() const { %(create_default_exterior_facet_integral)s } ufc::interior_facet_integral * %(classname)s::create_default_interior_facet_integral() const { %(create_default_interior_facet_integral)s } ufc::vertex_integral * %(classname)s::create_default_vertex_integral() const { %(create_default_vertex_integral)s } ufc::custom_integral * %(classname)s::create_default_custom_integral() const { %(create_default_custom_integral)s } ufc::cutcell_integral * %(classname)s::create_default_cutcell_integral() const { %(create_default_cutcell_integral)s } ufc::interface_integral * %(classname)s::create_default_interface_integral() const { %(create_default_interface_integral)s } ufc::overlap_integral * %(classname)s::create_default_overlap_integral() const { %(create_default_overlap_integral)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/function.py000066400000000000000000000024421346026536000210410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. 
# # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017 function_combined = """ class %(classname)s: public ufc::function {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::function()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } void evaluate(double * values, const double * coordinates, const ufc::cell& c) const final override { %(evaluate)s } }; """ function_header = """ class %(classname)s: public ufc::function {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; void evaluate(double * values, const double * coordinates, const ufc::cell& c) const final override; }; """ function_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::function()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } void %(classname)s::evaluate(double * values, const double * coordinates, const ufc::cell& c) const { %(evaluate)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/integrals.py000066400000000000000000000425351346026536000212110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Code generation format strings for UFC (Unified Form-assembly Code) # This code is released into the public domain. # # The FEniCS Project (http://www.fenicsproject.org/) 2006-2017 cell_integral_combined = """ class %(classname)s: public ufc::cell_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::cell_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ cell_integral_header = """ class %(classname)s: public ufc::cell_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, int cell_orientation) const final override; }; """ cell_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::cell_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ exterior_facet_integral_combined = """ class %(classname)s: public ufc::exterior_facet_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::exterior_facet_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t facet, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ exterior_facet_integral_header = """ class %(classname)s: public ufc::exterior_facet_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const
final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t facet, int cell_orientation) const final override; }; """ exterior_facet_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::exterior_facet_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t facet, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ interior_facet_integral_combined = """ class %(classname)s: public ufc::interior_facet_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::interior_facet_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs_0, const double * coordinate_dofs_1, std::size_t facet_0, std::size_t facet_1, int cell_orientation_0, int cell_orientation_1) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ interior_facet_integral_header = """ class %(classname)s: public ufc::interior_facet_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs_0, const double * coordinate_dofs_1, std::size_t facet_0, std::size_t facet_1, int cell_orientation_0, int cell_orientation_1) const final override; }; """ interior_facet_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::interior_facet_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs_0, const double * coordinate_dofs_1, std::size_t facet_0, std::size_t facet_1, int cell_orientation_0, int cell_orientation_1) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ vertex_integral_combined = """ class %(classname)s: public ufc::vertex_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::vertex_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t vertex, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ vertex_integral_header = """ class %(classname)s: public ufc::vertex_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t vertex, int cell_orientation) const final override; }; """ vertex_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : 
ufc::vertex_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t vertex, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ custom_integral_combined = """\ class %(classname)s: public ufc::custom_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::custom_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } std::size_t num_cells() const final override { %(num_cells)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ custom_integral_header = """ class %(classname)s: public ufc::custom_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; std::size_t num_cells() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const final override; }; """ custom_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::custom_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } std::size_t %(classname)s::num_cells() const { %(num_cells)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ cutcell_integral_combined = """ class %(classname)s: public ufc::cutcell_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::cutcell_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ cutcell_integral_header = """ class %(classname)s: public ufc::cutcell_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const final override; }; """ cutcell_integral_implementation 
= """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::cutcell_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ interface_integral_combined = """ class %(classname)s: public ufc::interface_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::interface_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ interface_integral_header = """ class %(classname)s: public ufc::interface_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const final override; }; """ interface_integral_implementation = """ %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::interface_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ overlap_integral_combined = """ class %(classname)s: public ufc::overlap_integral {%(members)s public: %(classname)s(%(constructor_arguments)s) : ufc::overlap_integral()%(initializer_list)s { %(constructor)s } ~%(classname)s() override { %(destructor)s } const std::vector & enabled_coefficients() const final override { %(enabled_coefficients)s } void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const final override { %(tabulate_tensor_comment)s %(tabulate_tensor)s } }; """ overlap_integral_header = """ class %(classname)s: public ufc::overlap_integral {%(members)s public: %(classname)s(%(constructor_arguments)s); ~%(classname)s() override; const std::vector & enabled_coefficients() const final override; void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const final override; }; """ overlap_integral_implementation = 
""" %(classname)s::%(classname)s(%(constructor_arguments)s) : ufc::overlap_integral()%(initializer_list)s { %(constructor)s } %(classname)s::~%(classname)s() { %(destructor)s } const std::vector & %(classname)s::enabled_coefficients() const { %(enabled_coefficients)s } void %(classname)s::tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const { %(tabulate_tensor_comment)s %(tabulate_tensor)s } """ ffc-2019.1.0.post0/ffc/backends/ufc/ufc.h000066400000000000000000001103431346026536000175700ustar00rootroot00000000000000/// This is UFC (Unified Form-assembly Code) /// This code is released into the public domain. /// /// The FEniCS Project (http://www.fenicsproject.org/) 2006-2017. /// /// UFC defines the interface between code generated by FFC /// and the DOLFIN C++ library. Changes here must be reflected /// both in the FFC code generation and in the DOLFIN library calls. #ifndef __UFC_H #define __UFC_H #define UFC_VERSION_MAJOR 2019 #define UFC_VERSION_MINOR 1 #define UFC_VERSION_MAINTENANCE 0 #define UFC_VERSION_RELEASE 1 #include #include #include #include #define CONCAT(a,b,c) #a "." #b "." #c #define EVALUATOR(a,b,c) CONCAT(a,b,c) #if UFC_VERSION_RELEASE const char UFC_VERSION[] = EVALUATOR(UFC_VERSION_MAJOR, UFC_VERSION_MINOR, UFC_VERSION_MAINTENANCE); #else const char UFC_VERSION[] = EVALUATOR(UFC_VERSION_MAJOR, UFC_VERSION_MINOR, UFC_VERSION_MAINTENANCE) ".dev0"; #endif #undef CONCAT #undef EVALUATOR namespace ufc { /// Valid cell shapes enum class shape {interval, triangle, quadrilateral, tetrahedron, hexahedron, vertex}; /// This class defines the data structure for a cell in a mesh. class cell { public: /// Shape of the cell shape cell_shape; /// Topological dimension of the mesh std::size_t topological_dimension; /// Geometric dimension of the mesh std::size_t geometric_dimension; /// Array of global indices for the mesh entities of the cell std::vector > entity_indices; /// Cell index (short-cut for entity_indices[topological_dimension][0]) std::size_t index; /// Local facet index int local_facet; /// Cell orientation int orientation; /// Unique mesh identifier int mesh_identifier; }; /// This class defines the interface for a general tensor-valued function. class function { public: /// Destructor virtual ~function() {} /// Evaluate function at given point in cell virtual void evaluate(double * values, const double * coordinates, const cell& c) const = 0; }; /// Forward declaration class coordinate_mapping; /// This class defines the interface for a finite element. 
class finite_element { public: /// Destructor virtual ~finite_element() {} /// Return a string identifying the finite element virtual const char * signature() const = 0; /// Return the cell shape virtual shape cell_shape() const = 0; /// Return the topological dimension of the cell shape virtual std::size_t topological_dimension() const = 0; /// Return the geometric dimension of the cell shape virtual std::size_t geometric_dimension() const = 0; /// Return the dimension of the finite element function space virtual std::size_t space_dimension() const = 0; /// Return the rank of the value space virtual std::size_t value_rank() const = 0; /// Return the dimension of the value space for axis i virtual std::size_t value_dimension(std::size_t i) const = 0; /// Return the number of components of the value space virtual std::size_t value_size() const = 0; /// Return the rank of the reference value space virtual std::size_t reference_value_rank() const = 0; /// Return the dimension of the reference value space for axis i virtual std::size_t reference_value_dimension(std::size_t i) const = 0; /// Return the number of components of the reference value space virtual std::size_t reference_value_size() const = 0; /// Return the maximum polynomial degree of the finite element function space virtual std::size_t degree() const = 0; /// Return the family of the finite element function space virtual const char * family() const = 0; /// Evaluate all basis functions at given point X in reference cell /// /// @param[out] reference_values /// Basis function values on reference element. /// Dimensions: reference_values[num_points][num_dofs][reference_value_size] /// @param[in] num_points /// Number of points. /// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// virtual void evaluate_reference_basis(double * reference_values, std::size_t num_points, const double * X) const = 0; /// Evaluate specific order derivatives of all basis functions at given point X in reference cell /// /// @param[out] reference_values /// Basis function derivative values on reference element. /// Dimensions: reference_values[num_points][num_dofs][num_derivatives][reference_value_size] /// where num_derivatives = pow(order, tdim). /// TODO: Document ordering of derivatives for order > 1. /// @param[in] order /// Derivative order to compute. /// @param[in] num_points /// Number of points. /// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// virtual void evaluate_reference_basis_derivatives(double * reference_values, std::size_t order, std::size_t num_points, const double * X) const = 0; /// Transform order n derivatives (can be 0) of all basis functions /// previously evaluated in points X in reference cell with given /// Jacobian J and its inverse K for each point /// /// @param[out] values /// Transformed basis function (derivative) values. /// Dimensions: values[num_points][num_dofs][num_derivatives][value_size] /// where num_derivatives = pow(order, tdim). /// TODO: Document ordering of derivatives for order > 1. /// @param[in] order /// Derivative order to compute. Can be zero to just apply Piola mappings. /// @param[in] num_points /// Number of points. /// @param[in] reference_values /// Basis function derivative values on reference element. /// Dimensions: reference_values[num_points][num_dofs][num_derivatives][reference_value_size] /// where num_derivatives = pow(order, tdim). /// TODO: Document ordering of derivatives for order > 1. 
/// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[in] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[in] detJ /// (Pseudo-)Determinant of Jacobian. /// Dimensions: detJ[num_points] /// @param[in] K /// (Pseudo-)Inverse of Jacobian of coordinate field. /// Dimensions: K[num_points][tdim][gdim] /// @param[in] cell_orientation /// Orientation of the cell, 1 means flipped w.r.t. reference cell. /// Only relevant on manifolds (tdim < gdim). /// virtual void transform_reference_basis_derivatives(double * values, std::size_t order, std::size_t num_points, const double * reference_values, const double * X, const double * J, const double * detJ, const double * K, int cell_orientation) const = 0; /// Evaluate basis function i at given point x in cell virtual void evaluate_basis(std::size_t i, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Evaluate all basis functions at given point x in cell virtual void evaluate_basis_all(double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Evaluate order n derivatives of basis function i at given point x in cell virtual void evaluate_basis_derivatives(std::size_t i, std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Evaluate order n derivatives of all basis functions at given point x in cell virtual void evaluate_basis_derivatives_all(std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Evaluate linear functional for dof i on the function f /// The cell argument is only included here so we can pass it to the function. virtual double evaluate_dof(std::size_t i, const function& f, const double * coordinate_dofs, int cell_orientation, const cell& c, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Evaluate linear functionals for all dofs on the function f /// The cell argument is only included here so we can pass it to the function. virtual void evaluate_dofs(double * values, const function& f, const double * coordinate_dofs, int cell_orientation, const cell& c, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Interpolate vertex values from dof values virtual void interpolate_vertex_values(double * vertex_values, const double * dof_values, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Tabulate the coordinates of all dofs on a cell virtual void tabulate_dof_coordinates(double * dof_coordinates, const double * coordinate_dofs, const ufc::coordinate_mapping * cm=nullptr ) const = 0; /// Tabulate the coordinates of all dofs on a reference cell virtual void tabulate_reference_dof_coordinates(double * reference_dof_coordinates) const = 0; /// Return the number of sub elements (for a mixed element) virtual std::size_t num_sub_elements() const = 0; /// Create a new finite element for sub element i (for a mixed element) virtual finite_element * create_sub_element(std::size_t i) const = 0; /// Create a new class instance virtual finite_element * create() const = 0; }; /// This class defines the interface for a local-to-global mapping of /// degrees of freedom (dofs). 
class dofmap { public: /// Destructor virtual ~dofmap() {} /// Return a string identifying the dofmap virtual const char * signature() const = 0; /// Return true iff mesh entities of topological dimension d are /// needed virtual bool needs_mesh_entities(std::size_t d) const = 0; /// Return the topological dimension of the associated cell shape virtual std::size_t topological_dimension() const = 0; /// Return the dimension of the global finite element function space virtual std::size_t global_dimension(const std::vector<std::size_t>& num_global_mesh_entities) const = 0; /// Return the dimension of the local finite element function space /// Return the number of dofs with global support (i.e. global constants) virtual std::size_t num_global_support_dofs() const = 0; /// Return the dimension of the local finite element function space /// for a cell (not including global support dofs) virtual std::size_t num_element_support_dofs() const = 0; /// Return the dimension of the local finite element function space /// for a cell (old version including global support dofs) virtual std::size_t num_element_dofs() const = 0; /// Return the number of dofs on each cell facet virtual std::size_t num_facet_dofs() const = 0; /// Return the number of dofs associated with each cell /// entity of dimension d virtual std::size_t num_entity_dofs(std::size_t d) const = 0; /// Return the number of dofs associated with the closure /// of each cell entity dimension d virtual std::size_t num_entity_closure_dofs(std::size_t d) const = 0; /// Tabulate the local-to-global mapping of dofs on a cell virtual void tabulate_dofs(std::size_t * dofs, const std::vector<std::size_t>& num_global_entities, const std::vector<std::vector<std::size_t>>& entity_indices) const = 0; /// Tabulate the local-to-local mapping from facet dofs to cell dofs virtual void tabulate_facet_dofs(std::size_t * dofs, std::size_t facet) const = 0; /// Tabulate the local-to-local mapping of dofs on entity (d, i) virtual void tabulate_entity_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const = 0; /// Tabulate the local-to-local mapping of dofs on the closure of entity (d, i) virtual void tabulate_entity_closure_dofs(std::size_t * dofs, std::size_t d, std::size_t i) const = 0; /// Return the number of sub dofmaps (for a mixed element) virtual std::size_t num_sub_dofmaps() const = 0; /// Create a new dofmap for sub dofmap i (for a mixed element) virtual dofmap * create_sub_dofmap(std::size_t i) const = 0; /// Create a new class instance virtual dofmap * create() const = 0; }; /// A representation of a coordinate mapping parameterized by a local finite element basis on each cell class coordinate_mapping { public: virtual ~coordinate_mapping() {} /// Return coordinate_mapping signature string virtual const char * signature() const = 0; /// Create object of the same type virtual coordinate_mapping * create() const = 0; /// Return geometric dimension of the coordinate_mapping virtual std::size_t geometric_dimension() const = 0; /// Return topological dimension of the coordinate_mapping virtual std::size_t topological_dimension() const = 0; /// Return cell shape of the coordinate_mapping virtual shape cell_shape() const = 0; /// Create finite_element object representing the coordinate parameterization virtual finite_element * create_coordinate_finite_element() const = 0; /// Create dofmap object representing the coordinate parameterization virtual dofmap * create_coordinate_dofmap() const = 0; /// Compute physical coordinates x from reference coordinates X, /// the inverse of
compute_reference_coordinates /// /// @param[out] x /// Physical coordinates. /// Dimensions: x[num_points][gdim] /// @param[in] num_points /// Number of points. /// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// virtual void compute_physical_coordinates( double * x, std::size_t num_points, const double * X, const double * coordinate_dofs) const = 0; /// Compute reference coordinates X from physical coordinates x, /// the inverse of compute_physical_coordinates /// /// @param[out] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[in] num_points /// Number of points. /// @param[in] x /// Physical coordinates. /// Dimensions: x[num_points][gdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// @param[in] cell_orientation /// Orientation of the cell, 1 means flipped w.r.t. reference cell. /// Only relevant on manifolds (tdim < gdim). /// virtual void compute_reference_coordinates( double * X, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const = 0; /// Compute X, J, detJ, K from physical coordinates x on a cell /// /// @param[out] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[out] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[out] detJ /// (Pseudo-)Determinant of Jacobian. /// Dimensions: detJ[num_points] /// @param[out] K /// (Pseudo-)Inverse of Jacobian of coordinate field. /// Dimensions: K[num_points][tdim][gdim] /// @param[in] num_points /// Number of points. /// @param[in] x /// Physical coordinates. /// Dimensions: x[num_points][gdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// @param[in] cell_orientation /// Orientation of the cell, 1 means flipped w.r.t. reference cell. /// Only relevant on manifolds (tdim < gdim). /// virtual void compute_reference_geometry( double * X, double * J, double * detJ, double * K, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation) const = 0; /// Compute Jacobian of coordinate mapping J = dx/dX at reference coordinates X /// /// @param[out] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[in] num_points /// Number of points. /// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// virtual void compute_jacobians( double * J, std::size_t num_points, const double * X, const double * coordinate_dofs) const = 0; /// Compute determinants of (pseudo-)Jacobians J /// /// @param[out] detJ /// (Pseudo-)Determinant of Jacobian. /// Dimensions: detJ[num_points] /// @param[in] num_points /// Number of points. /// @param[in] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[in] cell_orientation /// Orientation of the cell, 1 means flipped w.r.t. reference cell. /// Only relevant on manifolds (tdim < gdim). 
/// virtual void compute_jacobian_determinants( double * detJ, std::size_t num_points, const double * J, int cell_orientation) const = 0; /// Compute (pseudo-)inverses K of (pseudo-)Jacobians J /// /// @param[out] K /// (Pseudo-)Inverse of Jacobian of coordinate field. /// Dimensions: K[num_points][tdim][gdim] /// @param[in] num_points /// Number of points. /// @param[in] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[in] detJ /// (Pseudo-)Determinant of Jacobian. /// Dimensions: detJ[num_points] /// virtual void compute_jacobian_inverses( double * K, std::size_t num_points, const double * J, const double * detJ) const = 0; /// Combined (for convenience) computation of x, J, detJ, K from X and coordinate_dofs on a cell /// /// @param[out] x /// Physical coordinates. /// Dimensions: x[num_points][gdim] /// @param[out] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[num_points][gdim][tdim] /// @param[out] detJ /// (Pseudo-)Determinant of Jacobian. /// Dimensions: detJ[num_points] /// @param[out] K /// (Pseudo-)Inverse of Jacobian of coordinate field. /// Dimensions: K[num_points][tdim][gdim] /// @param[in] num_points /// Number of points. /// @param[in] X /// Reference cell coordinates. /// Dimensions: X[num_points][tdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// @param[in] cell_orientation /// Orientation of the cell, 1 means flipped w.r.t. reference cell. /// Only relevant on manifolds (tdim < gdim). /// virtual void compute_geometry( double * x, double * J, double * detJ, double * K, std::size_t num_points, const double * X, const double * coordinate_dofs, int cell_orientation) const = 0; /// Compute x and J at midpoint of cell /// /// @param[out] x /// Physical coordinates. /// Dimensions: x[gdim] /// @param[out] J /// Jacobian of coordinate field, J = dx/dX. /// Dimensions: J[gdim][tdim] /// @param[in] coordinate_dofs /// Dofs of the coordinate field on the cell. /// Dimensions: coordinate_dofs[num_dofs][gdim]. /// virtual void compute_midpoint_geometry( double * x, double * J, const double * coordinate_dofs) const = 0; }; /// This class defines the shared interface for classes implementing /// the tabulation of a tensor corresponding to the local contribution /// to a form from an integral. class integral { public: /// Destructor virtual ~integral() {} /// Tabulate which form coefficients are used by this integral virtual const std::vector<bool> & enabled_coefficients() const = 0; }; /// This class defines the interface for the tabulation of the cell /// tensor corresponding to the local contribution to a form from /// the integral over a cell. class cell_integral: public integral { public: /// Destructor virtual ~cell_integral() {} /// Tabulate the tensor for the contribution from a local cell virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// exterior facet tensor corresponding to the local contribution to /// a form from the integral over an exterior facet.
class exterior_facet_integral: public integral { public: /// Destructor virtual ~exterior_facet_integral() {} /// Tabulate the tensor for the contribution from a local exterior facet virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t facet, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// interior facet tensor corresponding to the local contribution to /// a form from the integral over an interior facet. class interior_facet_integral: public integral { public: /// Destructor virtual ~interior_facet_integral() {} /// Tabulate the tensor for the contribution from a local interior facet virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs_0, const double * coordinate_dofs_1, std::size_t facet_0, std::size_t facet_1, int cell_orientation_0, int cell_orientation_1) const = 0; }; /// This class defines the interface for the tabulation of /// an expression evaluated at exactly one point. class vertex_integral: public integral { public: /// Constructor vertex_integral() {} /// Destructor virtual ~vertex_integral() {} /// Tabulate the tensor for the contribution from the local vertex virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t vertex, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// tensor corresponding to the local contribution to a form from /// the integral over a custom domain defined in terms of a set of /// quadrature points and weights. class custom_integral: public integral { public: /// Constructor custom_integral() {} /// Destructor virtual ~custom_integral() {}; /// Return the number of cells involved in evaluation of the integral virtual std::size_t num_cells() const = 0; /// Tabulate the tensor for the contribution from a custom domain virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// tensor corresponding to the local contribution to a form from /// the integral over a cut cell defined in terms of a set of /// quadrature points and weights. class cutcell_integral: public integral { public: /// Constructor cutcell_integral() {} /// Destructor virtual ~cutcell_integral() {}; /// Tabulate the tensor for the contribution from a cutcell domain virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// tensor corresponding to the local contribution to a form from /// the integral over a cut cell defined in terms of a set of /// quadrature points and weights. 
class interface_integral: public integral { public: /// Constructor interface_integral() {} /// Destructor virtual ~interface_integral() {}; /// Tabulate the tensor for the contribution from an interface domain virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, const double * facet_normals, int cell_orientation) const = 0; }; /// This class defines the interface for the tabulation of the /// tensor corresponding to the local contribution to a form from /// the integral over the overlapped portion of a cell defined in /// terms of a set of quadrature points and weights. class overlap_integral: public integral { public: /// Constructor overlap_integral() {} /// Destructor virtual ~overlap_integral() {}; /// Tabulate the tensor for the contribution from an overlap domain virtual void tabulate_tensor(double * A, const double * const * w, const double * coordinate_dofs, std::size_t num_quadrature_points, const double * quadrature_points, const double * quadrature_weights, int cell_orientation) const = 0; }; /// This class defines the interface for the assembly of the global /// tensor corresponding to a form with r + n arguments, that is, a /// mapping /// /// a : V1 x V2 x ... Vr x W1 x W2 x ... x Wn -> R /// /// with arguments v1, v2, ..., vr, w1, w2, ..., wn. The rank r /// global tensor A is defined by /// /// A = a(V1, V2, ..., Vr, w1, w2, ..., wn), /// /// where each argument Vj represents the application to the /// sequence of basis functions of Vj and w1, w2, ..., wn are given /// fixed functions (coefficients). class form { public: /// Destructor virtual ~form() {} /// Return a string identifying the form virtual const char * signature() const = 0; /// Return the rank of the global tensor (r) virtual std::size_t rank() const = 0; /// Return the number of coefficients (n) virtual std::size_t num_coefficients() const = 0; /// Return original coefficient position for each coefficient /// /// @param i /// Coefficient number, 0 <= i < n /// virtual std::size_t original_coefficient_position(std::size_t i) const = 0; /// Create a new finite element for parameterization of coordinates virtual finite_element * create_coordinate_finite_element() const = 0; /// Create a new dofmap for parameterization of coordinates virtual dofmap * create_coordinate_dofmap() const = 0; /// Create a new coordinate mapping virtual coordinate_mapping * create_coordinate_mapping() const = 0; /// Create a new finite element for argument function 0 <= i < r+n /// /// @param i /// Argument number if 0 <= i < r /// Coefficient number j=i-r if r+j <= i < r+n /// virtual finite_element * create_finite_element(std::size_t i) const = 0; /// Create a new dofmap for argument function 0 <= i < r+n /// /// @param i /// Argument number if 0 <= i < r /// Coefficient number j=i-r if r+j <= i < r+n /// virtual dofmap * create_dofmap(std::size_t i) const = 0; /// Return the upper bound on subdomain ids for cell integrals virtual std::size_t max_cell_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for exterior facet integrals virtual std::size_t max_exterior_facet_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for interior facet integrals virtual std::size_t max_interior_facet_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for vertex integrals virtual std::size_t max_vertex_subdomain_id() const = 0; /// Return the 
upper bound on subdomain ids for custom integrals virtual std::size_t max_custom_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for cutcell integrals virtual std::size_t max_cutcell_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for interface integrals virtual std::size_t max_interface_subdomain_id() const = 0; /// Return the upper bound on subdomain ids for overlap integrals virtual std::size_t max_overlap_subdomain_id() const = 0; /// Return whether form has any cell integrals virtual bool has_cell_integrals() const = 0; /// Return whether form has any exterior facet integrals virtual bool has_exterior_facet_integrals() const = 0; /// Return whether form has any interior facet integrals virtual bool has_interior_facet_integrals() const = 0; /// Return whether form has any vertex integrals virtual bool has_vertex_integrals() const = 0; /// Return whether form has any custom integrals virtual bool has_custom_integrals() const = 0; /// Return whether form has any cutcell integrals virtual bool has_cutcell_integrals() const = 0; /// Return whether form has any interface integrals virtual bool has_interface_integrals() const = 0; /// Return whether form has any overlap integrals virtual bool has_overlap_integrals() const = 0; /// Create a new cell integral on sub domain subdomain_id virtual cell_integral * create_cell_integral(std::size_t subdomain_id) const = 0; /// Create a new exterior facet integral on sub domain subdomain_id virtual exterior_facet_integral * create_exterior_facet_integral(std::size_t subdomain_id) const = 0; /// Create a new interior facet integral on sub domain subdomain_id virtual interior_facet_integral * create_interior_facet_integral(std::size_t subdomain_id) const = 0; /// Create a new vertex integral on sub domain subdomain_id virtual vertex_integral * create_vertex_integral(std::size_t subdomain_id) const = 0; /// Create a new custom integral on sub domain subdomain_id virtual custom_integral * create_custom_integral(std::size_t subdomain_id) const = 0; /// Create a new cutcell integral on sub domain subdomain_id virtual cutcell_integral * create_cutcell_integral(std::size_t subdomain_id) const = 0; /// Create a new interface integral on sub domain subdomain_id virtual interface_integral * create_interface_integral(std::size_t subdomain_id) const = 0; /// Create a new overlap integral on sub domain subdomain_id virtual overlap_integral * create_overlap_integral(std::size_t subdomain_id) const = 0; /// Create a new cell integral on everywhere else virtual cell_integral * create_default_cell_integral() const = 0; /// Create a new exterior facet integral on everywhere else virtual exterior_facet_integral * create_default_exterior_facet_integral() const = 0; /// Create a new interior facet integral on everywhere else virtual interior_facet_integral * create_default_interior_facet_integral() const = 0; /// Create a new vertex integral on everywhere else virtual vertex_integral * create_default_vertex_integral() const = 0; /// Create a new custom integral on everywhere else virtual custom_integral * create_default_custom_integral() const = 0; /// Create a new cutcell integral on everywhere else virtual cutcell_integral * create_default_cutcell_integral() const = 0; /// Create a new interface integral on everywhere else virtual interface_integral * create_default_interface_integral() const = 0; /// Create a new overlap integral on everywhere else virtual overlap_integral * create_default_overlap_integral() const = 0; }; } 
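// ---------------------------------------------------------------------------
// Illustrative usage sketch (editorial example; not part of the UFC
// specification declared above). It shows how an assembler typically drives
// the form and integral interfaces: check whether the form has cell
// integrals, create the default cell integral and, if one exists, tabulate
// the local element tensor for a single cell. The helper name
// example_tabulate_default_cell_tensor is hypothetical, and the caller is
// assumed to have already packed the coefficient values w and the cell's
// coordinate dofs; only the ufc::form and ufc::cell_integral member calls are
// taken from the interfaces above.
inline void example_tabulate_default_cell_tensor(const ufc::form& a,
                                                 double* A,
                                                 const double* const* w,
                                                 const double* coordinate_dofs,
                                                 int cell_orientation)
{
  // Nothing to do if the form has no cell integrals at all
  if (!a.has_cell_integrals())
    return;

  // The default integral applies to cells not assigned to a numbered
  // subdomain; numbered subdomains are handled separately via
  // a.create_cell_integral(subdomain_id) with
  // subdomain_id < a.max_cell_subdomain_id().
  ufc::cell_integral* integral = a.create_default_cell_integral();
  if (integral)
  {
    // A must point to caller-allocated storage for the local element tensor;
    // its size follows from the dofmaps created by a.create_dofmap(i) for
    // i = 0, ..., rank() - 1. The caller owns objects returned by the
    // create_*() factory functions, hence the explicit delete.
    integral->tabulate_tensor(A, w, coordinate_dofs, cell_orientation);
    delete integral;
  }
}
// ---------------------------------------------------------------------------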
#endif ffc-2019.1.0.post0/ffc/backends/ufc/ufc_geometry.h000066400000000000000000001244441346026536000215110ustar00rootroot00000000000000// This file provides utility functions for computing geometric quantities. // This code is released into the public domain. // // The FEniCS Project (http://www.fenicsproject.org/) 2013-2015. #ifndef __UFC_GEOMETRY_H #define __UFC_GEOMETRY_H #include <cmath> // TODO: Wrap all in namespace ufc //namespace ufc //{ /// A note regarding data structures. All matrices are represented as /// row-major flattened raw C++ arrays. Benchmarks indicate that when /// optimization (-O1 and up) is used, the following conditions hold: /// /// 1. std::vector is just as fast as raw C++ arrays for indexing. /// /// 2. Flattened arrays are twice as fast as nested arrays, both for /// std::vector and raw C++ arrays. /// /// 3. Defining an array by 'std::vector<double> x(n)', where n is a /// literal, leads to dynamic allocation and results in significant /// slowdowns compared to the definition 'double x[n]'. /// /// The conclusion is that we should use flattened raw C++ arrays in /// the interfaces for these utility functions, since some of the /// arrays passed to these functions (in particular Jacobians) are /// created inside the generated functions (tabulate_tensor). Note /// that an std::vector<double> x may also be passed as raw pointer by &x[0]. // TODO: Should signatures of compute___d match for each foo? // On one hand the snippets use different quantities, on the other // some consistency is nice to simplify the code generation. // Currently only the arguments that are actually used are included. /// --- Some fixed numbers by name for readability in this file --- // TODO: Use these numbers where relevant below to make this file more self documenting // (namespaced using UFC_ in the names because they collide with variables in other libraries) // Use this for array dimensions indexed by local vertex number #define UFC_NUM_VERTICES_IN_INTERVAL 2 #define UFC_NUM_VERTICES_IN_TRIANGLE 3 #define UFC_NUM_VERTICES_IN_TETRAHEDRON 4 #define UFC_NUM_VERTICES_IN_QUADRILATERAL 4 #define UFC_NUM_VERTICES_IN_HEXAHEDRON 8 // Use this for array dimensions indexed by local edge number #define UFC_NUM_EDGES_IN_INTERVAL 1 #define UFC_NUM_EDGES_IN_TRIANGLE 3 #define UFC_NUM_EDGES_IN_TETRAHEDRON 6 #define UFC_NUM_EDGES_IN_QUADRILATERAL 4 #define UFC_NUM_EDGES_IN_HEXAHEDRON 12 // Use this for array dimensions indexed by local facet number #define UFC_NUM_FACETS_IN_INTERVAL 2 #define UFC_NUM_FACETS_IN_TRIANGLE 3 #define UFC_NUM_FACETS_IN_TETRAHEDRON 4 #define UFC_NUM_FACETS_IN_QUADRILATERAL 4 #define UFC_NUM_FACETS_IN_HEXAHEDRON 6 // Use UFC_GDIM_N to show the intention that the geometric dimension is N #define UFC_GDIM_1 1 #define UFC_GDIM_2 2 #define UFC_GDIM_3 3 // Use UFC_TDIM_N to show the intention that the topological dimension is N #define UFC_TDIM_1 1 #define UFC_TDIM_2 2 #define UFC_TDIM_3 3 /// --- Local reference cell coordinates by UFC conventions --- static const double interval_vertices[UFC_NUM_VERTICES_IN_INTERVAL][UFC_TDIM_1] = { {0.0}, {1.0} }; static const double triangle_vertices[UFC_NUM_VERTICES_IN_TRIANGLE][UFC_TDIM_2] = { {0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0} }; static const double tetrahedron_vertices[UFC_NUM_VERTICES_IN_TETRAHEDRON][UFC_TDIM_3] = { {0.0, 0.0, 0.0}, {1.0, 0.0, 0.0}, {0.0, 1.0, 0.0}, {0.0, 0.0, 1.0} }; static const double quadrilateral_vertices[UFC_NUM_VERTICES_IN_QUADRILATERAL][UFC_TDIM_2] = { {0.0, 0.0}, {0.0, 1.0}, {1.0, 0.0}, {1.0, 1.0}, }; static const double
hexahedron_vertices[UFC_NUM_VERTICES_IN_HEXAHEDRON][UFC_TDIM_3] = { {0.0, 0.0, 0.0}, {0.0, 0.0, 1.0}, {0.0, 1.0, 0.0}, {0.0, 1.0, 1.0}, {1.0, 0.0, 0.0}, {1.0, 0.0, 1.0}, {1.0, 1.0, 0.0}, {1.0, 1.0, 1.0}, }; /// --- Local reference cell midpoint by UFC conventions --- static const double interval_midpoint[UFC_TDIM_1] = { 0.5 }; static const double triangle_midpoint[UFC_TDIM_2] = { 1.0/3.0, 1.0/3.0 }; static const double tetrahedron_midpoint[UFC_TDIM_3] = { 0.25, 0.25, 0.25 }; static const double quadrilateral_midpoint[UFC_TDIM_2] = { 0.5, 0.5 }; static const double hexahedron_midpoint[UFC_TDIM_3] = { 0.5, 0.5, 0.5 }; /// --- Local reference cell facet midpoints by UFC conventions --- static const double interval_facet_midpoint[UFC_NUM_FACETS_IN_INTERVAL][UFC_TDIM_1] = { {0.0}, {1.0} }; static const double triangle_facet_midpoint[UFC_NUM_FACETS_IN_TRIANGLE][UFC_TDIM_2] = { {0.5, 0.5}, {0.0, 0.5}, {0.5, 0.0} }; static const double tetrahedron_facet_midpoint[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_TDIM_3] = { {0.5, 0.5, 0.5}, {0.0, 1.0/3.0, 1.0/3.0}, {1.0/3.0, 0.0, 1.0/3.0}, {1.0/3.0, 1.0/3.0, 0.0}, }; static const double quadrilateral_facet_midpoint[UFC_NUM_FACETS_IN_QUADRILATERAL][UFC_TDIM_2] = { {0.0, 0.5}, {1.0, 0.5}, {0.5, 0.0}, {0.5, 1.0}, }; static const double hexahedron_facet_midpoint[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_TDIM_3] = { {0.0, 0.5, 0.5}, {1.0, 0.5, 0.5}, {0.5, 0.0, 0.5}, {0.5, 1.0, 0.5}, {0.5, 0.5, 0.0}, {0.5, 0.5, 1.0}, }; /// --- Local reference cell facet orientations by UFC conventions --- static const double interval_facet_orientations[UFC_NUM_FACETS_IN_INTERVAL] = { -1.0, +1.0, }; static const double triangle_facet_orientations[UFC_NUM_FACETS_IN_TRIANGLE] = { +1.0, -1.0, +1.0 }; static const double tetrahedron_facet_orientations[UFC_NUM_FACETS_IN_TETRAHEDRON] = { +1.0, -1.0, +1.0, -1.0 }; // FIXME: Insert quad conventions here /* static const double quadrilateral_facet_orientations[UFC_NUM_FACETS_IN_QUADRILATERAL] = { +1.0, -1.0, -1.0, +1.0, }; */ // FIXME: Insert quad conventions here /* static const double hexahedron_facet_orientations[UFC_NUM_FACETS_IN_HEXAHEDRON] = { +1.0, -1.0, -1.0, +1.0, +1.0, -1.0, }; */ /// --- Local reference cell entity relations by UFC conventions --- static const unsigned int triangle_edge_vertices[UFC_NUM_EDGES_IN_TRIANGLE][2] = { {1, 2}, {0, 2}, {0, 1} }; static const unsigned int tetrahedron_edge_vertices[UFC_NUM_EDGES_IN_TETRAHEDRON][2] = { {2, 3}, {1, 3}, {1, 2}, {0, 3}, {0, 2}, {0, 1} }; static const unsigned int quadrilateral_edge_vertices[UFC_NUM_EDGES_IN_QUADRILATERAL][2] = { {0, 1}, {2, 3}, {0, 2}, {1, 3}, }; static const unsigned int hexahedron_edge_vertices[UFC_NUM_EDGES_IN_HEXAHEDRON][2] = { {0, 1}, {2, 3}, {4, 5}, {6, 7}, {0, 2}, {1, 3}, {4, 6}, {5, 7}, {0, 4}, {1, 5}, {2, 6}, {3, 7}, }; /// --- Local reference cell entity relations by UFC conventions --- static const unsigned int interval_facet_vertices[UFC_NUM_FACETS_IN_INTERVAL][1] = { {0}, {1} }; static const unsigned int triangle_facet_vertices[UFC_NUM_FACETS_IN_TRIANGLE][UFC_NUM_VERTICES_IN_INTERVAL] = { {1, 2}, {0, 2}, {0, 1} }; static const unsigned int tetrahedron_facet_vertices[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_NUM_VERTICES_IN_TRIANGLE] = { {1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2} }; static const unsigned int tetrahedron_facet_edge_vertices[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_NUM_FACETS_IN_TRIANGLE][UFC_NUM_VERTICES_IN_INTERVAL] = { {{2, 3}, {1, 3}, {1, 2}}, {{2, 3}, {0, 3}, {0, 2}}, {{1, 3}, {0, 3}, {0, 1}}, {{1, 2}, {0, 2}, {0, 1}} }; static const unsigned int 
quadrilateral_facet_vertices[UFC_NUM_FACETS_IN_QUADRILATERAL][UFC_NUM_VERTICES_IN_INTERVAL] = { {0, 1}, {2, 3}, {0, 2}, {1, 3}, }; static const unsigned int hexahedron_facet_vertices[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_NUM_VERTICES_IN_QUADRILATERAL] = { {0, 1, 2, 3}, {4, 5, 6, 7}, {0, 1, 4, 5}, {2, 3, 6, 7}, {0, 2, 4, 6}, {1, 3, 5, 7}, }; static const unsigned int hexahedron_facet_edge_vertices[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_NUM_FACETS_IN_QUADRILATERAL][UFC_NUM_VERTICES_IN_INTERVAL] = { {{0, 1}, {2, 3}, {0, 2}, {1, 3}}, {{4, 5}, {6, 7}, {4, 6}, {5, 7}}, {{0, 1}, {4, 5}, {0, 4}, {1, 5}}, {{2, 3}, {6, 7}, {2, 6}, {3, 7}}, {{0, 2}, {4, 6}, {0, 4}, {2, 6}}, {{1, 3}, {5, 7}, {1, 5}, {3, 7}}, }; /// --- Reference cell edge vectors by UFC conventions (edge vertex 1 - edge vertex 0 for each edge in cell) --- static const double triangle_reference_edge_vectors[UFC_NUM_EDGES_IN_TRIANGLE][UFC_TDIM_2] = { {-1.0, 1.0}, { 0.0, 1.0}, { 1.0, 0.0}, }; static const double tetrahedron_reference_edge_vectors[UFC_NUM_EDGES_IN_TETRAHEDRON][UFC_TDIM_3] = { { 0.0, -1.0, 1.0}, {-1.0, 0.0, 1.0}, {-1.0, 1.0, 0.0}, { 0.0, 0.0, 1.0}, { 0.0, 1.0, 0.0}, { 1.0, 0.0, 0.0}, }; // Edge vectors for each triangle facet of a tetrahedron static const double tetrahedron_facet_reference_edge_vectors[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_NUM_EDGES_IN_TRIANGLE][UFC_TDIM_3] = { { // facet 0 { 0.0, -1.0, 1.0}, {-1.0, 0.0, 1.0}, {-1.0, 1.0, 0.0}, }, { // facet 1 { 0.0, -1.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 1.0, 0.0}, }, { // facet 2 {-1.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 1.0, 0.0, 0.0}, }, { // facet 3 {-1.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 1.0, 0.0, 0.0}, }, }; static const double quadrilateral_reference_edge_vectors[UFC_NUM_EDGES_IN_QUADRILATERAL][UFC_TDIM_2] = { { 0.0, 1.0}, { 0.0, 1.0}, { 1.0, 0.0}, { 1.0, 0.0}, }; static const double hexahedron_reference_edge_vectors[UFC_NUM_EDGES_IN_HEXAHEDRON][UFC_TDIM_3] = { { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, }; // Edge vectors for each quadrilateral facet of a hexahedron static const double hexahedron_facet_reference_edge_vectors[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_NUM_EDGES_IN_QUADRILATERAL][UFC_TDIM_3] = { { // facet 0 { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, }, { // facet 1 { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, }, { // facet 2 { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, }, { // facet 3 { 0.0, 0.0, 1.0}, { 0.0, 0.0, 1.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, }, { // facet 4 { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, }, { // facet 5 { 0.0, 1.0, 0.0}, { 0.0, 1.0, 0.0}, { 1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, }, }; /// --- Reference cell facet normals by UFC conventions (outwards pointing on reference cell) --- static const double interval_reference_facet_normals[UFC_NUM_FACETS_IN_INTERVAL][UFC_TDIM_1] = { {-1.0}, {+1.0}, }; static const double triangle_reference_facet_normals[UFC_NUM_FACETS_IN_TRIANGLE][UFC_TDIM_2] = { { 0.7071067811865476, 0.7071067811865476 }, {-1.0, 0.0}, { 0.0, -1.0}, }; static const double tetrahedron_reference_facet_normals[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_TDIM_3] = { {0.5773502691896258, 0.5773502691896258, 0.5773502691896258}, {-1.0, 0.0, 0.0}, { 0.0, -1.0, 0.0}, { 0.0, 0.0, -1.0}, }; static const double 
quadrilateral_reference_facet_normals[UFC_NUM_FACETS_IN_QUADRILATERAL][UFC_TDIM_2] = { { -1.0, 0.0 }, { 1.0, 0.0 }, { 0.0, -1.0 }, { 0.0, 1.0 }, }; static const double hexahedron_reference_facet_normals[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_TDIM_3] = { { -1.0, 0.0, 0.0}, { 1.0, 0.0, 0.0}, { 0.0, -1.0, 0.0}, { 0.0, 1.0, 0.0}, { 0.0, 0.0, -1.0}, { 0.0, 0.0, 1.0}, }; /// --- Reference cell volumes by UFC conventions --- static const double interval_reference_cell_volume = 1.0; static const double triangle_reference_cell_volume = 0.5; static const double tetrahedron_reference_cell_volume = 1.0/6.0; static const double quadrilateral_reference_cell_volume = 1.0; static const double hexahedron_reference_cell_volume = 1.0; static const double interval_reference_facet_volume = 1.0; static const double triangle_reference_facet_volume = 1.0; static const double tetrahedron_reference_facet_volume = 0.5; static const double quadrilateral_reference_facet_volume = 1.0; static const double hexahedron_reference_facet_volume = 1.0; /// --- Jacobians of reference facet cell to reference cell coordinate mappings by UFC conventions --- static const double triangle_reference_facet_jacobian[UFC_NUM_FACETS_IN_TRIANGLE][UFC_TDIM_2][UFC_TDIM_2-1] = { { {-1.0}, { 1.0} }, { { 0.0}, { 1.0} }, { { 1.0}, { 0.0} }, }; static const double tetrahedron_reference_facet_jacobian[UFC_NUM_FACETS_IN_TETRAHEDRON][UFC_TDIM_3][UFC_TDIM_3-1] = { { {-1.0, -1.0}, {1.0, 0.0}, {0.0, 1.0} }, { { 0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0} }, { { 1.0, 0.0}, {0.0, 0.0}, {0.0, 1.0} }, { { 1.0, 0.0}, {0.0, 1.0}, {0.0, 0.0} }, }; static const double quadrilateral_reference_facet_jacobian[UFC_NUM_FACETS_IN_QUADRILATERAL][UFC_TDIM_2][UFC_TDIM_2-1] = { { { 0.0}, { 1.0} }, { { 0.0}, { 1.0} }, { { 1.0}, { 0.0} }, { { 1.0}, { 0.0} }, }; static const double hexahedron_reference_facet_jacobian[UFC_NUM_FACETS_IN_HEXAHEDRON][UFC_TDIM_3][UFC_TDIM_3-1] = { { { 0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0} }, { { 0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0} }, { { 1.0, 0.0}, {0.0, 0.0}, {0.0, 1.0} }, { { 1.0, 0.0}, {0.0, 0.0}, {0.0, 1.0} }, { { 1.0, 0.0}, {0.0, 1.0}, {0.0, 0.0} }, { { 1.0, 0.0}, {0.0, 1.0}, {0.0, 0.0} }, }; /// --- Coordinate mappings from reference facet cell to reference cell by UFC conventions --- inline void compute_reference_facet_to_reference_cell_coordinates_interval(double Xc[UFC_TDIM_1], unsigned int facet) { switch (facet) { case 0: Xc[0] = 0.0; break; case 1: Xc[0] = 1.0; break; }; } inline void compute_reference_facet_to_reference_cell_coordinates_triangle(double Xc[UFC_TDIM_2], const double Xf[UFC_TDIM_2-1], unsigned int facet) { switch (facet) { case 0: Xc[0] = 1.0 - Xf[0]; Xc[1] = Xf[0]; break; case 1: Xc[0] = 0.0; Xc[1] = Xf[0]; break; case 2: Xc[0] = Xf[0]; Xc[1] = 0.0; break; }; } inline void compute_reference_facet_to_reference_cell_coordinates_tetrahedron(double Xc[UFC_TDIM_3], const double Xf[UFC_TDIM_3-1], unsigned int facet) { switch (facet) { case 0: Xc[0] = 1.0 - Xf[0] - Xf[1]; Xc[1] = Xf[0]; Xc[2] = Xf[1]; break; case 1: Xc[0] = 0.0; Xc[1] = Xf[0]; Xc[2] = Xf[1]; break; case 2: Xc[0] = Xf[0]; Xc[1] = 0.0; Xc[2] = Xf[1]; break; case 3: Xc[0] = Xf[0]; Xc[1] = Xf[1]; Xc[2] = 0.0; break; }; } ///--- Computation of Jacobian matrices --- /// Compute Jacobian J for interval embedded in R^1 inline void compute_jacobian_interval_1d(double J[UFC_GDIM_1*UFC_TDIM_1], const double coordinate_dofs[2]) { J[0] = coordinate_dofs[1] - coordinate_dofs[0]; } /// Compute Jacobian J for interval embedded in R^2 inline void compute_jacobian_interval_2d(double 
J[UFC_GDIM_2*UFC_TDIM_1], const double coordinate_dofs[4]) { J[0] = coordinate_dofs[2] - coordinate_dofs[0]; J[1] = coordinate_dofs[3] - coordinate_dofs[1]; } /// Compute Jacobian J for interval embedded in R^3 inline void compute_jacobian_interval_3d(double J[UFC_GDIM_3*UFC_TDIM_1], const double coordinate_dofs[6]) { J[0] = coordinate_dofs[3] - coordinate_dofs[0]; J[1] = coordinate_dofs[4] - coordinate_dofs[1]; J[2] = coordinate_dofs[5] - coordinate_dofs[2]; } /// Compute Jacobian J for triangle embedded in R^2 inline void compute_jacobian_triangle_2d(double J[UFC_GDIM_2*UFC_TDIM_2], const double coordinate_dofs[6]) { J[0] = coordinate_dofs[2] - coordinate_dofs[0]; J[1] = coordinate_dofs[4] - coordinate_dofs[0]; J[2] = coordinate_dofs[3] - coordinate_dofs[1]; J[3] = coordinate_dofs[5] - coordinate_dofs[1]; } /// Compute Jacobian J for triangle embedded in R^3 inline void compute_jacobian_triangle_3d(double J[UFC_GDIM_3*UFC_TDIM_2], const double coordinate_dofs[9]) { J[0] = coordinate_dofs[3] - coordinate_dofs[0]; J[1] = coordinate_dofs[6] - coordinate_dofs[0]; J[2] = coordinate_dofs[4] - coordinate_dofs[1]; J[3] = coordinate_dofs[7] - coordinate_dofs[1]; J[4] = coordinate_dofs[5] - coordinate_dofs[2]; J[5] = coordinate_dofs[8] - coordinate_dofs[2]; } /// Compute Jacobian J for tetrahedron embedded in R^3 inline void compute_jacobian_tetrahedron_3d(double J[UFC_GDIM_3*UFC_TDIM_3], const double coordinate_dofs[12]) { J[0] = coordinate_dofs[3] - coordinate_dofs[0]; J[1] = coordinate_dofs[6] - coordinate_dofs[0]; J[2] = coordinate_dofs[9] - coordinate_dofs[0]; J[3] = coordinate_dofs[4] - coordinate_dofs[1]; J[4] = coordinate_dofs[7] - coordinate_dofs[1]; J[5] = coordinate_dofs[10] - coordinate_dofs[1]; J[6] = coordinate_dofs[5] - coordinate_dofs[2]; J[7] = coordinate_dofs[8] - coordinate_dofs[2]; J[8] = coordinate_dofs[11] - coordinate_dofs[2]; } //--- Computation of Jacobian inverses --- // TODO: Remove this when ffc is updated to use the NEW ones below /// Compute Jacobian inverse K for interval embedded in R^1 inline void compute_jacobian_inverse_interval_1d(double* K, double& det, const double* J) { det = J[0]; K[0] = 1.0 / det; } /// Compute Jacobian (pseudo)inverse K for interval embedded in R^2 inline void compute_jacobian_inverse_interval_2d(double* K, double& det, const double* J) { const double det2 = J[0]*J[0] + J[1]*J[1]; det = std::sqrt(det2); K[0] = J[0] / det2; K[1] = J[1] / det2; } /// Compute Jacobian (pseudo)inverse K for interval embedded in R^3 inline void compute_jacobian_inverse_interval_3d(double* K, double& det, const double* J) { // TODO: Move computation of det to a separate function, det is often needed when K is not const double det2 = J[0]*J[0] + J[1]*J[1] + J[2]*J[2]; det = std::sqrt(det2); K[0] = J[0] / det2; K[1] = J[1] / det2; K[2] = J[2] / det2; } /// Compute Jacobian inverse K for triangle embedded in R^2 inline void compute_jacobian_inverse_triangle_2d(double* K, double& det, const double* J) { det = J[0]*J[3] - J[1]*J[2]; K[0] = J[3] / det; K[1] = -J[1] / det; K[2] = -J[2] / det; K[3] = J[0] / det; } /// Compute Jacobian (pseudo)inverse K for triangle embedded in R^3 inline void compute_jacobian_inverse_triangle_3d(double* K, double& det, const double* J) { const double d_0 = J[2]*J[5] - J[4]*J[3]; const double d_1 = J[4]*J[1] - J[0]*J[5]; const double d_2 = J[0]*J[3] - J[2]*J[1]; const double c_0 = J[0]*J[0] + J[2]*J[2] + J[4]*J[4]; const double c_1 = J[1]*J[1] + J[3]*J[3] + J[5]*J[5]; const double c_2 = J[0]*J[1] + J[2]*J[3] + J[4]*J[5]; const double den 
= c_0*c_1 - c_2*c_2; const double det2 = d_0*d_0 + d_1*d_1 + d_2*d_2; det = std::sqrt(det2); K[0] = (J[0]*c_1 - J[1]*c_2) / den; K[1] = (J[2]*c_1 - J[3]*c_2) / den; K[2] = (J[4]*c_1 - J[5]*c_2) / den; K[3] = (J[1]*c_0 - J[0]*c_2) / den; K[4] = (J[3]*c_0 - J[2]*c_2) / den; K[5] = (J[5]*c_0 - J[4]*c_2) / den; } /// Compute Jacobian inverse K for tetrahedron embedded in R^3 inline void compute_jacobian_inverse_tetrahedron_3d(double* K, double& det, const double* J) { const double d_00 = J[4]*J[8] - J[5]*J[7]; const double d_01 = J[5]*J[6] - J[3]*J[8]; const double d_02 = J[3]*J[7] - J[4]*J[6]; const double d_10 = J[2]*J[7] - J[1]*J[8]; const double d_11 = J[0]*J[8] - J[2]*J[6]; const double d_12 = J[1]*J[6] - J[0]*J[7]; const double d_20 = J[1]*J[5] - J[2]*J[4]; const double d_21 = J[2]*J[3] - J[0]*J[5]; const double d_22 = J[0]*J[4] - J[1]*J[3]; det = J[0]*d_00 + J[3]*d_10 + J[6]*d_20; K[0] = d_00 / det; K[1] = d_10 / det; K[2] = d_20 / det; K[3] = d_01 / det; K[4] = d_11 / det; K[5] = d_21 / det; K[6] = d_02 / det; K[7] = d_12 / det; K[8] = d_22 / det; } //--- NEW Computation of Jacobian (sub)determinants --- /// Compute Jacobian determinant for interval embedded in R^1 inline void compute_jacobian_determinants_interval_1d(double & det, const double J[UFC_GDIM_1*UFC_TDIM_1]) { det = J[0]; } /// Compute Jacobian (pseudo)determinants for interval embedded in R^2 inline void compute_jacobian_determinants_interval_2d(double & det2, double & det, const double J[UFC_GDIM_2*UFC_TDIM_1]) { det2 = J[0]*J[0] + J[1]*J[1]; det = std::sqrt(det2); } /// Compute Jacobian (pseudo)determinants for interval embedded in R^3 inline void compute_jacobian_determinants_interval_3d(double & det2, double & det, const double J[UFC_GDIM_3*UFC_TDIM_1]) { det2 = J[0]*J[0] + J[1]*J[1] + J[2]*J[2]; det = std::sqrt(det2); } /// Compute Jacobian determinant for triangle embedded in R^2 inline void compute_jacobian_determinants_triangle_2d(double & det, const double J[UFC_GDIM_2*UFC_TDIM_2]) { det = J[0]*J[3] - J[1]*J[2]; } /// Compute Jacobian (pseudo)determinants for triangle embedded in R^3 inline void compute_jacobian_determinants_triangle_3d(double & den, double & det2, double & det, double c[3], const double J[UFC_GDIM_3*UFC_TDIM_2]) { const double d_0 = J[2]*J[5] - J[4]*J[3]; const double d_1 = J[4]*J[1] - J[0]*J[5]; const double d_2 = J[0]*J[3] - J[2]*J[1]; c[0] = J[0]*J[0] + J[2]*J[2] + J[4]*J[4]; c[1] = J[1]*J[1] + J[3]*J[3] + J[5]*J[5]; c[2] = J[0]*J[1] + J[2]*J[3] + J[4]*J[5]; den = c[0]*c[1] - c[2]*c[2]; det2 = d_0*d_0 + d_1*d_1 + d_2*d_2; det = std::sqrt(det2); } /// Compute Jacobian determinants for tetrahedron embedded in R^3 inline void compute_jacobian_determinants_tetrahedron_3d(double & det, double d[9], const double J[UFC_GDIM_3*UFC_TDIM_3]) { d[0*3 + 0] = J[4]*J[8] - J[5]*J[7]; d[0*3 + 1] = J[5]*J[6] - J[3]*J[8]; d[0*3 + 2] = J[3]*J[7] - J[4]*J[6]; d[1*3 + 0] = J[2]*J[7] - J[1]*J[8]; d[1*3 + 1] = J[0]*J[8] - J[2]*J[6]; d[1*3 + 2] = J[1]*J[6] - J[0]*J[7]; d[2*3 + 0] = J[1]*J[5] - J[2]*J[4]; d[2*3 + 1] = J[2]*J[3] - J[0]*J[5]; d[2*3 + 2] = J[0]*J[4] - J[1]*J[3]; det = J[0]*d[0*3 + 0] + J[3]*d[1*3 + 0] + J[6]*d[2*3 + 0]; } //--- NEW Computation of Jacobian inverses --- /// Compute Jacobian inverse K for interval embedded in R^1 inline void new_compute_jacobian_inverse_interval_1d(double K[UFC_TDIM_1*UFC_GDIM_1], double det) { K[0] = 1.0 / det; } /// Compute Jacobian (pseudo)inverse K for interval embedded in R^2 inline void new_compute_jacobian_inverse_interval_2d(double K[UFC_TDIM_1*UFC_GDIM_2], double 
det2, const double J[UFC_GDIM_2*UFC_TDIM_1]) { K[0] = J[0] / det2; K[1] = J[1] / det2; } /// Compute Jacobian (pseudo)inverse K for interval embedded in R^3 inline void new_compute_jacobian_inverse_interval_3d(double K[UFC_TDIM_1*UFC_GDIM_3], double det2, const double J[UFC_GDIM_3*UFC_TDIM_1]) { K[0] = J[0] / det2; K[1] = J[1] / det2; K[2] = J[2] / det2; } /// Compute Jacobian inverse K for triangle embedded in R^2 inline void new_compute_jacobian_inverse_triangle_2d(double K[UFC_TDIM_2*UFC_GDIM_2], double det, const double J[UFC_GDIM_2*UFC_TDIM_2]) { K[0] = J[3] / det; K[1] = -J[1] / det; K[2] = -J[2] / det; K[3] = J[0] / det; } /// Compute Jacobian (pseudo)inverse K for triangle embedded in R^3 inline void new_compute_jacobian_inverse_triangle_3d(double K[UFC_TDIM_2*UFC_GDIM_3], double den, const double c[3], const double J[UFC_GDIM_3*UFC_TDIM_2]) { K[0] = (J[0]*c[1] - J[1]*c[2]) / den; K[1] = (J[2]*c[1] - J[3]*c[2]) / den; K[2] = (J[4]*c[1] - J[5]*c[2]) / den; K[3] = (J[1]*c[0] - J[0]*c[2]) / den; K[4] = (J[3]*c[0] - J[2]*c[2]) / den; K[5] = (J[5]*c[0] - J[4]*c[2]) / den; } /// Compute Jacobian inverse K for tetrahedron embedded in R^3 inline void new_compute_jacobian_inverse_tetrahedron_3d(double K[UFC_TDIM_3*UFC_GDIM_3], double det, const double d[9]) { K[0] = d[0*3 + 0] / det; K[1] = d[1*3 + 0] / det; K[2] = d[2*3 + 0] / det; K[3] = d[0*3 + 1] / det; K[4] = d[1*3 + 1] / det; K[5] = d[2*3 + 1] / det; K[6] = d[0*3 + 2] / det; K[7] = d[1*3 + 2] / det; K[8] = d[2*3 + 2] / det; } // --- Computation of edge, face, facet scaling factors /// Compute edge scaling factors for triangle embedded in R^2 inline void compute_edge_scaling_factors_triangle_2d(double dx[2], const double coordinate_dofs[6], std::size_t facet) { // Get vertices on edge const unsigned int v0 = triangle_facet_vertices[facet][0]; const unsigned int v1 = triangle_facet_vertices[facet][1]; // Compute scale factor (length of edge scaled by length of reference interval) dx[0] = coordinate_dofs[2*v1 + 0] - coordinate_dofs[2*v0 + 0]; dx[1] = coordinate_dofs[2*v1 + 1] - coordinate_dofs[2*v0 + 1]; } /// Compute facet scaling factor for triangle embedded in R^2 inline void compute_facet_scaling_factor_triangle_2d(double & det, const double dx[2]) { det = std::sqrt(dx[0]*dx[0] + dx[1]*dx[1]); } /// Compute edge scaling factors for triangle embedded in R^3 inline void compute_edge_scaling_factors_triangle_3d(double dx[3], const double coordinate_dofs[9], std::size_t facet) { // Get vertices on edge const unsigned int v0 = triangle_facet_vertices[facet][0]; const unsigned int v1 = triangle_facet_vertices[facet][1]; // Compute scale factor (length of edge scaled by length of reference interval) dx[0] = coordinate_dofs[3*v1 + 0] - coordinate_dofs[3*v0 + 0]; dx[1] = coordinate_dofs[3*v1 + 1] - coordinate_dofs[3*v0 + 1]; dx[2] = coordinate_dofs[3*v1 + 2] - coordinate_dofs[3*v0 + 2]; } /// Compute facet scaling factor for triangle embedded in R^3 inline void compute_facet_scaling_factor_triangle_3d(double & det, const double dx[3]) { det = std::sqrt(dx[0]*dx[0] + dx[1]*dx[1] + dx[2]*dx[2]); } /// Compute face scaling factors for tetrahedron embedded in R^3 inline void compute_face_scaling_factors_tetrahedron_3d(double a[3], const double coordinate_dofs[12], std::size_t facet) { // Get vertices on face const unsigned int v0 = tetrahedron_facet_vertices[facet][0]; const unsigned int v1 = tetrahedron_facet_vertices[facet][1]; const unsigned int v2 = tetrahedron_facet_vertices[facet][2]; // Compute scale factor (area of face scaled by area of 
reference triangle) a[0] = (coordinate_dofs[3*v0 + 1]*coordinate_dofs[3*v1 + 2] + coordinate_dofs[3*v0 + 2]*coordinate_dofs[3*v2 + 1] + coordinate_dofs[3*v1 + 1]*coordinate_dofs[3*v2 + 2]) - (coordinate_dofs[3*v2 + 1]*coordinate_dofs[3*v1 + 2] + coordinate_dofs[3*v2 + 2]*coordinate_dofs[3*v0 + 1] + coordinate_dofs[3*v1 + 1]*coordinate_dofs[3*v0 + 2]); a[1] = (coordinate_dofs[3*v0 + 2]*coordinate_dofs[3*v1 + 0] + coordinate_dofs[3*v0 + 0]*coordinate_dofs[3*v2 + 2] + coordinate_dofs[3*v1 + 2]*coordinate_dofs[3*v2 + 0]) - (coordinate_dofs[3*v2 + 2]*coordinate_dofs[3*v1 + 0] + coordinate_dofs[3*v2 + 0]*coordinate_dofs[3*v0 + 2] + coordinate_dofs[3*v1 + 2]*coordinate_dofs[3*v0 + 0]); a[2] = (coordinate_dofs[3*v0 + 0]*coordinate_dofs[3*v1 + 1] + coordinate_dofs[3*v0 + 1]*coordinate_dofs[3*v2 + 0] + coordinate_dofs[3*v1 + 0]*coordinate_dofs[3*v2 + 1]) - (coordinate_dofs[3*v2 + 0]*coordinate_dofs[3*v1 + 1] + coordinate_dofs[3*v2 + 1]*coordinate_dofs[3*v0 + 0] + coordinate_dofs[3*v1 + 0]*coordinate_dofs[3*v0 + 1]); } /// Compute facet scaling factor for tetrahedron embedded in R^3 inline void compute_facet_scaling_factor_tetrahedron_3d(double & det, const double a[3]) { det = std::sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]); } ///--- Compute facet normal directions --- /// Compute facet direction for interval embedded in R^1 inline void compute_facet_normal_direction_interval_1d(bool & direction, const double coordinate_dofs[2], std::size_t facet) { direction = facet == 0 ? coordinate_dofs[0] > coordinate_dofs[1] : coordinate_dofs[1] > coordinate_dofs[0]; } /// Compute facet direction for triangle embedded in R^2 inline void compute_facet_normal_direction_triangle_2d(bool & direction, const double coordinate_dofs[6], const double dx[2], std::size_t facet) { const unsigned int v0 = triangle_facet_vertices[facet][0]; direction = dx[1]*(coordinate_dofs[2*facet ] - coordinate_dofs[2*v0 ]) - dx[0]*(coordinate_dofs[2*facet + 1] - coordinate_dofs[2*v0 + 1]) < 0; } /// Compute facet direction for tetrahedron embedded in R^3 inline void compute_facet_normal_direction_tetrahedron_3d(bool & direction, const double coordinate_dofs[9], const double a[3], std::size_t facet) { const unsigned int v0 = tetrahedron_facet_vertices[facet][0]; direction = a[0]*(coordinate_dofs[3*facet ] - coordinate_dofs[3*v0 ]) + a[1]*(coordinate_dofs[3*facet + 1] - coordinate_dofs[3*v0 + 1]) + a[2]*(coordinate_dofs[3*facet + 2] - coordinate_dofs[3*v0 + 2]) < 0; } ///--- Compute facet normal vectors --- /// Compute facet normal for interval embedded in R^1 inline void compute_facet_normal_interval_1d(double n[UFC_GDIM_1], bool direction) { // Facet normals are 1.0 or -1.0: (-1.0) <-- X------X --> (1.0) n[0] = direction ? 
1.0 : -1.0; } /// Compute facet normal for interval embedded in R^2 inline void compute_facet_normal_interval_2d(double n[UFC_GDIM_2], const double coordinate_dofs[4], std::size_t facet) { if (facet == 0) { n[0] = coordinate_dofs[0] - coordinate_dofs[2]; n[1] = coordinate_dofs[1] - coordinate_dofs[3]; } else { n[0] = coordinate_dofs[2] - coordinate_dofs[0]; n[1] = coordinate_dofs[3] - coordinate_dofs[1]; } const double n_length = std::sqrt(n[0]*n[0] + n[1]*n[1]); n[0] /= n_length; n[1] /= n_length; } /// Compute facet normal for interval embedded in R^3 inline void compute_facet_normal_interval_3d(double n[UFC_GDIM_3], const double coordinate_dofs[6], std::size_t facet) { if (facet == 0) { n[0] = coordinate_dofs[0] - coordinate_dofs[3]; n[1] = coordinate_dofs[1] - coordinate_dofs[4]; n[2] = coordinate_dofs[2] - coordinate_dofs[5]; } else { n[0] = coordinate_dofs[3] - coordinate_dofs[0]; n[1] = coordinate_dofs[4] - coordinate_dofs[1]; n[2] = coordinate_dofs[5] - coordinate_dofs[2]; } const double n_length = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]); n[0] /= n_length; n[1] /= n_length; n[2] /= n_length; } /// Compute facet normal for triangle embedded in R^2 inline void compute_facet_normal_triangle_2d(double n[UFC_GDIM_2], const double dx[2], const double det, bool direction) { // Compute facet normals from the facet scale factor constants n[0] = direction ? dx[1] / det : -dx[1] / det; n[1] = direction ? -dx[0] / det : dx[0] / det; } /// Compute facet normal for triangle embedded in R^3 inline void compute_facet_normal_triangle_3d(double n[UFC_GDIM_3], const double coordinate_dofs[6], std::size_t facet) { // Compute facet normal for triangles in 3D const unsigned int vertex0 = facet; // Get coordinates corresponding the vertex opposite this const unsigned int vertex1 = triangle_facet_vertices[facet][0]; const unsigned int vertex2 = triangle_facet_vertices[facet][1]; // Define vectors n = (p2 - p0) and t = normalized (p2 - p1) n[0] = coordinate_dofs[3*vertex2 + 0] - coordinate_dofs[3*vertex0 + 0]; n[1] = coordinate_dofs[3*vertex2 + 1] - coordinate_dofs[3*vertex0 + 1]; n[2] = coordinate_dofs[3*vertex2 + 2] - coordinate_dofs[3*vertex0 + 2]; double t0 = coordinate_dofs[3*vertex2 + 0] - coordinate_dofs[3*vertex1 + 0]; double t1 = coordinate_dofs[3*vertex2 + 1] - coordinate_dofs[3*vertex1 + 1]; double t2 = coordinate_dofs[3*vertex2 + 2] - coordinate_dofs[3*vertex1 + 2]; const double t_length = std::sqrt(t0*t0 + t1*t1 + t2*t2); t0 /= t_length; t1 /= t_length; t2 /= t_length; // Subtract, the projection of (p2 - p0) onto (p2 - p1), from (p2 - p0) const double ndott = t0*n[0] + t1*n[1] + t2*n[2]; n[0] -= ndott*t0; n[1] -= ndott*t1; n[2] -= ndott*t2; const double n_length = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]); // Normalize n[0] /= n_length; n[1] /= n_length; n[2] /= n_length; } /// Compute facet normal for tetrahedron embedded in R^3 inline void compute_facet_normal_tetrahedron_3d(double n[UFC_GDIM_3], const double a[3], const double det, bool direction) { // Compute facet normals from the facet scale factor constants n[0] = direction ? a[0] / det : -a[0] / det; n[1] = direction ? a[1] / det : -a[1] / det; n[2] = direction ?
a[2] / det : -a[2] / det; } ///--- Compute circumradius --- /// Compute circumradius for interval embedded in R^1 inline void compute_circumradius_interval_1d(double & circumradius, double volume) { // Compute circumradius; in 1D it is equal to half the cell length circumradius = volume / 2.0; } /// Compute circumradius for interval embedded in R^2 inline void compute_circumradius_interval_2d(double & circumradius, double volume) { // Compute circumradius of interval in 2D (1/2 volume) circumradius = volume / 2.0; } /// Compute circumradius for interval embedded in R^3 inline void compute_circumradius_interval_3d(double & circumradius, double volume) { // Compute circumradius of interval in 3D (1/2 volume) circumradius = volume / 2.0; } /// Compute circumradius for triangle embedded in R^2 inline void compute_circumradius_triangle_2d(double & circumradius, const double coordinate_dofs[6], const double J[UFC_GDIM_2*UFC_TDIM_2], double volume) { // Compute circumradius of triangle in 2D const double v1v2 = std::sqrt( (coordinate_dofs[4] - coordinate_dofs[2])*(coordinate_dofs[4] - coordinate_dofs[2]) + (coordinate_dofs[5] - coordinate_dofs[3])*(coordinate_dofs[5] - coordinate_dofs[3]) ); const double v0v2 = std::sqrt(J[3]*J[3] + J[1]*J[1]); const double v0v1 = std::sqrt(J[0]*J[0] + J[2]*J[2]); circumradius = 0.25*(v1v2*v0v2*v0v1) / volume; } /// Compute circumradius for triangle embedded in R^3 inline void compute_circumradius_triangle_3d(double & circumradius, const double coordinate_dofs[9], const double J[UFC_GDIM_3*UFC_TDIM_2], double volume) { // Compute circumradius of triangle in 3D const double v1v2 = std::sqrt( (coordinate_dofs[6] - coordinate_dofs[3])*(coordinate_dofs[6] - coordinate_dofs[3]) + (coordinate_dofs[7] - coordinate_dofs[4])*(coordinate_dofs[7] - coordinate_dofs[4]) + (coordinate_dofs[8] - coordinate_dofs[5])*(coordinate_dofs[8] - coordinate_dofs[5])); const double v0v2 = std::sqrt( J[3]*J[3] + J[1]*J[1] + J[5]*J[5]); const double v0v1 = std::sqrt( J[0]*J[0] + J[2]*J[2] + J[4]*J[4]); circumradius = 0.25*(v1v2*v0v2*v0v1) / volume; } /// Compute circumradius for tetrahedron embedded in R^3 inline void compute_circumradius_tetrahedron_3d(double & circumradius, const double coordinate_dofs[12], const double J[UFC_GDIM_3*UFC_TDIM_3], double volume) { // Compute circumradius const double v1v2 = std::sqrt( (coordinate_dofs[6] - coordinate_dofs[3])*(coordinate_dofs[6] - coordinate_dofs[3]) + (coordinate_dofs[7] - coordinate_dofs[4])*(coordinate_dofs[7] - coordinate_dofs[4]) + (coordinate_dofs[8] - coordinate_dofs[5])*(coordinate_dofs[8] - coordinate_dofs[5]) ); const double v0v2 = std::sqrt(J[1]*J[1] + J[4]*J[4] + J[7]*J[7]); const double v0v1 = std::sqrt(J[0]*J[0] + J[3]*J[3] + J[6]*J[6]); const double v0v3 = std::sqrt(J[2]*J[2] + J[5]*J[5] + J[8]*J[8]); const double v1v3 = std::sqrt( (coordinate_dofs[ 9] - coordinate_dofs[3])*(coordinate_dofs[ 9] - coordinate_dofs[3]) + (coordinate_dofs[10] - coordinate_dofs[4])*(coordinate_dofs[10] - coordinate_dofs[4]) + (coordinate_dofs[11] - coordinate_dofs[5])*(coordinate_dofs[11] - coordinate_dofs[5]) ); const double v2v3 = std::sqrt( (coordinate_dofs[ 9] - coordinate_dofs[6])*(coordinate_dofs[ 9] - coordinate_dofs[6]) + (coordinate_dofs[10] - coordinate_dofs[7])*(coordinate_dofs[10] - coordinate_dofs[7]) + (coordinate_dofs[11] - coordinate_dofs[8])*(coordinate_dofs[11] - coordinate_dofs[8]) ); const double la = v1v2*v0v3; const double lb = v0v2*v1v3; const double lc = v0v1*v2v3; const double s = 0.5*(la+lb+lc); const double area = 
std::sqrt(s*(s-la)*(s-lb)*(s-lc)); circumradius = area / (6.0*volume); } ///--- Compute min facet edge lengths --- /// Compute min edge length in facet of tetrahedron embedded in R^3 inline void compute_min_facet_edge_length_tetrahedron_3d(double & min_edge_length, unsigned int facet, const double coordinate_dofs[3*4]) { // TODO: Extract compute_facet_edge_lengths_tetrahedron_3d(), reuse between min/max functions double edge_lengths_sqr[3]; for (unsigned int edge = 0; edge < 3; ++edge) { const unsigned int vertex0 = tetrahedron_facet_edge_vertices[facet][edge][0]; const unsigned int vertex1 = tetrahedron_facet_edge_vertices[facet][edge][1]; edge_lengths_sqr[edge] = (coordinate_dofs[3*vertex1 + 0] - coordinate_dofs[3*vertex0 + 0])*(coordinate_dofs[3*vertex1 + 0] - coordinate_dofs[3*vertex0 + 0]) + (coordinate_dofs[3*vertex1 + 1] - coordinate_dofs[3*vertex0 + 1])*(coordinate_dofs[3*vertex1 + 1] - coordinate_dofs[3*vertex0 + 1]) + (coordinate_dofs[3*vertex1 + 2] - coordinate_dofs[3*vertex0 + 2])*(coordinate_dofs[3*vertex1 + 2] - coordinate_dofs[3*vertex0 + 2]); } min_edge_length = std::sqrt(std::min(std::min(edge_lengths_sqr[0], edge_lengths_sqr[1]), edge_lengths_sqr[2])); } ///--- Compute max facet edge lengths --- /// Compute max edge length in facet of tetrahedron embedded in R^3 inline void compute_max_facet_edge_length_tetrahedron_3d(double & max_edge_length, unsigned int facet, const double coordinate_dofs[12]) { // TODO: Extract compute_facet_edge_lengths_tetrahedron_3d(), reuse between min/max functions double edge_lengths_sqr[3]; for (unsigned int edge = 0; edge < 3; ++edge) { const unsigned int vertex0 = tetrahedron_facet_edge_vertices[facet][edge][0]; const unsigned int vertex1 = tetrahedron_facet_edge_vertices[facet][edge][1]; edge_lengths_sqr[edge] = (coordinate_dofs[3*vertex1 + 0] - coordinate_dofs[3*vertex0 + 0])*(coordinate_dofs[3*vertex1 + 0] - coordinate_dofs[3*vertex0 + 0]) + (coordinate_dofs[3*vertex1 + 1] - coordinate_dofs[3*vertex0 + 1])*(coordinate_dofs[3*vertex1 + 1] - coordinate_dofs[3*vertex0 + 1]) + (coordinate_dofs[3*vertex1 + 2] - coordinate_dofs[3*vertex0 + 2])*(coordinate_dofs[3*vertex1 + 2] - coordinate_dofs[3*vertex0 + 2]); } max_edge_length = std::sqrt(std::max(std::max(edge_lengths_sqr[0], edge_lengths_sqr[1]), edge_lengths_sqr[2])); } //} // TODO: Wrap all in namespace ufc #endif ffc-2019.1.0.post0/ffc/classname.py000066400000000000000000000024741346026536000166340ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This module defines some basics for generating C++ code." # Copyright (C) 2009-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see <http://www.gnu.org/licenses/>. # # Modified by Kristian B. Oelgaard 2011 # Modified by Marie E.
Rognes 2010 # Modified by Martin Sandve Alnæs 2013-2017 # Modified by Chris Richardson 2017 # ufc class names def make_classname(prefix, basename, signature): pre = prefix.lower() + "_" if prefix else "" sig = str(signature).lower() return "%s%s_%s" % (pre, basename, sig) def make_integral_classname(prefix, integral_type, form_id, subdomain_id): basename = "%s_integral_%s" % (integral_type, str(form_id).lower()) return make_classname(prefix, basename, subdomain_id) ffc-2019.1.0.post0/ffc/codegeneration.py000066400000000000000000000253571346026536000176650ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Compiler stage 4: Code generation --------------------------------- This module implements the generation of C++ code for the body of each UFC function from an (optimized) intermediate representation (OIR). """ # Copyright (C) 2009-2017 Anders Logg, Martin Sandve Alnæs, Marie E. Rognes, # Kristian B. Oelgaard, and others # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from itertools import chain from ufl import product # FFC modules from ffc.log import info, begin, end, debug_code, dstr # FFC code generation modules from ffc.representation import pick_representation, ufc_integral_types import ffc.uflacs.language.cnodes as L from ffc.uflacs.language.format_lines import format_indented_lines from ffc.uflacs.backends.ufc.utils import generate_error from ffc.uflacs.backends.ufc.generators import ufc_integral, ufc_finite_element, ufc_dofmap, ufc_coordinate_mapping, ufc_form def generate_code(ir, parameters): "Generate code from intermediate representation." 
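    # NOTE: The result is the 6-tuple
    #   (code_finite_elements, code_dofmaps, code_coordinate_mappings,
    #    code_integrals, code_forms, includes)
    # which is later unpacked by ffc.formatting.format_code.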
begin("Compiler stage 4: Generating code") full_ir = ir # FIXME: This has global side effects # Set code generation parameters # set_float_formatting(parameters["precision"]) # set_exception_handling(parameters["convert_exceptions_to_warnings"]) # Extract representations ir_finite_elements, ir_dofmaps, ir_coordinate_mappings, ir_integrals, ir_forms = ir # Generate code for finite_elements info("Generating code for %d finite_element(s)" % len(ir_finite_elements)) code_finite_elements = [_generate_finite_element_code(ir, parameters) for ir in ir_finite_elements] # Generate code for dofmaps info("Generating code for %d dofmap(s)" % len(ir_dofmaps)) code_dofmaps = [_generate_dofmap_code(ir, parameters) for ir in ir_dofmaps] # Generate code for coordinate_mappings info("Generating code for %d coordinate_mapping(s)" % len(ir_coordinate_mappings)) code_coordinate_mappings = [_generate_coordinate_mapping_code(ir, parameters) for ir in ir_coordinate_mappings] # Generate code for integrals info("Generating code for integrals") code_integrals = [_generate_integral_code(ir, parameters) for ir in ir_integrals] # Generate code for forms info("Generating code for forms") code_forms = [_generate_form_code(ir, parameters) for ir in ir_forms] # Extract additional includes includes = _extract_includes(full_ir, code_integrals) end() return (code_finite_elements, code_dofmaps, code_coordinate_mappings, code_integrals, code_forms, includes) def _extract_includes(full_ir, code_integrals): ir_finite_elements, ir_dofmaps, ir_coordinate_mappings, ir_integrals, ir_forms = full_ir # Includes added by representations includes = set() for code in code_integrals: includes.update(code["additional_includes_set"]) # Includes for dependencies in jit mode jit = any(full_ir[i][j]["jit"] for i in range(len(full_ir)) for j in range(len(full_ir[i]))) if jit: dep_includes = set() for ir in ir_finite_elements: dep_includes.update(_finite_element_jit_includes(ir)) for ir in ir_dofmaps: dep_includes.update(_dofmap_jit_includes(ir)) for ir in ir_coordinate_mappings: dep_includes.update(_coordinate_mapping_jit_includes(ir)) #for ir in ir_integrals: # dep_includes.update(_integral_jit_includes(ir)) for ir in ir_forms: dep_includes.update(_form_jit_includes(ir)) includes.update(['#include "%s"' % inc for inc in dep_includes]) return includes def _finite_element_jit_includes(ir): classnames = ir["create_sub_element"] postfix = "_finite_element" return [classname.rpartition(postfix)[0] + ".h" for classname in classnames] def _dofmap_jit_includes(ir): classnames = ir["create_sub_dofmap"] postfix = "_dofmap" return [classname.rpartition(postfix)[0] + ".h" for classname in classnames] def _coordinate_mapping_jit_includes(ir): classnames = [ ir["coordinate_finite_element_classname"], ir["scalar_coordinate_finite_element_classname"] ] postfix = "_finite_element" return [classname.rpartition(postfix)[0] + ".h" for classname in classnames] def _form_jit_includes(ir): # Gather all header names for classes that are separately compiled # For finite_element and dofmap the module and header name is the prefix, # extracted here with .split, and equal for both classes so we skip dofmap here: classnames = list(chain( ir["create_finite_element"], ir["create_coordinate_finite_element"] )) postfix = "_finite_element" includes = [classname.rpartition(postfix)[0] + ".h" for classname in classnames] classnames = ir["create_coordinate_mapping"] postfix = "_coordinate_mapping" includes += [classname.rpartition(postfix)[0] + ".h" for classname in classnames] 
return includes tt_timing_template = """ // Initialize timing variables static const std::size_t _tperiod = 10000; static std::size_t _tcount = 0; static auto _tsum = std::chrono::nanoseconds::zero(); static auto _tavg_best = std::chrono::nanoseconds::max(); static auto _tmin = std::chrono::nanoseconds::max(); static auto _tmax = std::chrono::nanoseconds::min(); // Measure single kernel time auto _before = std::chrono::high_resolution_clock::now(); { // Begin original kernel %s } // End original kernel // Measure single kernel time auto _after = std::chrono::high_resolution_clock::now(); // Update time stats const std::chrono::seconds _s(1); auto _tsingle = _after - _before; ++_tcount; _tsum += _tsingle; _tmin = std::min(_tmin, _tsingle); _tmax = std::max(_tmax, _tsingle); if (_tcount %% _tperiod == 0 || _tsum > _s) { // Record best average across batches std::chrono::nanoseconds _tavg = _tsum / _tcount; if (_tavg_best > _tavg) _tavg_best = _tavg; // Convert to ns auto _tot_ns = std::chrono::duration_cast(_tsum).count(); auto _avg_ns = std::chrono::duration_cast(_tavg).count(); auto _min_ns = std::chrono::duration_cast(_tmin).count(); auto _max_ns = std::chrono::duration_cast(_tmax).count(); auto _avg_best_ns = std::chrono::duration_cast(_tavg_best).count(); // Print report std::cout << "FFC tt time:" << " avg_best = " << _avg_best_ns << " ns," << " avg = " << _avg_ns << " ns," << " min = " << _min_ns << " ns," << " max = " << _max_ns << " ns," << " tot = " << _tot_ns << " ns," << " n = " << _tcount << std::endl; // Reset statistics for next batch _tcount = 0; _tsum = std::chrono::nanoseconds(0); _tmin = std::chrono::nanoseconds::max(); _tmax = std::chrono::nanoseconds::min(); } """ def _generate_tabulate_tensor_comment(ir, parameters): "Generate comment for tabulate_tensor." r = ir["representation"] integrals_metadata = ir["integrals_metadata"] integral_metadata = ir["integral_metadata"] comment = [ L.Comment("This function was generated using '%s' representation" % r), L.Comment("with the following integrals metadata:"), L.Comment(""), L.Comment("\n".join(dstr(integrals_metadata).split("\n")[:-1])) ] for i, metadata in enumerate(integral_metadata): comment += [ L.Comment(""), L.Comment("and the following integral %d metadata:" % i), L.Comment(""), L.Comment("\n".join(dstr(metadata).split("\n")[:-1])) ] return format_indented_lines(L.StatementList(comment).cs_format(0), 1) def _generate_integral_code(ir, parameters): "Generate code for integrals from intermediate representation." # Select representation r = pick_representation(ir["representation"]) # Generate code # TODO: Drop prefix argument and get from ir: code = r.generate_integral_code(ir, ir["prefix"], parameters) # Hack for benchmarking overhead in assembler with empty # tabulate_tensor if parameters["generate_dummy_tabulate_tensor"]: code["tabulate_tensor"] = "" # Wrapping tabulate_tensor in a timing snippet for benchmarking if parameters["add_tabulate_tensor_timing"]: code["tabulate_tensor"] = tt_timing_template % code["tabulate_tensor"] code["additional_includes_set"] = code.get("additional_includes_set", set()) code["additional_includes_set"].add("#include ") code["additional_includes_set"].add("#include ") # Generate comment code["tabulate_tensor_comment"] = _generate_tabulate_tensor_comment(ir, parameters) return code # TODO: Replace the above with this, currently something is not working def _new_generate_integral_code(ir, parameters): "Generate code for integrals from intermediate representation." 
return ufc_integral(ir["integral_type"]).generate_snippets(L, ir, parameters) def _generate_finite_element_code(ir, parameters): "Generate code for finite_element from intermediate representation." return ufc_finite_element().generate_snippets(L, ir, parameters) def _generate_dofmap_code(ir, parameters): "Generate code for dofmap from intermediate representation." return ufc_dofmap().generate_snippets(L, ir, parameters) def _generate_coordinate_mapping_code(ir, parameters): "Generate code for coordinate_mapping from intermediate representation." return ufc_coordinate_mapping().generate_snippets(L, ir, parameters) def _generate_form_code(ir, parameters): "Generate code for coordinate_mapping from intermediate representation." return ufc_form().generate_snippets(L, ir, parameters) ffc-2019.1.0.post0/ffc/compiler.py000066400000000000000000000176451346026536000165120ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This is the compiler, acting as the main interface for compilation of forms and breaking the compilation into several sequential stages. The output of each stage is the input of the next stage. Compiler stage 0: Language, parsing ----------------------------------- Input: Python code or .ufl file Output: UFL form This stage consists of parsing and expressing a form in the UFL form language. This stage is completely handled by UFL. Compiler stage 1: Analysis -------------------------- Input: UFL form Output: Preprocessed UFL form and FormData (metadata) This stage preprocesses the UFL form and extracts form metadata. It may also perform simplifications on the form. Compiler stage 2: Code representation ------------------------------------- Input: Preprocessed UFL form and FormData (metadata) Output: Intermediate Representation (IR) This stage examines the input and generates all data needed for code generation. This includes generation of finite element basis functions, extraction of data for mapping of degrees of freedom and possible precomputation of integrals. Most of the complexity of compilation is handled in this stage. The IR is stored as a dictionary, mapping names of UFC functions to data needed for generation of the corresponding code. Compiler stage 3: Optimization ------------------------------ Input: Intermediate Representation (IR) Output: Optimized Intermediate Representation (OIR) This stage examines the IR and performs optimizations. Optimization is currently disabled as a separate stage but is implemented as part of the code generation for quadrature representation. Compiler stage 4: Code generation --------------------------------- Input: Optimized Intermediate Representation (OIR) Output: C++ code This stage examines the OIR and generates the actual C++ code for the body of each UFC function. The code is stored as a dictionary, mapping names of UFC functions to strings containing the C++ code of the body of each function. Compiler stage 5: Code formatting --------------------------------- Input: C++ code Output: C++ code files This stage examines the generated C++ code and formats it according to the UFC format, generating as output one or more .h/.cpp files conforming to the UFC format. The main interface is defined by the following two functions: compile_form compile_element The compiler stages are implemented by the following functions: analyze_forms or analyze_elements (stage 1) compute_ir (stage 2) optimize_ir (stage 3) generate_code (stage 4) format_code (stage 5) """ # Copyright (C) 2007-2017 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Kristian B. Oelgaard, 2010. # Modified by Dag Lindbo, 2008. # Modified by Garth N. Wells, 2009. # Modified by Martin Sandve Alnæs, 2013-2017 __all__ = ["compile_form", "compile_element"] # Python modules from time import time import os import ufl # FFC modules from ffc.log import info, info_green, warning, error from ffc.parameters import validate_parameters from ffc.analysis import analyze_ufl_objects from ffc.representation import compute_ir from ffc.optimization import optimize_ir from ffc.codegeneration import generate_code from ffc.formatting import format_code from ffc.wrappers import generate_wrapper_code def _print_timing(stage, timing): "Print timing results." info("Compiler stage %s finished in %g seconds.\n" % (str(stage), timing)) def compile_form(forms, object_names=None, prefix="Form", parameters=None, jit=False): """This function generates UFC code for a given UFL form or list of UFL forms.""" return compile_ufl_objects(forms, "form", object_names, prefix, parameters, jit) def compile_element(elements, object_names=None, prefix="Element", parameters=None, jit=False): """This function generates UFC code for a given UFL element or list of UFL elements.""" return compile_ufl_objects(elements, "element", object_names, prefix, parameters, jit) def compile_coordinate_mapping(meshes, object_names=None, prefix="Mesh", parameters=None, jit=False): """This function generates UFC code for a given UFL mesh or list of UFL meshes.""" return compile_ufl_objects(meshes, "coordinate_mapping", object_names, prefix, parameters, jit) def compile_ufl_objects(ufl_objects, kind, object_names=None, prefix=None, parameters=None, jit=False): """This function generates UFC code for a given UFL form or list of UFL forms.""" info("Compiling %s %s\n" % (kind, prefix)) # Reset timing cpu_time_0 = time() # Note that jit will always pass validated parameters so # this is only for commandline and direct call from python if not jit: parameters = validate_parameters(parameters) # Check input arguments if not isinstance(ufl_objects, (list, tuple)): ufl_objects = (ufl_objects,) if not ufl_objects: return "", "" if prefix != os.path.basename(prefix): error("Invalid prefix, looks like a full path? 
prefix='{}'.".format(prefix)) if object_names is None: object_names = {} # Stage 1: analysis cpu_time = time() analysis = analyze_ufl_objects(ufl_objects, kind, parameters) _print_timing(1, time() - cpu_time) # Stage 2: intermediate representation cpu_time = time() ir = compute_ir(analysis, prefix, parameters, jit) _print_timing(2, time() - cpu_time) # Stage 3: optimization cpu_time = time() oir = optimize_ir(ir, parameters) _print_timing(3, time() - cpu_time) # Stage 4: code generation cpu_time = time() code = generate_code(oir, parameters) _print_timing(4, time() - cpu_time) # Stage 4.1: generate wrappers cpu_time = time() wrapper_code = generate_wrapper_code(analysis, prefix, object_names, parameters) _print_timing(4.1, time() - cpu_time) # Stage 5: format code cpu_time = time() code_h, code_c = format_code(code, wrapper_code, prefix, parameters, jit) _print_timing(5, time() - cpu_time) info_green("FFC finished in %g seconds.", time() - cpu_time_0) if jit: # Must use processed elements from analysis here form_datas, unique_elements, element_numbers, unique_coordinate_elements = analysis # Wrap coordinate elements in Mesh object to represent that # we want a ufc::coordinate_mapping not a ufc::finite_element unique_meshes = [ufl.Mesh(element, ufl_id=0) for element in unique_coordinate_elements] # Avoid returning self as dependency for infinite recursion unique_elements = tuple(element for element in unique_elements if element not in ufl_objects) unique_meshes = tuple(mesh for mesh in unique_meshes if mesh not in ufl_objects) # Setup dependencies (these will be jitted before continuing to compile ufl_objects) dependent_ufl_objects = { "element": unique_elements, "coordinate_mapping": unique_meshes, } return code_h, code_c, dependent_ufl_objects else: return code_h, code_c ffc-2019.1.0.post0/ffc/errorcontrol/000077500000000000000000000000001346026536000170435ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/errorcontrol/__init__.py000066400000000000000000000004611346026536000211550ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This module contains functionality for working with automated goal-oriented error control. In particular it offers the following function: compile_with_error_control - Compile forms and generate error control forms """ from .errorcontrol import compile_with_error_control ffc-2019.1.0.post0/ffc/errorcontrol/errorcontrol.py000066400000000000000000000126751346026536000221620ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This module provides compilation of forms required for goal-oriented error control """ # Copyright (C) 2010 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
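
# --- Illustrative usage sketch (not part of the original sources) ----------
# A minimal example of driving the compiler stages orchestrated above by
# ffc.compiler.compile_ufl_objects() through its public entry point
# compile_form(). The Poisson form, element and prefix below are assumptions
# chosen purely for illustration.
def _example_compile_poisson_form():
    import ufl
    from ffc.compiler import compile_form

    element = ufl.FiniteElement("Lagrange", ufl.triangle, 1)
    u = ufl.TrialFunction(element)
    v = ufl.TestFunction(element)
    a = ufl.inner(ufl.grad(u), ufl.grad(v)) * ufl.dx

    # Runs stages 1-5 (analysis, representation, optimization,
    # code generation, formatting) and returns the generated
    # header/implementation strings.
    code_h, code_c = compile_form(a, prefix="Poisson")
    return code_h, code_c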
from ufl.utils.sorting import sorted_by_key from ufl import Coefficient from ffc.log import error from ffc.compiler import compile_form __all__ = ["compile_with_error_control"] def compile_with_error_control(forms, object_names, reserved_objects, prefix, parameters): """ Compile forms and additionally generate and compile forms required for performing goal-oriented error control For linear problems, the input forms should be a bilinear form (a) and a linear form (L) specifying the variational problem and additionally a linear form (M) specifying the goal functional. For nonlinear problems, the input should be linear form (F) and a functional (M) specifying the goal functional. *Arguments* forms (tuple) Three (linear case) or two (nonlinear case) forms specifying the primal problem and the goal object_names (dict) Map from object ids to object names reserved_names (dict) Map from reserved object names to object ids prefix (string) Basename of header file parameters (dict) Parameters for form compilation """ # Check input arguments F, M, u = prepare_input_arguments(forms, object_names, reserved_objects) # Generate forms to be used for the error control from ffc.errorcontrol.errorcontrolgenerators import UFLErrorControlGenerator generator = UFLErrorControlGenerator(F, M, u) ec_forms = generator.generate_all_error_control_forms() # Check that there are no conflicts between user defined and # generated names ec_names = generator.ec_names if set(object_names.values()) & set(ec_names.values()): comment = "%s are reserved error control names." % str(sorted(ec_names.values())) error("Conflict between user defined and generated names: %s" % comment) # Add names generated for error control to object_names for (objid, name) in sorted_by_key(ec_names): object_names[objid] = name # Compile error control and input (pde + goal) forms as normal forms = generator.primal_forms() code_h, code_c = compile_form(ec_forms + forms, object_names, prefix, parameters) return code_h, code_c def prepare_input_arguments(forms, object_names, reserved_objects): """ Extract required input arguments to UFLErrorControlGenerator. *Arguments* forms (tuple) Three (linear case) or two (nonlinear case) forms specifying the primal problem and the goal object_names (dict) Map from object ids to object names reserved_names (dict) Map from reserved object names to object ids *Returns* tuple (of length 3) containing Form or tuple A single linear form or a tuple of a bilinear and a linear form Form A linear form or a functional for the goal functional Coefficient The coefficient considered as the unknown """ # Check that we get a tuple of forms if not isinstance(forms, (list, tuple)): error("Expecting tuple of forms, got %s" % str(forms)) def __is_nonlinear(forms): return len(forms) == 2 def __is_linear(forms): return len(forms) == 3 # Extract Coefficient labelled as 'unknown' u = reserved_objects.get("unknown", None) if __is_nonlinear(forms): # Check that unknown is defined if u is None: error("Can't extract 'unknown'. 
The Coefficient representing the unknown must be labelled by 'unknown' for nonlinear problems.") (F, M) = forms # Check that forms have the expected rank assert len(F.arguments()) == 1 assert len(M.arguments()) == 0 # Return primal, goal and unknown return (F, M, u) elif __is_linear(forms): # Throw error if unknown is given, don't quite know what to do # with this case yet if u is not None: error("'unknown' defined: not implemented for linear problems") (a, L, M) = forms # Check that forms have the expected rank arguments = a.arguments() assert len(arguments) == 2 assert len(L.arguments()) == 1 assert len(M.arguments()) == 1 # Standard case: create default Coefficient in trial space and # label it __discrete_primal_solution V = arguments[1].ufl_function_space() u = Coefficient(V) object_names[id(u)] = "__discrete_primal_solution" return ((a, L), M, u) else: error("Wrong input tuple length: got %s, expected 2 or 3-tuple" % str(forms)) ffc-2019.1.0.post0/ffc/errorcontrol/errorcontrolgenerators.py000066400000000000000000000247061346026536000242520ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ This module provides an abstract ErrorControlGenerator class for generating forms required for goal-oriented error control and a realization of this: UFLErrorControlGenerator for handling pure UFL forms. """ # Copyright (C) 2010 Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from ufl import inner, dx, ds, dS, avg, replace, action __all__ = ["ErrorControlGenerator", "UFLErrorControlGenerator"] class ErrorControlGenerator: def __init__(self, module, F, M, u): """ *Arguments* module (Python module) The module to use for specific form manipulations (typically ufl or dolfin) F (tuple or Form) tuple of (bilinear, linear) forms or linear form M (Form) functional or linear form u (Coefficient) The coefficient considered as the unknown. 
""" # Store module self.module = module # Store solution Coefficient/Function self.u = u # Extract the lhs (bilinear form), rhs (linear form), goal # (functional), weak residual (linear form) linear_case = (isinstance(F, (tuple, list)) and len(F) == 2) if linear_case: self.lhs, self.rhs = F try: self.goal = action(M, u) except Exception: self.goal = M # Allow functionals as input as well self.weak_residual = self.rhs - action(self.lhs, u) else: self.lhs = self.module.derivative(F, u) self.rhs = F self.goal = M self.weak_residual = - F # At least check that final forms have correct rank assert(len(self.lhs.arguments()) == 2) assert(len(self.rhs.arguments()) == 1) assert(len(self.goal.arguments()) == 0) assert(len(self.weak_residual.arguments()) == 1) # Get the domain self.domain = self.weak_residual.ufl_domain() # Store map from identifiers to names for forms and generated # coefficients self.ec_names = {} # Use predefined names for the forms in the primal problem self.ec_names[id(self.lhs)] = "lhs" self.ec_names[id(self.rhs)] = "rhs" self.ec_names[id(self.goal)] = "goal" # Initialize other required data self.initialize_data() def initialize_data(self): """ Initialize specific data """ msg = """ErrorControlGenerator acts as an abstract class. Subclasses must overload the initialize_data() method and provide a certain set of variables. See UFLErrorControlGenerator for an example.""" raise NotImplementedError(msg) def generate_all_error_control_forms(self): """ Utility function for generating all (8) forms required for error control in addition to the primal forms """ # Generate dual forms (a_star, L_star) = self.dual_forms() # Generate forms for computing strong cell residual (a_R_T, L_R_T) = self.cell_residual() # Generate forms for computing strong facet residuals (a_R_dT, L_R_dT) = self.facet_residual() # Generate form for computing error estimate eta_h = self.error_estimate() # Generate form for computing error indicators eta_T = self.error_indicators() # Paranoid checks added after introduction of multidomain features in ufl: for i, form in enumerate((a_star, L_star, eta_h, a_R_T, L_R_T, a_R_dT, L_R_dT, eta_T)): assert form.ufl_domain() is not None # Return all generated forms in CERTAIN order matching # constructor of dolfin/adaptivity/ErrorControl.h return (a_star, L_star, eta_h, a_R_T, L_R_T, a_R_dT, L_R_dT, eta_T) def primal_forms(self): """ Return primal forms in order (bilinear, linear, functional) """ return self.lhs, self.rhs, self.goal def dual_forms(self): """ Generate and return (bilinear, linear) forms defining linear dual variational problem """ a_star = self.module.adjoint(self.lhs) L_star = self.module.derivative(self.goal, self.u) return (a_star, L_star) def cell_residual(self): """ Generate and return (bilinear, linear) forms defining linear variational problem for the strong cell residual """ # Define trial and test functions for the cell residuals on # discontinuous version of primal trial space R_T = self.module.TrialFunction(self._dV) v = self.module.TestFunction(self._dV) # Extract original test function in the weak residual v_h = self.weak_residual.arguments()[0] # Define forms defining linear variational problem for cell # residual v_T = self._b_T * v a_R_T = inner(v_T, R_T) * dx(self.domain) L_R_T = replace(self.weak_residual, {v_h: v_T}) return (a_R_T, L_R_T) def facet_residual(self): """ Generate and return (bilinear, linear) forms defining linear variational problem for the strong facet residual(s) """ # Define trial and test functions for the facet residuals 
on # discontinuous version of primal trial space R_e = self.module.TrialFunction(self._dV) v = self.module.TestFunction(self._dV) # Extract original test function in the weak residual v_h = self.weak_residual.arguments()[0] # Define forms defining linear variational problem for facet # residual v_e = self._b_e * v a_R_dT = ((inner(v_e('+'), R_e('+')) + inner(v_e('-'), R_e('-'))) * dS(self.domain) + inner(v_e, R_e) * ds(self.domain)) L_R_dT = (replace(self.weak_residual, {v_h: v_e}) - inner(v_e, self._R_T) * dx(self.domain)) return (a_R_dT, L_R_dT) def error_estimate(self): """ Generate and return functional defining error estimate """ # Error estimate defined as r(Ez_h): eta_h = action(self.weak_residual, self._Ez_h) return eta_h def error_indicators(self): """ Generate and return linear form defining error indicators """ # Extract these to increase readability R_T = self._R_T R_dT = self._R_dT z = self._Ez_h z_h = self._z_h # Define linear form for computing error indicators v = self.module.TestFunction(self._DG0) eta_T = (v * inner(R_T, z - z_h) * dx(self.domain) + avg(v)*(inner(R_dT('+'), (z - z_h)('+')) + inner(R_dT('-'), (z - z_h)('-'))) * dS(self.domain) + v * inner(R_dT, z - z_h) * ds(self.domain)) return eta_T class UFLErrorControlGenerator(ErrorControlGenerator): """ This class provides a realization of ErrorControlGenerator for use with pure UFL forms """ def __init__(self, F, M, u): """ *Arguments* F (tuple or Form) tuple of (bilinear, linear) forms or linear form M (Form) functional or linear form u (Coefficient) The coefficient considered as the unknown. """ ErrorControlGenerator.__init__(self, __import__("ufl"), F, M, u) def initialize_data(self): """ Extract required objects for defining error control forms. This will be stored, reused and in particular named. """ # Developer's note: The UFL-FFC-DOLFIN--PyDOLFIN toolchain for # error control is quite fine-tuned. In particular, the order # of coefficients in forms is (and almost must be) used for # their assignment. This means that the order in which these # coefficients are defined matters and should be considered # fixed. 
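        # For reference, the generated coefficients below are created (and
        # named) in this fixed order: __improved_dual, __cell_bubble,
        # __cell_residual, __cell_cone, __facet_residual,
        # __discrete_dual_solution.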
from ufl import FiniteElement, FunctionSpace, Coefficient from ufl.algorithms.elementtransformations import tear, increase_order # Primal trial element space self._V = self.u.ufl_function_space() # Extract domain domain = self.u.ufl_domain() # Primal test space == Dual trial space Vhat = self.weak_residual.arguments()[0].ufl_function_space() # Discontinuous version of primal trial element space self._dV = FunctionSpace(domain, tear(self._V.ufl_element())) # Extract geometric dimension gdim = domain.geometric_dimension() # Coefficient representing improved dual E = FunctionSpace(domain, increase_order(Vhat.ufl_element())) self._Ez_h = Coefficient(E) self.ec_names[id(self._Ez_h)] = "__improved_dual" # Coefficient representing cell bubble function Belm = FiniteElement("B", domain.ufl_cell(), gdim + 1) Bfs = FunctionSpace(domain, Belm) self._b_T = Coefficient(Bfs) self.ec_names[id(self._b_T)] = "__cell_bubble" # Coefficient representing strong cell residual self._R_T = Coefficient(self._dV) self.ec_names[id(self._R_T)] = "__cell_residual" # Coefficient representing cell cone function Celm = FiniteElement("DG", domain.ufl_cell(), gdim) Cfs = FunctionSpace(domain, Celm) self._b_e = Coefficient(Cfs) self.ec_names[id(self._b_e)] = "__cell_cone" # Coefficient representing strong facet residual self._R_dT = Coefficient(self._dV) self.ec_names[id(self._R_dT)] = "__facet_residual" # Define discrete dual on primal test space self._z_h = Coefficient(Vhat) self.ec_names[id(self._z_h)] = "__discrete_dual_solution" # Piecewise constants for assembling indicators self._DG0 = FunctionSpace(domain, FiniteElement("DG", domain.ufl_cell(), 0)) ffc-2019.1.0.post0/ffc/fiatinterface.py000066400000000000000000000306631346026536000174770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Kristian B. Oelgaard and Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Garth N. Wells, 2009. # Modified by Marie Rognes, 2009-2013. 
# Modified by Martin Sandve Alnæs, 2013 # Modified by Lizao Li, 2015, 2016 # Python modules import numpy from numpy import array # UFL and FIAT modules import ufl import FIAT from FIAT.enriched import EnrichedElement from FIAT.nodal_enriched import NodalEnrichedElement from FIAT.hdiv_trace import HDivTrace from FIAT.mixed import MixedElement from FIAT.P0 import P0 from FIAT.restricted import RestrictedElement from FIAT.quadrature_element import QuadratureElement from FIAT.tensor_product import FlattenedDimensions # FFC modules from ffc.log import debug, error # Dictionary mapping from cellname to dimension from ufl.cell import cellname2dim # Element families supported by FFC supported_families = ("Brezzi-Douglas-Marini", "Brezzi-Douglas-Fortin-Marini", "Crouzeix-Raviart", "Discontinuous Lagrange", "Discontinuous Raviart-Thomas", "HDiv Trace", "Lagrange", "Lobatto", "Nedelec 1st kind H(curl)", "Nedelec 2nd kind H(curl)", "Radau", "Raviart-Thomas", "Real", "Bubble", "Quadrature", "Regge", "Hellan-Herrmann-Johnson", "Q", "DQ", "TensorProductElement") # Cache for computed elements _cache = {} class SpaceOfReals(object): """Constant over the entire domain, rather than just cellwise.""" def reference_cell(cellname): "Return FIAT reference cell" return FIAT.ufc_cell(cellname) def reference_cell_vertices(cellname): "Return dict of coordinates of reference cell vertices for this 'cellname'." cell = reference_cell(cellname) return cell.get_vertices() def create_element(ufl_element): # Create element signature for caching (just use UFL element) element_signature = ufl_element # Check cache if element_signature in _cache: debug("Reusing element from cache") return _cache[element_signature] # Create regular FIAT finite element if isinstance(ufl_element, ufl.FiniteElement): element = _create_fiat_element(ufl_element) # Create mixed element (implemented by FFC) elif isinstance(ufl_element, ufl.MixedElement): elements = _extract_elements(ufl_element) element = MixedElement(elements) # Create element union elif isinstance(ufl_element, ufl.EnrichedElement): elements = [create_element(e) for e in ufl_element._elements] element = EnrichedElement(*elements) # Create nodal element union elif isinstance(ufl_element, ufl.NodalEnrichedElement): elements = [create_element(e) for e in ufl_element._elements] element = NodalEnrichedElement(*elements) # Create restricted element elif isinstance(ufl_element, ufl.RestrictedElement): element = _create_restricted_element(ufl_element) else: error("Cannot handle this element type: %s" % str(ufl_element)) # Store in cache _cache[element_signature] = element return element def _create_fiat_element(ufl_element): "Create FIAT element corresponding to given finite element." # Get element data family = ufl_element.family() cell = ufl_element.cell() cellname = cell.cellname() degree = ufl_element.degree() # Check that FFC supports this element if family not in supported_families: error("This element family (%s) is not supported by FFC." 
% family) # Create FIAT cell fiat_cell = reference_cell(cellname) # Handle the space of the constant if family == "Real": element = _create_fiat_element(ufl.FiniteElement("DG", cell, 0)) element.__class__ = type('SpaceOfReals', (type(element), SpaceOfReals), {}) return element # Handle quadrilateral case by reconstructing the element with cell TensorProductCell (interval x interval) if cellname == "quadrilateral": quadrilateral_tpc = ufl.TensorProductCell(ufl.Cell("interval"), ufl.Cell("interval")) return FlattenedDimensions(_create_fiat_element(ufl_element.reconstruct(cell = quadrilateral_tpc))) # Handle hexahedron case by reconstructing the element with cell TensorProductCell (quadrilateral x interval) # This creates TensorProductElement(TensorProductElement(interval, interval), interval) # Therefore dof entities consists of nested tuples, example: ((0, 1), 1) elif cellname == "hexahedron": hexahedron_tpc = ufl.TensorProductCell(ufl.Cell("quadrilateral"), ufl.Cell("interval")) return FlattenedDimensions(_create_fiat_element(ufl_element.reconstruct(cell = hexahedron_tpc))) # FIXME: AL: Should this really be here? # Handle QuadratureElement if family == "Quadrature": # Compute number of points per axis from the degree of the element scheme = ufl_element.quadrature_scheme() assert degree is not None assert scheme is not None # Create quadrature (only interested in points) # TODO: KBO: What should we do about quadrature functions that live on ds, dS? # Get cell and facet names. points, weights = create_quadrature(cellname, degree, scheme) # Make element element = QuadratureElement(fiat_cell, points) else: # Check if finite element family is supported by FIAT if family not in FIAT.supported_elements: error("Sorry, finite element of type \"%s\" are not supported by FIAT.", family) ElementClass = FIAT.supported_elements[family] # Create tensor product FIAT finite element if isinstance(ufl_element, ufl.TensorProductElement): A = create_element(ufl_element.sub_elements()[0]) B = create_element(ufl_element.sub_elements()[1]) element = ElementClass(A, B) # Create normal FIAT finite element else: if degree is None: element = ElementClass(fiat_cell) else: element = ElementClass(fiat_cell, degree) # Consistency check between UFL and FIAT elements. if element.value_shape() != ufl_element.reference_value_shape(): error("Something went wrong in the construction of FIAT element from UFL element." + "Shapes are %s and %s." % (element.value_shape(), ufl_element.reference_value_shape())) return element def create_quadrature(shape, degree, scheme="default"): """ Generate quadrature rule (points, weights) for given shape that will integrate an polynomial of order 'degree' exactly. """ if isinstance(shape, int) and shape == 0: return (numpy.zeros((1, 0)), numpy.ones((1,))) if shape in cellname2dim and cellname2dim[shape] == 0: return (numpy.zeros((1, 0)), numpy.ones((1,))) if scheme == "vertex": # The vertex scheme, i.e., averaging the function value in the vertices # and multiplying with the simplex volume, is only of order 1 and # inferior to other generic schemes in terms of error reduction. # Equation systems generated with the vertex scheme have some # properties that other schemes lack, e.g., the mass matrix is # a simple diagonal matrix. This may be prescribed in certain cases. 
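        # For example, on the reference triangle the vertex rule below
        # returns the three vertices with weights 1/6 each (summing to the
        # reference area 1/2); on the reference tetrahedron the weights are
        # 1/24 each (summing to the reference volume 1/6).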
if degree > 1: from warnings import warn warn(("Explicitly selected vertex quadrature (degree 1), " + "but requested degree is %d.") % degree) if shape == "tetrahedron": return (array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]), array([1.0 / 24.0, 1.0 / 24.0, 1.0 / 24.0, 1.0 / 24.0]) ) elif shape == "triangle": return (array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]), array([1.0 / 6.0, 1.0 / 6.0, 1.0 / 6.0]) ) elif shape == "interval": # Trapezoidal rule. return (array([[0.0], [1.0]]), array([1.0 / 2.0, 1.0 / 2.0]) ) quad_rule = FIAT.create_quadrature(reference_cell(shape), degree, scheme) points = numpy.asarray(quad_rule.get_points()) weights = numpy.asarray(quad_rule.get_weights()) return points, weights def map_facet_points(points, facet, cellname): """ Map points from the e (UFC) reference simplex of dimension d - 1 to a given facet on the (UFC) reference simplex of dimension d. This may be used to transform points tabulated for example on the 2D reference triangle to points on a given facet of the reference tetrahedron. """ # Extract the geometric dimension of the points we want to map dim = len(points[0]) + 1 # Special case, don't need to map coordinates on vertices if dim == 1: return [[(0.0,), (1.0,)][facet]] # Get the FIAT reference cell fiat_cell = reference_cell(cellname) # Extract vertex coordinates from cell and map of facet index to # indicent vertex indices coordinate_dofs = fiat_cell.get_vertices() facet_vertices = fiat_cell.get_topology()[dim - 1] # coordinate_dofs = \ # {1: ((0.,), (1.,)), # 2: ((0., 0.), (1., 0.), (0., 1.)), # 3: ((0., 0., 0.), (1., 0., 0.),(0., 1., 0.), (0., 0., 1))} # Facet vertices # facet_vertices = \ # {2: ((1, 2), (0, 2), (0, 1)), # 3: ((1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2))} # Compute coordinates and map the points coordinates = [coordinate_dofs[v] for v in facet_vertices[facet]] new_points = [] for point in points: w = (1.0 - sum(point),) + tuple(point) x = tuple(sum([w[i] * array(coordinates[i]) for i in range(len(w))])) new_points += [x] return new_points def _extract_elements(ufl_element, restriction_domain=None): "Recursively extract un-nested list of (component) elements." elements = [] if isinstance(ufl_element, ufl.MixedElement): for sub_element in ufl_element.sub_elements(): elements += _extract_elements(sub_element, restriction_domain) return elements # Handle restricted elements since they might be mixed elements too. if isinstance(ufl_element, ufl.RestrictedElement): base_element = ufl_element.sub_element() restriction_domain = ufl_element.restriction_domain() return _extract_elements(base_element, restriction_domain) if restriction_domain: ufl_element = ufl.RestrictedElement(ufl_element, restriction_domain) elements += [create_element(ufl_element)] return elements def _create_restricted_element(ufl_element): "Create an FFC representation for an UFL RestrictedElement." 
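    # Dispatch summary: a restricted simple FiniteElement is wrapped directly
    # in FIAT's RestrictedElement, while a restricted MixedElement is rebuilt
    # as a MixedElement of individually restricted sub-elements.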
if not isinstance(ufl_element, ufl.RestrictedElement): error("create_restricted_element expects an ufl.RestrictedElement") base_element = ufl_element.sub_element() restriction_domain = ufl_element.restriction_domain() # If simple element -> create RestrictedElement from fiat_element if isinstance(base_element, ufl.FiniteElement): element = _create_fiat_element(base_element) return RestrictedElement(element, restriction_domain=restriction_domain) # If restricted mixed element -> convert to mixed restricted element if isinstance(base_element, ufl.MixedElement): elements = _extract_elements(base_element, restriction_domain) return MixedElement(elements) error("Cannot create restricted element from %s" % str(ufl_element)) ffc-2019.1.0.post0/ffc/formatting.py000066400000000000000000000206471346026536000170460ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Compiler stage 5: Code formatting --------------------------------- This module implements the formatting of UFC code from a given dictionary of generated C++ code for the body of each UFC function. It relies on templates for UFC code available as part of the module ufc_utils. """ # Copyright (C) 2009-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # Python modules import os # FFC modules from ffc.log import info, error, begin, end, dstr from ffc import __version__ as FFC_VERSION from ffc.backends.ufc import __version__ as UFC_VERSION from ffc.classname import make_classname from ffc.backends.ufc import templates, visibility_snippet, factory_decl, factory_impl from ffc.parameters import compilation_relevant_parameters format_template = { "ufc comment" : """\ // This code conforms with the UFC specification version %(ufc_version)s // and was automatically generated by FFC version %(ffc_version)s. """, "dolfin comment" : """\ // This code conforms with the UFC specification version %(ufc_version)s // and was automatically generated by FFC version %(ffc_version)s. // // This code was generated with the option '-l dolfin' and // contains DOLFIN-specific wrappers that depend on DOLFIN. 
""", "header_h" : """\ #ifndef __%(prefix_upper)s_H #define __%(prefix_upper)s_H """, "header_c" : """\ #include "%(prefix)s.h" """, "footer" : """\ #endif """ } def generate_factory_functions(prefix, kind, classname): publicname = make_classname(prefix, kind, "main") code_h = factory_decl % { "basename": "ufc::%s" % kind, "publicname": publicname, } code_c = factory_impl % { "basename": "ufc::%s" % kind, "publicname": publicname, "privatename": classname } return code_h, code_c def generate_jit_factory_functions(code, prefix): # Extract code (code_finite_elements, code_dofmaps, code_coordinate_mappings, code_integrals, code_forms, includes) = code if code_forms: # Direct jit of form code_h, code_c = generate_factory_functions( prefix, "form", code_forms[-1]["classname"]) elif code_coordinate_mappings: # Direct jit of coordinate mapping code_h, code_c = generate_factory_functions( prefix, "coordinate_mapping", code_coordinate_mappings[-1]["classname"]) else: # Direct jit of element code_h, code_c = generate_factory_functions( prefix, "finite_element", code_finite_elements[-1]["classname"]) fh, fc = generate_factory_functions( prefix, "dofmap", code_dofmaps[-1]["classname"]) code_h += fh code_c += fc return code_h, code_c def format_code(code, wrapper_code, prefix, parameters, jit=False): "Format given code in UFC format. Returns two strings with header and source file contents." begin("Compiler stage 5: Formatting code") # Extract code (code_finite_elements, code_dofmaps, code_coordinate_mappings, code_integrals, code_forms, includes) = code # Generate code for comment on top of file code_h_pre = _generate_comment(parameters) + "\n" code_c_pre = _generate_comment(parameters) + "\n" # Generate code for header code_h_pre += format_template["header_h"] % {"prefix_upper": prefix.upper()} code_c_pre += format_template["header_c"] % {"prefix": prefix} # Add includes includes_h, includes_c = _generate_includes(includes, parameters) code_h_pre += includes_h code_c_pre += includes_c # Header and implementation code code_h = "" code_c = "" if jit: code_c += visibility_snippet # Generate code for finite_elements for code_finite_element in code_finite_elements: code_h += _format_h("finite_element", code_finite_element, parameters, jit) code_c += _format_c("finite_element", code_finite_element, parameters, jit) # Generate code for dofmaps for code_dofmap in code_dofmaps: code_h += _format_h("dofmap", code_dofmap, parameters, jit) code_c += _format_c("dofmap", code_dofmap, parameters, jit) # Generate code for coordinate_mappings for code_coordinate_mapping in code_coordinate_mappings: code_h += _format_h("coordinate_mapping", code_coordinate_mapping, parameters, jit) code_c += _format_c("coordinate_mapping", code_coordinate_mapping, parameters, jit) # Generate code for integrals for code_integral in code_integrals: code_h += _format_h(code_integral["class_type"], code_integral, parameters, jit) code_c += _format_c(code_integral["class_type"], code_integral, parameters, jit) # Generate code for form for code_form in code_forms: code_h += _format_h("form", code_form, parameters, jit) code_c += _format_c("form", code_form, parameters, jit) # Add wrappers if wrapper_code: code_h += wrapper_code # Generate code for footer code_h += format_template["footer"] # Add headers to body code_h = code_h_pre + code_h if code_c: code_c = code_c_pre + code_c end() return code_h, code_c def write_code(code_h, code_c, prefix, parameters): # Write file(s) _write_file(code_h, prefix, ".h", parameters) if code_c: 
_write_file(code_c, prefix, ".cpp", parameters) def _format_h(class_type, code, parameters, jit=False): "Format header code for given class type." if jit: return templates[class_type + "_jit_header"] % code + "\n" elif parameters["split"]: return templates[class_type + "_header"] % code + "\n" else: return templates[class_type + "_combined"] % code + "\n" def _format_c(class_type, code, parameters, jit=False): "Format implementation code for given class type." if jit: return templates[class_type + "_jit_implementation"] % code + "\n" elif parameters["split"]: return templates[class_type + "_implementation"] % code + "\n" else: return "" def _write_file(output, prefix, postfix, parameters): "Write generated code to file." filename = os.path.join(parameters["output_dir"], prefix + postfix) with open(filename, "w") as hfile: hfile.write(output) info("Output written to " + filename + ".") def _generate_comment(parameters): "Generate code for comment on top of file." # Drop irrelevant parameters parameters = compilation_relevant_parameters(parameters) # Generate top level comment args = {"ffc_version": FFC_VERSION, "ufc_version": UFC_VERSION} if parameters["format"] == "ufc": comment = format_template["ufc comment"] % args elif parameters["format"] == "dolfin": comment = format_template["dolfin comment"] % args else: error("Unable to format code, unknown format \"%s\".", parameters["format"]) # Add parameter information comment += "//\n" comment += "// This code was generated with the following parameters:\n" comment += "//\n" comment += "\n".join([""] + ["//" + (" " + l) for l in dstr(parameters).split("\n")][:-1]) comment += "\n" return comment def _generate_includes(includes, parameters): default_h_includes = [ "#include ", ] default_cpp_includes = [ # TODO: Avoid adding these includes if we don't need them: "#include ", "#include ", "#include ", ] external_includes = set("#include <%s>" % inc for inc in parameters.get("external_includes", ())) s = set(default_h_includes + default_cpp_includes) | includes s2 = (set(default_cpp_includes) | external_includes) - s includes_h = "\n".join(sorted(s)) + "\n" if s else "" includes_cpp = "\n".join(sorted(s2)) + "\n" if s2 else "" return includes_h, includes_cpp ffc-2019.1.0.post0/ffc/git_commit_hash.py.in000066400000000000000000000015041346026536000204260ustar00rootroot00000000000000# Copyright (C) 2016 Jan Blechta # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . def git_commit_hash(): """Return git changeset hash (returns "unknown" if changeset is not known) """ return "@GIT_COMMIT_HASH" ffc-2019.1.0.post0/ffc/jitcompiler.py000066400000000000000000000245311346026536000172110ustar00rootroot00000000000000# -*- coding: utf-8 -*- """This module provides a just-in-time (JIT) form compiler. It uses dijitso to wrap the generated code into a Python module.""" # Copyright (C) 2007-2017 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Johan Hake, 2008-2009 # Modified by Ilmar Wilbers, 2008 # Modified by Kristian B. Oelgaard, 2009 # Modified by Joachim Haga, 2011. # Modified by Martin Sandve Alnæs, 2013-2017 # Python modules import os import sys from hashlib import sha1 # FEniCS modules import ufl # Not importing globally to keep dijitso optional if jit is not used #import dijitso # FFC modules from ffc.log import log from ffc.log import error from ffc.log import set_level from ffc.log import set_prefix from ffc.log import INFO from ffc.parameters import validate_jit_parameters, compute_jit_parameters_signature from ffc.compiler import compile_form, compile_element, compile_coordinate_mapping from ffc.backends.ufc import get_include_path as get_ufc_include_path from ffc.backends.ufc import get_ufc_signature, get_ufc_templates_signature from ffc import __version__ as FFC_VERSION from ffc.classname import make_classname def jit_generate(ufl_object, module_name, signature, parameters): "Callback function passed to dijitso.jit: generate code and return as strings." log(INFO + 5, "Calling FFC just-in-time (JIT) compiler, this may take some time.") # Generate actual code for this object if isinstance(ufl_object, ufl.Form): compile_object = compile_form elif isinstance(ufl_object, ufl.FiniteElementBase): compile_object = compile_element elif isinstance(ufl_object, ufl.Mesh): compile_object = compile_coordinate_mapping code_h, code_c, dependent_ufl_objects = compile_object(ufl_object, prefix=module_name, parameters=parameters, jit=True) # Jit compile dependent objects separately, # but pass indirect=True to skip instantiating objects. # (this is done in here such that it's only triggered # if parent jit module is missing, and it's done after # compile_object because some misformed ufl objects may # require analysis to determine (looking at you Expression...)) dependencies = [] for dep in dependent_ufl_objects["element"]: dep_module_name = jit(dep, parameters, indirect=True) dependencies.append(dep_module_name) for dep in dependent_ufl_objects["coordinate_mapping"]: dep_module_name = jit(dep, parameters, indirect=True) dependencies.append(dep_module_name) return code_h, code_c, dependencies def _string_tuple(param): "Split a : separated string or convert a list to a tuple." if isinstance(param, (tuple, list)): pass elif isinstance(param, str): param = param.split(":") else: param = () param = tuple(p for p in param if p) assert all(isinstance(p, str) for p in param) return param def jit_build(ufl_object, module_name, parameters): "Wraps dijitso jit with some parameter conversion etc." import dijitso # FIXME: Expose more dijitso parameters? # FIXME: dijitso build params are not part of module_name here. # Currently dijitso doesn't add to the module signature. 
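    # For example (values shown only for illustration), with
    # parameters["cpp_optimize"] = True and
    # parameters["cpp_optimize_flags"] = "-O2 -g0", the translation below
    # gives build_params["debug"] = False and
    # build_params["cxxflags_opt"] = ("-O2", "-g0").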
# Translating the C++ flags from ffc parameters to dijitso # to get equivalent behaviour to instant code build_params = {} build_params["debug"] = not parameters["cpp_optimize"] build_params["cxxflags_opt"] = tuple(parameters["cpp_optimize_flags"].split()) build_params["cxxflags_debug"] = ("-O0",) build_params["include_dirs"] = (get_ufc_include_path(),) + _string_tuple(parameters.get("external_include_dirs")) build_params["lib_dirs"] = _string_tuple(parameters.get("external_library_dirs")) build_params["libs"] = _string_tuple(parameters.get("external_libraries")) # Interpreting FFC default "" as None, use "." if you want to point to curdir cache_dir = parameters.get("cache_dir") or None if cache_dir: cache_params = {"cache_dir": cache_dir} else: cache_params = {} # This will do some rudimenrary checking of the params and fill in dijitso defaults params = dijitso.validate_params({ "cache": cache_params, "build": build_params, "generator": parameters, # ffc parameters, just passed on to jit_generate }) # Carry out jit compilation, calling jit_generate only if needed module, signature = dijitso.jit(jitable=ufl_object, name=module_name, params=params, generate=jit_generate) return module def compute_jit_prefix(ufl_object, parameters, kind=None): "Compute the prefix (module name) for jit modules." # Get signature from ufl object if isinstance(ufl_object, ufl.Form): kind = "form" object_signature = ufl_object.signature() elif isinstance(ufl_object, ufl.Mesh): # When coordinate mapping is represented by a Mesh, just getting its coordinate element kind = "coordinate_mapping" ufl_object = ufl_object.ufl_coordinate_element() object_signature = repr(ufl_object) # ** must match below elif kind == "coordinate_mapping" and isinstance(ufl_object, ufl.FiniteElementBase): # When coordinate mapping is represented by its coordinate element object_signature = repr(ufl_object) # ** must match above elif isinstance(ufl_object, ufl.FiniteElementBase): kind = "element" object_signature = repr(ufl_object) else: error("Unknown ufl object type %s" % (ufl_object.__class__.__name__,)) # Compute deterministic string of relevant parameters parameters_signature = compute_jit_parameters_signature(parameters) # Increase this number at any time to invalidate cache # signatures if code generation has changed in important # ways without the change being visible in regular signatures: jit_version_bump = 3 # Build combined signature signatures = [ object_signature, parameters_signature, str(FFC_VERSION), str(jit_version_bump), get_ufc_signature(), get_ufc_templates_signature(), kind, ] string = ";".join(signatures) signature = sha1(string.encode('utf-8')).hexdigest() # Optionally shorten signature max_signature_length = parameters["max_signature_length"] if max_signature_length: signature = signature[:max_signature_length] # Combine into prefix with some info including kind prefix = ("ffc_%s_%s" % (kind, signature)).lower() return kind, prefix class FFCError(Exception): pass class FFCJitError(FFCError): pass def jit(ufl_object, parameters=None, indirect=False): """Just-in-time compile the given form or element Parameters: ufl_object : The UFL object to be compiled parameters : A set of parameters """ # Check parameters parameters = validate_jit_parameters(parameters) # FIXME: Setting the log level here becomes a permanent side effect... 
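    # Return values (see below): for a Form this yields
    # (compiled_form, module, module_name), for an element the
    # (finite_element, dofmap) pair, and for a coordinate mapping a single
    # compiled coordinate_mapping object; with indirect=True only the
    # module name is returned.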
# Set log level set_level(parameters["log_level"]) set_prefix(parameters["log_prefix"]) # Make unique module name for generated code kind, module_name = compute_jit_prefix(ufl_object, parameters) # Inspect cache and generate+build if necessary module = jit_build(ufl_object, module_name, parameters) # Raise exception on failure to build or import module if module is None: # TODO: To communicate directory name here, need dijitso params to call #fail_dir = dijitso.cache.create_fail_dir_path(signature, dijitso_cache_params) raise FFCJitError("A directory with files to reproduce the jit build failure has been created.") # Construct instance of object from compiled code unless indirect if indirect: return module_name else: # FIXME: Streamline number of return arguments here across kinds if kind == "form": if parameters.get("representation") == "quadrature" or \ any(itg.metadata().get("representation") == "quadrature" for itg in ufl_object.integrals()): from ffc.quadrature.deprecation import issue_deprecation_warning issue_deprecation_warning() compiled_form = _instantiate_form(module, module_name) return (compiled_form, module, module_name) # TODO: module, module_name are never used in dolfin, drop? #return _instantiate_form(module, module_name) elif kind == "element": fe, dm = _instantiate_element_and_dofmap(module, module_name) return fe, dm elif kind == "coordinate_mapping": cm = _instantiate_coordinate_mapping(module, module_name) return cm else: error("Unknown kind %s" % (kind,)) def _instantiate_form(module, prefix): "Instantiate an object of the jit-compiled form." import dijitso classname = make_classname(prefix, "form", "main") form = dijitso.extract_factory_function(module, "create_" + classname)() return form def _instantiate_element_and_dofmap(module, prefix): "Instantiate objects of the jit-compiled finite_element and dofmap." import dijitso fe_classname = make_classname(prefix, "finite_element", "main") dm_classname = make_classname(prefix, "dofmap", "main") fe = dijitso.extract_factory_function(module, "create_" + fe_classname)() dm = dijitso.extract_factory_function(module, "create_" + dm_classname)() return (fe, dm) def _instantiate_coordinate_mapping(module, prefix): "Instantiate an object of the jit-compiled coordinate_mapping." import dijitso classname = make_classname(prefix, "coordinate_mapping", "main") form = dijitso.extract_factory_function(module, "create_" + classname)() return form ffc-2019.1.0.post0/ffc/log.py000066400000000000000000000044361346026536000154530ustar00rootroot00000000000000# -*- coding: utf-8 -*- """This module provides functions used by the FFC implementation to output messages. These may be redirected by the user of FFC. This module reuses the corresponding log.py module from UFL which is a wrapper for the standard Python logging module. """ # Copyright (C) 2009 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Kristian B. 
Oelgaard, 2009 # UFL modules from ufl.log import Logger from ufl.log import log_functions from ufl.log import INFO, DEBUG, WARNING, ERROR, CRITICAL from ufl.utils.sorting import sorted_by_key from ufl.utils.formatting import dstr, tstr # Create FFC logger ffc_logger = Logger("FFC") # Create FFC global log functions for foo in log_functions: exec("%s = lambda *message : ffc_logger.%s(*message)" % (foo, foo)) # Assertion, copied from UFL def ffc_assert(condition, *message): "Assert that condition is true and otherwise issue an error with given message." condition or error(*message) # Set default log level set_level(INFO) #--- Specialized FFC debugging tools --- def debug_dict(d, title=""): "Pretty-print dictionary." if not title: title = "Dictionary" info("") begin(title) info("") for (key, value) in sorted_by_key(d): info(key) info("-" * len(key)) info(str(value)) info("") end() def debug_ir(ir, name=""): "Debug intermediate representation." title = "Intermediate representation" if name: title += " (%s)" % str(name) debug_dict(ir, title) def debug_code(code, name=""): "Debug generated code." title = "Generated code" if name: title += " (%s)" % str(name) debug_dict(code, title) ffc-2019.1.0.post0/ffc/main.py000066400000000000000000000215721346026536000156160ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """This script is the command-line interface to FFC. It parses command-line arguments and generates code from input UFL form files. """ # Copyright (C) 2004-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Johan Jansson, 2005. # Modified by Ola Skavhaug, 2006. # Modified by Dag Lindbo, 2008. # Modified by Kristian B. Oelgaard 2010. # Modified by Martin Sandve Alnæs 2017. # Python modules. import sys import getopt import cProfile import re import string import os from os import curdir from os import path from os import getcwd # UFL modules. from ufl.log import UFLException from ufl.algorithms import load_ufl_file import ufl # FFC modules. from ffc.log import push_level, pop_level, set_indent, ffc_logger from ffc.log import DEBUG, INFO, WARNING, ERROR from ffc.parameters import default_parameters from ffc import __version__ as FFC_VERSION, get_ufc_signature from ffc.backends.ufc import __version__ as UFC_VERSION from ffc.backends.ufc import get_include_path from ffc.compiler import compile_form, compile_element from ffc.formatting import write_code from ffc.errorcontrol import compile_with_error_control def print_error(msg): "Print error message (cannot use log system at top level)." print("\n".join(["*** FFC: " + line for line in msg.split("\n")])) def info_version(): "Print version number." print("""\ This is FFC, the FEniCS Form Compiler, version {0}. UFC backend version {1}, signature {2}. For further information, visit https://bitbucket.org/fenics-project/ffc/. 
Python {3} on {4} """.format(FFC_VERSION, UFC_VERSION, get_ufc_signature(), sys.version, sys.platform)) def info_usage(): "Print usage information." info_version() print("""Usage: ffc [OPTION]... input.form For information about the FFC command-line interface, refer to the FFC man page which may invoked by 'man ffc' (if installed). """) def compile_ufl_data(ufd, prefix, parameters): if parameters["error_control"]: code_h, code_c = compile_with_error_control(ufd.forms, ufd.object_names, ufd.reserved_objects, prefix, parameters) elif len(ufd.forms) > 0: code_h, code_c = compile_form(ufd.forms, ufd.object_names, prefix=prefix, parameters=parameters) else: code_h, code_c = compile_element(ufd.elements, prefix=prefix, parameters=parameters) return code_h, code_c def main(args=None): """This is the commandline tool for the python module ffc.""" if args is None: args = sys.argv[1:] # Get command-line arguments try: if "-O" in args: args[args.index("-O")] = "-O2" opts, args = getopt.getopt(args, "hIVSdvsl:r:f:O:o:q:ep", ["help", "includes", "version", "signature", "debug", "verbose", "silent", "language=", "representation=", "optimize=", "output-directory=", "quadrature-rule=", "error-control", "profile"]) except getopt.GetoptError: info_usage() print_error("Illegal command-line arguments.") return 1 # Check for --help if ("-h", "") in opts or ("--help", "") in opts: info_usage() return 0 # Check for --includes if ("-I", "") in opts or ("--includes", "") in opts: print(get_include_path()) return 0 # Check for --version if ("-V", "") in opts or ("--version", "") in opts: info_version() return 0 # Check for --signature if ("-S", "") in opts or ("--signature", "") in opts: print(get_ufc_signature()) return 0 # Check that we get at least one file if len(args) == 0: print_error("Missing file.") return 1 # Get parameters parameters = default_parameters() # Choose WARNING as default for script parameters["log_level"] = WARNING # Set default value (not part of in parameters[]) enable_profile = False # Parse command-line parameters for opt, arg in opts: if opt in ("-v", "--verbose"): parameters["log_level"] = INFO elif opt in ("-d", "--debug"): parameters["log_level"] = DEBUG elif opt in ("-s", "--silent"): parameters["log_level"] = ERROR elif opt in ("-l", "--language"): parameters["format"] = arg elif opt in ("-r", "--representation"): parameters["representation"] = arg elif opt in ("-q", "--quadrature-rule"): parameters["quadrature_rule"] = arg elif opt == "-f": if len(arg.split("=")) == 2: (key, value) = arg.split("=") default = parameters.get(key) if isinstance(default, int): value = int(value) elif isinstance(default, float): value = float(value) parameters[key] = value elif len(arg.split("==")) == 1: key = arg.split("=")[0] if key.startswith("no-"): key = key[3:] value = False else: value = True parameters[key] = value else: info_usage() return 1 elif opt in ("-O", "--optimize"): parameters["optimize"] = bool(int(arg)) elif opt in ("-o", "--output-directory"): parameters["output_dir"] = arg elif opt in ("-e", "--error-control"): parameters["error_control"] = True elif opt in ("-p", "--profile"): enable_profile = True # Set log_level push_level(parameters["log_level"]) # FIXME: This is terrible! 
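    # Illustrative usage note (added sketch, not original FFC code; options as
    # parsed above, "Poisson.ufl" stands for any UFL form file): a typical
    # invocation might look like
    #   ffc -O -fepsilon=1e-12 Poisson.ufl
    # where -O enables optimization and -f sets a single parameter, cast to
    # the type of its default value.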
# Set UFL precision #ufl.constantvalue.precision = int(parameters["precision"]) # Print a versioning message if verbose output was requested if parameters["log_level"] <= INFO: info_version() # Call parser and compiler for each file resultcode = 0 init_indent = ffc_logger._indent_level try: resultcode = _compile_files(args, parameters, enable_profile) finally: # Reset logging level and indent pop_level() set_indent(init_indent) return resultcode def _compile_files(args, parameters, enable_profile): # Call parser and compiler for each file for filename in args: # Get filename prefix and suffix prefix, suffix = os.path.splitext(os.path.basename(filename)) suffix = suffix.replace(os.path.extsep, "") # Check file suffix if suffix != "ufl": print_error("Expecting a UFL form file (.ufl).") return 1 # Remove weird characters (file system allows more than the C # preprocessor) prefix = re.subn("[^{}]".format(string.ascii_letters + string.digits + "_"), "!", prefix)[0] prefix = re.subn("!+", "_", prefix)[0] # Turn on profiling if enable_profile: pr = cProfile.Profile() pr.enable() # Load UFL file ufd = load_ufl_file(filename) # Previously wrapped in try-except, disabled to actually get information we need #try: # Generate code code_h, code_c = compile_ufl_data(ufd, prefix, parameters) # Write to file write_code(code_h, code_c, prefix, parameters) #except Exception as exception: # # Catch exceptions only when not in debug mode # if parameters["log_level"] <= DEBUG: # raise # else: # print("") # print_error(str(exception)) # print_error("To get more information about this error, rerun FFC with --debug.") # return 1 # Turn off profiling and write status to file if enable_profile: pr.disable() pfn = "ffc_{0}.profile".format(prefix) pr.dump_stats(pfn) print("Wrote profiling info to file {0}".format(pfn)) return 0 ffc-2019.1.0.post0/ffc/optimization.py000066400000000000000000000040201346026536000174050ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Martin Sandve Alnæs, 2013 """ Compiler stage 5: optimization ------------------------------ This module implements the optimization of an intermediate code representation. """ # FFC modules from ffc.log import info, begin, end from ffc.representation import pick_representation def optimize_ir(ir, parameters): "Optimize intermediate form representation." 
begin("Compiler stage 3: Optimizing intermediate representation") # Extract representations ir_elements, ir_dofmaps, ir_coordinate_mappings, ir_integrals, ir_forms = ir # Check if optimization is requested if not any(ir["integrals_metadata"]["optimize"] for ir in ir_integrals): info(r"Skipping optimizations, add -O or attach {'optimize': True} " "metadata to integrals") # Call on every bunch of integrals wich are compiled together oir_integrals = [_optimize_integral_ir(ir, parameters) if ir["integrals_metadata"]["optimize"] else ir for ir in ir_integrals] end() return ir_elements, ir_dofmaps, ir_coordinate_mappings, oir_integrals, ir_forms def _optimize_integral_ir(ir, parameters): "Compute optimized intermediate represention of integral." r = pick_representation(ir["representation"]) return r.optimize_integral_ir(ir, parameters) ffc-2019.1.0.post0/ffc/parameters.py000066400000000000000000000165351346026536000170400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2005-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import os import copy from ffc.log import INFO # Comments from other places in code: # FIXME: Document option -fconvert_exceptions_to_warnings # NB! Parameters in the generate and build sets are # included in jit signature, cache and log are not. 
_FFC_GENERATE_PARAMETERS = { "format": "ufc", # code generation format "representation": "auto", # form representation / code # generation strategy "quadrature_rule": None, # quadrature rule used for # integration of element tensors # (None is auto) "quadrature_degree": None, # quadrature degree used for # computing integrals # (None is auto) "precision": None, # precision used when writing # numbers (None for max precision) "epsilon": 1e-14, # machine precision, used for # dropping zero terms in tables "split": False, # split generated code into .h and # .cpp file "form_postfix": True, # postfix form name with "Function", # "LinearForm" or BilinearForm "convert_exceptions_to_warnings": False, # convert all exceptions to warning # in generated code "error_control": False, # with error control "optimize": True, # turn on optimization for code generation "max_signature_length": 0, # set to positive integer to shorten signatures "generate_dummy_tabulate_tensor": False, # set to True to replace tabulate_tensor body with no-op "add_tabulate_tensor_timing": False, # set to True to add timing inside tabulate_tensor "external_includes": "", # ':' separated list of include filenames to add to generated code } _FFC_BUILD_PARAMETERS = { "cpp_optimize": True, # optimization for the C++ compiler "cpp_optimize_flags": "-O2", # optimization flags for the C++ compiler "external_libraries": "", # ':' separated list of libraries to link JIT compiled libraries with "external_library_dirs": "", # ':' separated list of library search dirs to add when JIT compiling "external_include_dirs": "", # ':' separated list of include dirs to add when JIT compiling } _FFC_CACHE_PARAMETERS = { "cache_dir": "", # cache dir used by Instant "output_dir": ".", # output directory for generated code } _FFC_LOG_PARAMETERS = { "log_level": INFO + 5, # log level, displaying only # messages with level >= log_level "log_prefix": "", # log prefix } FFC_PARAMETERS = {} FFC_PARAMETERS.update(_FFC_BUILD_PARAMETERS) FFC_PARAMETERS.update(_FFC_CACHE_PARAMETERS) FFC_PARAMETERS.update(_FFC_LOG_PARAMETERS) FFC_PARAMETERS.update(_FFC_GENERATE_PARAMETERS) def split_parameters(parameters): """Split a parameters dict into groups based on what parameters are used for. """ params = { "cache": {k: parameters[k] for k in _FFC_CACHE_PARAMETERS.keys()}, "build": {k: parameters[k] for k in _FFC_BUILD_PARAMETERS.keys()}, "generate": {k: parameters[k] for k in _FFC_GENERATE_PARAMETERS.keys()}, "log": {k: parameters[k] for k in _FFC_LOG_PARAMETERS.keys()}, } return params def default_parameters(): "Return (a copy of) the default parameter values for FFC." parameters = copy.deepcopy(FFC_PARAMETERS) # HACK r = os.environ.get("FFC_FORCE_REPRESENTATION") if r: parameters["representation"] = r return parameters def default_jit_parameters(): parameters = default_parameters() # TODO: This is not in the above parameters dict. # There are other parameters like this. # This is confusing, which parameters are available? What are the defaults? # Skip evaluation of basis derivatives in elements by default because it's costly # FIXME: Make this False when we have elements generated once instead of for each form parameters["no-evaluate_basis_derivatives"] = True # Don't postfix form names parameters["form_postfix"] = False return parameters def validate_parameters(parameters): "Initial check of parameters." 
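    # Illustrative note (added sketch, not original FFC code): validation
    # starts from default_parameters(), overlays the user-supplied dict and
    # then casts values in place via _validate_parameters() below, e.g.
    # quadrature_degree "2" -> 2 and "auto"/"None" -> None.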
p = default_parameters() if parameters is not None: p.update(parameters) _validate_parameters(p) return p def validate_jit_parameters(parameters): "Check parameters and add any missing parameters" p = default_jit_parameters() if parameters is not None: p.update(parameters) _validate_parameters(p) return p def _validate_parameters(parameters): """Does some casting of parameter values in place on the provided dictionary""" # Cast int optimize flag to bool if isinstance(parameters["optimize"], int): parameters["optimize"] = bool(parameters["optimize"]) # Convert all legal default values to None if parameters["quadrature_rule"] in ["auto", None, "None"]: parameters["quadrature_rule"] = None # Convert all legal default values to None and # cast nondefaults from str to int if parameters["quadrature_degree"] in ["auto", -1, None, "None"]: parameters["quadrature_degree"] = None else: try: parameters["quadrature_degree"] = int(parameters["quadrature_degree"]) except Exception: error("Failed to convert quadrature degree '%s' to int" % parameters.get("quadrature_degree")) # Convert all legal default values to None and # cast nondefaults from str to int if parameters["precision"] in ["auto", None, "None"]: parameters["precision"] = None else: try: parameters["precision"] = int(parameters["precision"]) except Exception: error("Failed to convert precision '%s' to int" % parameters.get("precision")) def compilation_relevant_parameters(parameters): p = parameters.copy() for k in _FFC_LOG_PARAMETERS: del p[k] for k in _FFC_CACHE_PARAMETERS: del p[k] # This doesn't work because some parameters may not be among the defaults above. # That is somewhat confusing but we'll just have to live with it at least for now. # sp = split_parameters(parameters) # p = {} # p.update(sp["generate"]) # p.update(sp["build"]) return p def compute_jit_parameters_signature(parameters): "Return parameters signature (some parameters must be ignored)." from ufl.utils.sorting import canonicalize_metadata parameters = compilation_relevant_parameters(parameters) return str(canonicalize_metadata(parameters)) ffc-2019.1.0.post0/ffc/plot.py000066400000000000000000000712451346026536000156520ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This module provides functionality for plotting finite elements." # Copyright (C) 2010 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # First added: 2010-12-07 # Last changed: 2010-12-15 __all__ = ["plot"] from numpy import dot, cross, array, sin, cos, pi, sqrt from numpy.linalg import norm import sys from ffc.fiatinterface import create_element from ffc.log import warning, error, info # Import Soya3D try: import soya from soya.sphere import Sphere from soya.label3d import Label3D from soya.sdlconst import QUIT _soya_imported = True except ImportError: _soya_imported = False # Colors for elements element_colors = {"Argyris": (0.45, 0.70, 0.80), "Arnold-Winther": (0.00, 0.00, 1.00), "Brezzi-Douglas-Marini": (1.00, 1.00, 0.00), "Crouzeix-Raviart": (1.00, 0.25, 0.25), "Discontinuous Lagrange": (0.00, 0.25, 0.00), "Discontinuous Raviart-Thomas": (0.90, 0.90, 0.30), "Hermite": (0.50, 1.00, 0.50), "Lagrange": (0.00, 1.00, 0.00), "Mardal-Tai-Winther": (1.00, 0.10, 0.90), "Morley": (0.40, 0.40, 0.40), "Nedelec 1st kind H(curl)": (0.90, 0.30, 0.00), "Nedelec 2nd kind H(curl)": (0.70, 0.20, 0.00), "Raviart-Thomas": (0.90, 0.60, 0.00)} def plot(element, rotate=True): "Plot finite element." # Check if Soya3D has been imported if not _soya_imported: warning("Unable to plot element, Soya3D not available (install package python-soya).") return # Special case: plot dof notation if element == "notation": # Create model for notation notation = create_notation_models() # Render plot window render(notation, "Notation", 0, True, rotate) else: # Create cell model cell, is3d = create_cell_model(element) cellname = element.cell().cellname() # Assuming single cell # Create dof models dofs, num_moments = create_dof_models(element) # Create title if element.degree() is not None: title = "%s of degree %d on a %s" % (element.family(), element.degree(), cellname) else: title = "%s on a %s" % (element.family(), cellname) # Render plot window render([cell] + dofs, title, num_moments, is3d, rotate) def render(models, title, num_moments, is3d, rotate): "Render given list of models." # Note that we view from the positive z-axis, and not from the # negative y-axis. This should make no difference since the # element dofs are symmetric anyway and it plays better with # the default camera settings in Soya. 
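    # Illustrative overview (added sketch, not original FFC code) of the
    # steps below: initialise Soya, build a World with a background
    # atmosphere, wrap each model in a (possibly rotating) Body, add a light
    # and a camera, and finally run an Idler loop that exits on a QUIT event.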
# Initialize Soya soya.init(title) # Create scene scene = soya.World() scene.atmosphere = soya.Atmosphere() if title == "Notation": scene.atmosphere.bg_color = (0.0, 1.0, 0.0, 1.0) else: scene.atmosphere.bg_color = (1.0, 1.0, 1.0, 1.0) # Not used, need to manually handle rotation # label = Label3D(scene, text=str(num_moments), size=0.005) # label.set_xyz(1.0, 1.0, 1.0) # label.set_color((0.0, 0.0, 0.0, 1.0)) # Define rotation if is3d: class RotatingBody(soya.Body): def advance_time(self, proportion): self.rotate_y(2.0 * proportion) else: class RotatingBody(soya.Body): def advance_time(self, proportion): self.rotate_z(2.0 * proportion) # Select type of display, rotating or not if rotate: Body = RotatingBody else: Body = soya.Body # Add all models for model in models: body = Body(scene, model) # Set light light = soya.Light(scene) if is3d: light.set_xyz(1.0, 5.0, 5.0) else: light.set_xyz(0.0, 0.0, 1.0) light.cast_shadow = 1 light.shadow_color = (0.0, 0.0, 0.0, 0.5) # Set camera camera = soya.Camera(scene) camera.ortho = 0 p = camera.position() if is3d: if rotate: camera.set_xyz(-20, 10, 50.0) camera.fov = 2.1 p.set_xyz(0.0, 0.4, 0.0) else: camera.set_xyz(-20, 10, 50.0) camera.fov = 1.6 p.set_xyz(0.3, 0.42, 0.5) else: if rotate: camera.set_xyz(0, 10, 50.0) camera.fov = 2.6 p.set_xyz(0.0, 0.0, 0.0) else: camera.set_xyz(0, 10, 50.0) camera.fov = 1.7 p.set_xyz(0.5, 0.4, 0.0) camera.look_at(p) soya.set_root_widget(camera) # Handle exit class Idler(soya.Idler): def end_round(self): for event in self.events: if event[0] == QUIT: print("Closing plot, bye bye") sys.exit(0) # Main loop idler = Idler(scene) idler.idle() def tangents(n): "Return normalized tangent vectors for plane defined by given vector." # Figure out which vector to take cross product with eps = 1e-10 e = array((1.0, 0.0, 0.0)) if norm(cross(n, e)) < eps: e = array((0.0, 1.0, 0.0)) # Take cross products and normalize t0 = cross(n, e) t0 = t0 / norm(t0) t1 = cross(n, t0) t1 = t1 / norm(t0) return t0, t1 def Cylinder(scene, p0, p1, r, color=(0.0, 0.0, 0.0, 1.0)): "Return model for cylinder from p0 to p1 with radius r." # Convert to NumPy array if isinstance(p0, soya.Vertex): p0 = array((p0.x, p0.y, p0.z)) p1 = array((p1.x, p1.y, p1.z)) else: p0 = array(p0) p1 = array(p1) # Get tangent vectors for plane n = p0 - p1 n = n / norm(n) t0, t1 = tangents(n) # Traverse the circles num_steps = 10 dtheta = 2.0 * pi / float(num_steps) for i in range(num_steps): # Compute coordinates for square dx0 = cos(i * dtheta) * t0 + sin(i * dtheta) * t1 dx1 = cos((i + 1) * dtheta) * t0 + sin((i + 1) * dtheta) * t1 x0 = p0 + r * dx0 x1 = p0 + r * dx1 x2 = p1 + r * dx0 x3 = p1 + r * dx1 # Cover square by two triangles v0 = soya.Vertex(scene, x0[0], x0[1], x0[2], diffuse=color) v1 = soya.Vertex(scene, x1[0], x1[1], x1[2], diffuse=color) v2 = soya.Vertex(scene, x2[0], x2[1], x2[2], diffuse=color) v3 = soya.Vertex(scene, x3[0], x3[1], x3[2], diffuse=color) f0 = soya.Face(scene, (v0, v1, v2)) f1 = soya.Face(scene, (v1, v2, v3)) f0.double_sided = 1 f1.double_sided = 1 # Extract model model = scene.to_model() return model def Cone(scene, p0, p1, r, color=(0.0, 0.0, 0.0, 1.0)): "Return model for cone from p0 to p1 with radius r." 
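    # Illustrative note (added sketch, not original FFC code): the cone is
    # built as a fan of num_steps triangles sharing the apex p1; points on
    # the base circle are parameterised as p0 + r*(cos(theta)*t0 +
    # sin(theta)*t1), where t0, t1 (from tangents(n)) span the plane
    # orthogonal to the axis n = p0 - p1.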
# Convert to NumPy array if isinstance(p0, soya.Vertex): p0 = array((p0.x, p0.y, p0.z)) p1 = array((p1.x, p1.y, p1.z)) else: p0 = array(p0) p1 = array(p1) # Get tangent vectors for plane n = p0 - p1 n = n / norm(n) t0, t1 = tangents(n) # Traverse the circles num_steps = 10 dtheta = 2.0 * pi / float(num_steps) v2 = soya.Vertex(scene, p1[0], p1[1], p1[2], diffuse=color) for i in range(num_steps): # Compute coordinates for bottom of face dx0 = cos(i * dtheta) * t0 + sin(i * dtheta) * t1 dx1 = cos((i + 1) * dtheta) * t0 + sin((i + 1) * dtheta) * t1 x0 = p0 + r * dx0 x1 = p0 + r * dx1 # Create face v0 = soya.Vertex(scene, x0[0], x0[1], x0[2], diffuse=color) v1 = soya.Vertex(scene, x1[0], x1[1], x1[2], diffuse=color) f = soya.Face(scene, (v0, v1, v2)) f.double_sided = 1 # Extract model model = scene.to_model() return model def Arrow(scene, x, n, center=False): "Return model for arrow from x in direction n." # Convert to Numpy arrays x = array(x) n = array(n) # Get tangents t0, t1 = tangents(n) # Dimensions for arrow L = 0.3 l = 0.35 * L # noqa: E741 r = 0.04 * L R = 0.125 * L # Center arrow if center: print("Centering!") x -= 0.5 * (L + l) * n # Create cylinder and cone cylinder = Cylinder(scene, x, x + L * n, r) cone = Cone(scene, x + L * n, x + (L + l) * n, R) # Extract model return scene.to_model() def UnitTetrahedron(color=(0.0, 1.0, 0.0, 0.5)): "Return model for unit tetrahedron." info("Plotting unit tetrahedron") # Create separate scene (since we will extract a model, not render) scene = soya.World() # Create vertices v0 = soya.Vertex(scene, 0.0, 0.0, 0.0, diffuse=color) v1 = soya.Vertex(scene, 1.0, 0.0, 0.0, diffuse=color) v2 = soya.Vertex(scene, 0.0, 1.0, 0.0, diffuse=color) v3 = soya.Vertex(scene, 0.0, 0.0, 1.0, diffuse=color) # Create edges e0 = Cylinder(scene, v0, v1, 0.007) e1 = Cylinder(scene, v0, v2, 0.007) e2 = Cylinder(scene, v0, v3, 0.007) e3 = Cylinder(scene, v1, v2, 0.007) e4 = Cylinder(scene, v1, v3, 0.007) e5 = Cylinder(scene, v2, v3, 0.007) # Create faces f0 = soya.Face(scene, (v1, v2, v3)) f1 = soya.Face(scene, (v0, v2, v3)) f2 = soya.Face(scene, (v0, v1, v3)) f3 = soya.Face(scene, (v0, v1, v2)) # Make faces double sided f0.double_sided = 1 f1.double_sided = 1 f2.double_sided = 1 f3.double_sided = 1 # Extract model model = scene.to_model() return model def UnitTriangle(color=(0.0, 1.0, 0.0, 0.5)): "Return model for unit triangle." info("Plotting unit triangle") # Create separate scene (since we will extract a model, not render) scene = soya.World() # Create vertices v0 = soya.Vertex(scene, 0.0, 0.0, 0.0, diffuse=color) v1 = soya.Vertex(scene, 1.0, 0.0, 0.0, diffuse=color) v2 = soya.Vertex(scene, 0.0, 1.0, 0.0, diffuse=color) # Create edges e0 = Cylinder(scene, v0, v1, 0.007) e1 = Cylinder(scene, v0, v2, 0.007) e2 = Cylinder(scene, v1, v2, 0.007) # Create face f = soya.Face(scene, (v0, v1, v2)) # Make face double sided f.double_sided = 1 # Extract model model = scene.to_model() return model def PointEvaluation(x): "Return model for point evaluation at given point."
info("Plotting dof: point evaluation at x = %s" % str(x)) # Make sure point is 3D x = to3d(x) # Create separate scene (since we will extract a model, not render) scene = soya.World() # Define material (color) for the sphere material = soya.Material() material.diffuse = (0.0, 0.0, 0.0, 1.0) # Create sphere sphere = Sphere(scene, material=material) # Scale and moveand move to coordinate sphere.scale(0.05, 0.05, 0.05) p = sphere.position() p.set_xyz(x[0], x[1], x[2]) sphere.move(p) # Extract model model = scene.to_model() return model def PointDerivative(x): "Return model for evaluation of derivatives at given point." info("Plotting dof: point derivative at x = %s" % str(x)) # Make sure point is 3D x = to3d(x) # Create separate scene (since we will extract a model, not render) scene = soya.World() # Define material (color) for the sphere material = soya.Material() material.diffuse = (0.0, 0.0, 0.0, 0.2) # Create sphere sphere = Sphere(scene, material=material) # Scale and moveand move to coordinate sphere.scale(0.1, 0.1, 0.1) p = sphere.position() p.set_xyz(x[0], x[1], x[2]) sphere.move(p) # Extract model model = scene.to_model() return model def PointSecondDerivative(x): "Return model for evaluation of second derivatives at given point." info("Plotting dof: point derivative at x = %s" % str(x)) # Make sure point is 3D x = to3d(x) # Create separate scene (since we will extract a model, not render) scene = soya.World() # Define material (color) for the sphere material = soya.Material() material.diffuse = (0.0, 0.0, 0.0, 0.05) # Create sphere sphere = Sphere(scene, material=material) # Scale and moveand move to coordinate sphere.scale(0.15, 0.15, 0.15) p = sphere.position() p.set_xyz(x[0], x[1], x[2]) sphere.move(p) # Extract model model = scene.to_model() return model def DirectionalEvaluation(x, n, flip=False, center=False): "Return model for directional evaluation at given point in given direction." info("Plotting dof: directional evaluation at x = %s in direction n = %s" % (str(x), str(n))) # Make sure points are 3D x = to3d(x) n = to3d(n) # Create separate scene (since we will extract a model, not render) scene = soya.World() # Normalize n = array(n) n = 0.75 * n / norm(n) # Flip normal if necessary if flip and not pointing_outwards(x, n): info("Flipping direction of arrow so it points outward.") n = -n # Create arrow arrow = Arrow(scene, x, n, center) # Extract model model = scene.to_model() return model def DirectionalDerivative(x, n): "Return model for directional derivative at given point in given direction." info("Plotting dof: directional derivative at x = %s in direction n = %s" % (str(x), str(n))) # Make sure points are 3D x = to3d(x) n = to3d(n) # Create separate scene (since we will extract a model, not render) scene = soya.World() # Normalize n = array(n) n = 0.75 * n / norm(n) # Create line line = Cylinder(scene, x - 0.07 * n, x + 0.07 * n, 0.005) # Extract model model = scene.to_model() return model def IntegralMoment(cellname, num_moments, x=None): "Return model for integral moment for given element." 
info("Plotting dof: integral moment") # Set position if x is None and cellname == "triangle": a = 1.0 / (2 + sqrt(2)) # this was a fun exercise x = (a, a, 0.0) elif x is None: a = 1.0 / (3 + sqrt(3)) # so was this x = (a, a, a) # Make sure point is 3D x = to3d(x) # Fancy scaling of radius and color r = 1.0 / (num_moments + 5) if num_moments % 2 == 0: c = 1.0 else: c = 0.0 # Create separate scene (since we will extract a model, not render) scene = soya.World() # Define material (color) for the sphere material = soya.Material() material.diffuse = (c, c, c, 0.7) # Create sphere sphere = Sphere(scene, material=material) # Scale and moveand move to coordinate sphere.scale(r, r, r) p = sphere.position() p.set_xyz(x[0], x[1], x[2]) sphere.move(p) # Extract model model = scene.to_model() return model def create_cell_model(element): "Create Soya3D model for cell." # Get color family = element.family() if family not in element_colors: warning("Don't know a good color for elements of type '%s', using default color." % family) family = "Lagrange" color = element_colors[family] color = (color[0], color[1], color[2], 0.7) # Create model based on cell type cellname = element.cell().cellname() if cellname == "triangle": return UnitTriangle(color), False elif cellname == "tetrahedron": return UnitTetrahedron(color), True error("Unable to plot element, unhandled cell type: %s" % str(cellname)) def create_dof_models(element): "Create Soya3D models for dofs." # Flags for whether to flip and center arrows directional = {"PointScaledNormalEval": (True, False), "PointEdgeTangent": (False, True), "PointFaceTangent": (False, True)} # Elements not supported fully by FIAT unsupported = {"Argyris": argyris_dofs, "Arnold-Winther": arnold_winther_dofs, "Hermite": hermite_dofs, "Mardal-Tai-Winther": mardal_tai_winther_dofs, "Morley": morley_dofs} # Check if element is supported family = element.family() if family not in unsupported: # Create FIAT element and get dofs fiat_element = create_element(element) dofs = [(dof.get_type_tag(), dof.get_point_dict()) for dof in fiat_element.dual_basis()] else: # Bybass FIAT and set the dofs ourselves dofs = unsupported[family](element) # Iterate over dofs and add models models = [] num_moments = 0 for (dof_type, L) in dofs: # Check type of dof if dof_type == "PointEval": # Point evaluation, just get point points = list(L.keys()) if not len(points) == 1: error("Strange dof, single point expected.") x = points[0] # Generate model models.append(PointEvaluation(x)) elif dof_type == "PointDeriv": # Evaluation of derivatives at point points = list(L.keys()) if not len(points) == 1: error("Strange dof, single point expected.") x = points[0] # Generate model models.append(PointDerivative(x)) elif dof_type == "PointSecondDeriv": # Evaluation of derivatives at point points = list(L.keys()) if not len(points) == 1: error("Strange dof, single point expected.") x = points[0] # Generate model models.append(PointSecondDerivative(x)) elif dof_type in directional: # Normal evaluation, get point and normal points = list(L.keys()) if not len(points) == 1: error("Strange dof, single point expected.") x = points[0] n = [xx[0] for xx in L[x]] # Generate model flip, center = directional[dof_type] models.append(DirectionalEvaluation(x, n, flip, center)) elif dof_type == "PointNormalDeriv": # Evaluation of derivatives at point points = list(L.keys()) if not len(points) == 1: error("Strange dof, single point expected.") x = points[0] n = [xx[0] for xx in L[x]] # Generate model 
models.append(DirectionalDerivative(x, n)) elif dof_type in ("FrobeniusIntegralMoment", "IntegralMoment", "ComponentPointEval"): # Generate model models.append(IntegralMoment(element.cell().cellname(), num_moments)) # Count the number of integral moments num_moments += 1 else: error("Unable to plot dof, unhandled dof type: %s" % str(dof_type)) return models, num_moments def create_notation_models(): "Create Soya 3D models for notation." models = [] y = 1.3 dy = -0.325 # Create model for evaluation models.append(PointEvaluation([0, y])) y += dy # Create model for derivative evaluation models.append(PointDerivative([0, y])) models.append(PointDerivative([0, y])) models.append(PointDerivative([0, y])) y += dy # Create model for second derivative evaluation models.append(PointSecondDerivative([0, y])) models.append(PointSecondDerivative([0, y])) models.append(PointSecondDerivative([0, y])) y += dy # Create model for directional evaluation models.append(DirectionalEvaluation([0, y], [1, 1], False, True)) y += dy # Create model for directional evaluation models.append(DirectionalDerivative([0, y], [1, 1])) y += dy # Create model for integral moments models.append(IntegralMoment("tetrahedron", 0, [0, y])) models.append(IntegralMoment("tetrahedron", 1, [0, y])) models.append(IntegralMoment("tetrahedron", 2, [0, y])) return models def pointing_outwards(x, n): "Check if n is pointing inwards, used for flipping dofs." eps = 1e-10 x = array(x) + 0.1 * array(n) return x[0] < -eps or x[1] < -eps or x[2] < -eps or x[2] > 1.0 - x[0] - x[1] + eps def to3d(x): "Make sure point is 3D." if len(x) == 2: x = (x[0], x[1], 0.0) return x def arnold_winther_dofs(element): "Special fix for Arnold-Winther elements until Rob fixes in FIAT." if not element.cell().cellname() == "triangle": error("Unable to plot element, only know how to plot Mardal-Tai-Winther on triangles.") return [("PointEval", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointScaledNormalEval", {(1.0 / 5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(2.0 / 5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(3.0 / 5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(4.0 / 5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(4.0 / 5, 1.0 / 5.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(3.0 / 5, 2.0 / 5.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(2.0 / 5, 3.0 / 5.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(1.0 / 5, 4.0 / 5.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 1.0 / 5.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 2.0 / 5.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 3.0 / 5.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 4.0 / 5.0): [(-1.0, (0,)), (0.0, (1,))]}), ("IntegralMoment", None), ("IntegralMoment", None), ("IntegralMoment", None)] def argyris_dofs(element): "Special 
fix for Hermite elements until Rob fixes in FIAT." if not element.degree() == 5: error("Unable to plot element, only know how to plot quintic Argyris elements.") if not element.cell().cellname() == "triangle": error("Unable to plot element, only know how to plot Argyris on triangles.") return [("PointEval", {(0.0, 0.0): [(1.0, ())]}), ("PointEval", {(1.0, 0.0): [(1.0, ())]}), ("PointEval", {(0.0, 1.0): [(1.0, ())]}), ("PointDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof twice ("PointSecondDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointSecondDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointNormalDeriv", {(0.5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointNormalDeriv", {(0.5, 0.5): [(1.0, (0,)), (1.0, (1,))]}), ("PointNormalDeriv", {(0.0, 0.5): [(-1.0, (0,)), (0.0, (1,))]})] def hermite_dofs(element): "Special fix for Hermite elements until Rob fixes in FIAT." 
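    # Illustrative note (added sketch, not original FFC code): the hard-coded
    # lists below mimic FIAT's dual basis for Hermite elements: one PointEval
    # per vertex, the PointDeriv entries repeated once per spatial component
    # (the "same dof" hack noted inline), plus extra PointEvals at the
    # barycentre of the triangle in 2D and at each face barycentre in 3D.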
dofs_2d = [("PointEval", {(0.0, 0.0): [(1.0, ())]}), ("PointEval", {(1.0, 0.0): [(1.0, ())]}), ("PointEval", {(0.0, 1.0): [(1.0, ())]}), ("PointDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(1.0, 0.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof twice ("PointDeriv", {(0.0, 1.0): [(1.0, ())]}), # hack, same dof twice ("PointEval", {(1.0 / 3, 1.0 / 3): [(1.0, ())]})] dofs_3d = [("PointEval", {(0.0, 0.0, 0.0): [(1.0, ())]}), ("PointEval", {(1.0, 0.0, 0.0): [(1.0, ())]}), ("PointEval", {(0.0, 1.0, 0.0): [(1.0, ())]}), ("PointEval", {(0.0, 0.0, 1.0): [(1.0, ())]}), ("PointDeriv", {(0.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(1.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(1.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(1.0, 0.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 1.0, 0.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointDeriv", {(0.0, 0.0, 1.0): [(1.0, ())]}), # hack, same dof three times ("PointEval", {(1.0 / 3, 1.0 / 3, 1.0 / 3): [(1.0, ())]}), ("PointEval", {(0.0, 1.0 / 3, 1.0 / 3): [(1.0, ())]}), ("PointEval", {(1.0 / 3, 0.0, 1.0 / 3): [(1.0, ())]}), ("PointEval", {(1.0 / 3, 1.0 / 3, 0.0): [(1.0, ())]})] if element.cell().cellname() == "triangle": return dofs_2d else: return dofs_3d def mardal_tai_winther_dofs(element): "Special fix for Mardal-Tai-Winther elements until Rob fixes in FIAT." if not element.cell().cellname() == "triangle": error("Unable to plot element, only know how to plot Mardal-Tai-Winther on triangles.") return [("PointScaledNormalEval", {(1.0 / 3, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(2.0 / 3, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointScaledNormalEval", {(2.0 / 3, 1.0 / 3.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(1.0 / 3, 2.0 / 3.0): [(1.0, (0,)), (1.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 1.0 / 3.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointScaledNormalEval", {(0.0, 2.0 / 3.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointEdgeTangent", {(0.5, 0.0): [(-1.0, (0,)), (0.0, (1,))]}), ("PointEdgeTangent", {(0.5, 0.5): [(-1.0, (0,)), (1.0, (1,))]}), ("PointEdgeTangent", {(0.0, 0.5): [(0.0, (0,)), (-1.0, (1,))]})] def morley_dofs(element): "Special fix for Morley elements until Rob fixes in FIAT." 
if not element.cell().cellname() == "triangle": error("Unable to plot element, only know how to plot Morley on triangles.") return [("PointEval", {(0.0, 0.0): [(1.0, ())]}), ("PointEval", {(1.0, 0.0): [(1.0, ())]}), ("PointEval", {(0.0, 1.0): [(1.0, ())]}), ("PointNormalDeriv", {(0.5, 0.0): [(0.0, (0,)), (-1.0, (1,))]}), ("PointNormalDeriv", {(0.5, 0.5): [(1.0, (0,)), (1.0, (1,))]}), ("PointNormalDeriv", {(0.0, 0.5): [(-1.0, (0,)), (0.0, (1,))]})] ffc-2019.1.0.post0/ffc/quadrature/000077500000000000000000000000001346026536000164665ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/quadrature/__init__.py000066400000000000000000000003551346026536000206020ustar00rootroot00000000000000# -*- coding: utf-8 -*- from ffc.quadrature.quadraturerepresentation import compute_integral_ir from ffc.quadrature.quadratureoptimization import optimize_integral_ir from ffc.quadrature.quadraturegenerator import generate_integral_code ffc-2019.1.0.post0/ffc/quadrature/codesnippets.py000066400000000000000000001123411346026536000215420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2007-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Kristian B. Oelgaard 2010-2013 # Modified by Marie Rognes 2007-2012 # Modified by Peter Brune 2009 # Modified by Martin Sandve Alnæs, 2013 # # First added: 2007-02-28 # Last changed: 2014-06-10 "Code snippets for code generation." __all__ = ["comment_ufc", "comment_dolfin", "header_h", "header_c", "footer", "compute_jacobian", "compute_jacobian_inverse", "eval_basis_decl", "eval_basis_init", "eval_basis", "eval_basis_copy", "eval_derivs_decl", "eval_derivs_init", "eval_derivs", "eval_derivs_copy"] __old__ = ["evaluate_f", "facet_determinant", "map_onto_physical", "fiat_coordinate_map", "transform_snippet", "scale_factor", "combinations_snippet", "normal_direction", "facet_normal", "ip_coordinates", "cell_volume", "circumradius", "facet_area", "min_facet_edge_length", "max_facet_edge_length", "orientation_snippet"] __all__ += __old__ comment_ufc = """\ // This code conforms with the UFC specification version %(ufc_version)s // and was automatically generated by FFC version %(ffc_version)s. """ comment_dolfin = """\ // This code conforms with the UFC specification version %(ufc_version)s // and was automatically generated by FFC version %(ffc_version)s. // // This code was generated with the option '-l dolfin' and // contains DOLFIN-specific wrappers that depend on DOLFIN. 
""" # Code snippets for headers and footers header_h = """\ #ifndef __%(prefix_upper)s_H #define __%(prefix_upper)s_H """ header_c = """\ #include "%(prefix)s.h" """ footer = """\ #endif """ # Code snippets for computing Jacobians _compute_jacobian_interval_1d = """\ // Compute Jacobian double J%(restriction)s[1]; compute_jacobian_interval_1d(J%(restriction)s, coordinate_dofs%(restriction)s); """ _compute_jacobian_interval_2d = """\ // Compute Jacobian double J%(restriction)s[2]; compute_jacobian_interval_2d(J%(restriction)s, coordinate_dofs%(restriction)s); """ _compute_jacobian_interval_3d = """\ // Compute Jacobian double J%(restriction)s[3]; compute_jacobian_interval_3d(J%(restriction)s, coordinate_dofs%(restriction)s); """ _compute_jacobian_triangle_2d = """\ // Compute Jacobian double J%(restriction)s[4]; compute_jacobian_triangle_2d(J%(restriction)s, coordinate_dofs%(restriction)s); """ _compute_jacobian_triangle_3d = """\ // Compute Jacobian double J%(restriction)s[6]; compute_jacobian_triangle_3d(J%(restriction)s, coordinate_dofs%(restriction)s); """ _compute_jacobian_tetrahedron_3d = """\ // Compute Jacobian double J%(restriction)s[9]; compute_jacobian_tetrahedron_3d(J%(restriction)s, coordinate_dofs%(restriction)s); """ compute_jacobian = {1: {1: _compute_jacobian_interval_1d, 2: _compute_jacobian_interval_2d, 3: _compute_jacobian_interval_3d}, 2: {2: _compute_jacobian_triangle_2d, 3: _compute_jacobian_triangle_3d}, 3: {3: _compute_jacobian_tetrahedron_3d}} # Code snippets for computing Jacobian inverses _compute_jacobian_inverse_interval_1d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[1]; double detJ%(restriction)s; compute_jacobian_inverse_interval_1d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ _compute_jacobian_inverse_interval_2d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[2]; double detJ%(restriction)s; compute_jacobian_inverse_interval_2d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ _compute_jacobian_inverse_interval_3d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[3]; double detJ%(restriction)s; compute_jacobian_inverse_interval_3d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ _compute_jacobian_inverse_triangle_2d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[4]; double detJ%(restriction)s; compute_jacobian_inverse_triangle_2d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ _compute_jacobian_inverse_triangle_3d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[6]; double detJ%(restriction)s; compute_jacobian_inverse_triangle_3d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ _compute_jacobian_inverse_tetrahedron_3d = """\ // Compute Jacobian inverse and determinant double K%(restriction)s[9]; double detJ%(restriction)s; compute_jacobian_inverse_tetrahedron_3d(K%(restriction)s, detJ%(restriction)s, J%(restriction)s); """ compute_jacobian_inverse = {1: {1: _compute_jacobian_inverse_interval_1d, 2: _compute_jacobian_inverse_interval_2d, 3: _compute_jacobian_inverse_interval_3d}, 2: {2: _compute_jacobian_inverse_triangle_2d, 3: _compute_jacobian_inverse_triangle_3d}, 3: {3: _compute_jacobian_inverse_tetrahedron_3d}} # Code snippet for scale factor scale_factor = """\ // Set scale factor const double det = std::abs(detJ);""" # FIXME: Old stuff below that should be cleaned up or moved to ufc_geometry.h orientation_snippet = """ // Check orientation if 
(cell_orientation%(restriction)s == -1) throw std::runtime_error("cell orientation must be defined (not -1)"); // (If cell_orientation == 1 = down, multiply det(J) by -1) else if (cell_orientation%(restriction)s == 1) detJ%(restriction)s *= -1; """ evaluate_f = "f.evaluate(vals, y, c);" _facet_determinant_1D = """\ // Facet determinant 1D (vertex) const double det = 1.0;""" _facet_determinant_2D = """\ // Get vertices on edge static unsigned int edge_vertices[3][2] = {{1, 2}, {0, 2}, {0, 1}}; const unsigned int v0 = edge_vertices[facet%(restriction)s][0]; const unsigned int v1 = edge_vertices[facet%(restriction)s][1]; // Compute scale factor (length of edge scaled by length of reference interval) const double dx0 = coordinate_dofs%(restriction)s[2*v1 + 0] - coordinate_dofs%(restriction)s[2*v0 + 0]; const double dx1 = coordinate_dofs%(restriction)s[2*v1 + 1] - coordinate_dofs%(restriction)s[2*v0 + 1]; const double det = std::sqrt(dx0*dx0 + dx1*dx1); """ _facet_determinant_2D_1D = """\ // Facet determinant 1D in 2D (vertex) const double det = 1.0; """ _facet_determinant_3D = """\ // Get vertices on face static unsigned int face_vertices[4][3] = {{1, 2, 3}, {0, 2, 3}, {0, 1, 3}, {0, 1, 2}}; const unsigned int v0 = face_vertices[facet%(restriction)s][0]; const unsigned int v1 = face_vertices[facet%(restriction)s][1]; const unsigned int v2 = face_vertices[facet%(restriction)s][2]; // Compute scale factor (area of face scaled by area of reference triangle) const double a0 = (coordinate_dofs%(restriction)s[3*v0 + 1]*coordinate_dofs%(restriction)s[3*v1 + 2] + coordinate_dofs%(restriction)s[3*v0 + 2]*coordinate_dofs%(restriction)s[3*v2 + 1] + coordinate_dofs%(restriction)s[3*v1 + 1]*coordinate_dofs%(restriction)s[3*v2 + 2]) - (coordinate_dofs%(restriction)s[3*v2 + 1]*coordinate_dofs%(restriction)s[3*v1 + 2] + coordinate_dofs%(restriction)s[3*v2 + 2]*coordinate_dofs%(restriction)s[3*v0 + 1] + coordinate_dofs%(restriction)s[3*v1 + 1]*coordinate_dofs%(restriction)s[3*v0 + 2]); const double a1 = (coordinate_dofs%(restriction)s[3*v0 + 2]*coordinate_dofs%(restriction)s[3*v1 + 0] + coordinate_dofs%(restriction)s[3*v0 + 0]*coordinate_dofs%(restriction)s[3*v2 + 2] + coordinate_dofs%(restriction)s[3*v1 + 2]*coordinate_dofs%(restriction)s[3*v2 + 0]) - (coordinate_dofs%(restriction)s[3*v2 + 2]*coordinate_dofs%(restriction)s[3*v1 + 0] + coordinate_dofs%(restriction)s[3*v2 + 0]*coordinate_dofs%(restriction)s[3*v0 + 2] + coordinate_dofs%(restriction)s[3*v1 + 2]*coordinate_dofs%(restriction)s[3*v0 + 0]); const double a2 = (coordinate_dofs%(restriction)s[3*v0 + 0]*coordinate_dofs%(restriction)s[3*v1 + 1] + coordinate_dofs%(restriction)s[3*v0 + 1]*coordinate_dofs%(restriction)s[3*v2 + 0] + coordinate_dofs%(restriction)s[3*v1 + 0]*coordinate_dofs%(restriction)s[3*v2 + 1]) - (coordinate_dofs%(restriction)s[3*v2 + 0]*coordinate_dofs%(restriction)s[3*v1 + 1] + coordinate_dofs%(restriction)s[3*v2 + 1]*coordinate_dofs%(restriction)s[3*v0 + 0] + coordinate_dofs%(restriction)s[3*v1 + 0]*coordinate_dofs%(restriction)s[3*v0 + 1]); const double det = std::sqrt(a0*a0 + a1*a1 + a2*a2); """ _facet_determinant_3D_2D = """\ // Facet determinant 2D in 3D (edge) // Get vertices on edge static unsigned int edge_vertices[3][2] = {{1, 2}, {0, 2}, {0, 1}}; const unsigned int v0 = edge_vertices[facet%(restriction)s][0]; const unsigned int v1 = edge_vertices[facet%(restriction)s][1]; // Compute scale factor (length of edge scaled by length of reference interval) const double dx0 = coordinate_dofs%(restriction)s[3*v1 + 0] - 
coordinate_dofs%(restriction)s[3*v0 + 0]; const double dx1 = coordinate_dofs%(restriction)s[3*v1 + 1] - coordinate_dofs%(restriction)s[3*v0 + 1]; const double dx2 = coordinate_dofs%(restriction)s[3*v1 + 2] - coordinate_dofs%(restriction)s[3*v0 + 2]; const double det = std::sqrt(dx0*dx0 + dx1*dx1 + dx2*dx2); """ _facet_determinant_3D_1D = """\ // Facet determinant 1D in 3D (vertex) const double det = 1.0; """ _normal_direction_1D = """\ const bool direction = facet%(restriction)s == 0 ? coordinate_dofs%(restriction)s[0] > coordinate_dofs%(restriction)s[1] : coordinate_dofs%(restriction)s[1] > coordinate_dofs%(restriction)s[0]; """ _normal_direction_2D = """\ const bool direction = dx1*(coordinate_dofs%(restriction)s[2*%(facet)s] - coordinate_dofs%(restriction)s[2*v0]) - dx0*(coordinate_dofs%(restriction)s[2*%(facet)s + 1] - coordinate_dofs%(restriction)s[2*v0 + 1]) < 0; """ _normal_direction_3D = """\ const bool direction = a0*(coordinate_dofs%(restriction)s[3*%(facet)s] - coordinate_dofs%(restriction)s[3*v0]) + a1*(coordinate_dofs%(restriction)s[3*%(facet)s + 1] - coordinate_dofs%(restriction)s[3*v0 + 1]) + a2*(coordinate_dofs%(restriction)s[3*%(facet)s + 2] - coordinate_dofs%(restriction)s[3*v0 + 2]) < 0; """ # MER: Coding all up in _facet_normal_ND_M_D for now; these are # therefore empty. _normal_direction_2D_1D = "" _normal_direction_3D_2D = "" _normal_direction_3D_1D = "" _facet_normal_1D = """ // Facet normals are 1.0 or -1.0: (-1.0) <-- X------X --> (1.0) const double n%(restriction)s = %(direction)sdirection ? 1.0 : -1.0;""" _facet_normal_2D = """\ // Compute facet normals from the facet scale factor constants const double n%(restriction)s0 = %(direction)sdirection ? dx1 / det : -dx1 / det; const double n%(restriction)s1 = %(direction)sdirection ? -dx0 / det : dx0 / det;""" _facet_normal_2D_1D = """ // Compute facet normal double n%(restriction)s0 = 0.0; double n%(restriction)s1 = 0.0; if (facet%(restriction)s == 0) { n%(restriction)s0 = coordinate_dofs%(restriction)s[0] - coordinate_dofs%(restriction)s[2]; n%(restriction)s1 = coordinate_dofs%(restriction)s[1] - coordinate_dofs%(restriction)s[3]; } else { n%(restriction)s0 = coordinate_dofs%(restriction)s[2] - coordinate_dofs%(restriction)s[0]; n%(restriction)s1 = coordinate_dofs%(restriction)s[3] - coordinate_dofs%(restriction)s[1]; } const double n%(restriction)s_length = std::sqrt(n%(restriction)s0*n%(restriction)s0 + n%(restriction)s1*n%(restriction)s1); n%(restriction)s0 /= n%(restriction)s_length; n%(restriction)s1 /= n%(restriction)s_length; """ _facet_normal_3D = """ const double n%(restriction)s0 = %(direction)sdirection ? a0 / det : -a0 / det; const double n%(restriction)s1 = %(direction)sdirection ? a1 / det : -a1 / det; const double n%(restriction)s2 = %(direction)sdirection ? 
a2 / det : -a2 / det"; _facet_normal_3D_2D = """ // Compute facet normal for triangles in 3D const unsigned int vertex%(restriction)s0 = facet%(restriction)s; // Get coordinates corresponding the vertex opposite this // static unsigned int edge_vertices[3][2] = {{1, 2}, {0, 2}, {0, 1}}; const unsigned int vertex%(restriction)s1 = edge_vertices[facet%(restriction)s][0]; const unsigned int vertex%(restriction)s2 = edge_vertices[facet%(restriction)s][1]; // Define vectors n = (p2 - p0) and t = normalized (p2 - p1) double n%(restriction)s0 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 0] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s0 + 0]; double n%(restriction)s1 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 1] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s0 + 1]; double n%(restriction)s2 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 2] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s0 + 2]; double t%(restriction)s0 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 0] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s1 + 0]; double t%(restriction)s1 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 1] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s1 + 1]; double t%(restriction)s2 = coordinate_dofs%(restriction)s[3*vertex%(restriction)s2 + 2] - coordinate_dofs%(restriction)s[3*vertex%(restriction)s1 + 2]; const double t%(restriction)s_length = std::sqrt(t%(restriction)s0*t%(restriction)s0 + t%(restriction)s1*t%(restriction)s1 + t%(restriction)s2*t%(restriction)s2); t%(restriction)s0 /= t%(restriction)s_length; t%(restriction)s1 /= t%(restriction)s_length; t%(restriction)s2 /= t%(restriction)s_length; // Subtract, the projection of (p2 - p0) onto (p2 - p1), from (p2 - p0) const double ndott%(restriction)s = t%(restriction)s0*n%(restriction)s0 + t%(restriction)s1*n%(restriction)s1 + t%(restriction)s2*n%(restriction)s2; n%(restriction)s0 -= ndott%(restriction)s*t%(restriction)s0; n%(restriction)s1 -= ndott%(restriction)s*t%(restriction)s1; n%(restriction)s2 -= ndott%(restriction)s*t%(restriction)s2; const double n%(restriction)s_length = std::sqrt(n%(restriction)s0*n%(restriction)s0 + n%(restriction)s1*n%(restriction)s1 + n%(restriction)s2*n%(restriction)s2); // Normalize n%(restriction)s0 /= n%(restriction)s_length; n%(restriction)s1 /= n%(restriction)s_length; n%(restriction)s2 /= n%(restriction)s_length; """ _facet_normal_3D_1D = """ // Compute facet normal double n%(restriction)s0 = 0.0; double n%(restriction)s1 = 0.0; double n%(restriction)s2 = 0.0; if (facet%(restriction)s == 0) { n%(restriction)s0 = coordinate_dofs%(restriction)s[0] - coordinate_dofs%(restriction)s[3]; n%(restriction)s1 = coordinate_dofs%(restriction)s[1] - coordinate_dofs%(restriction)s[4]; n%(restriction)s2 = coordinate_dofs%(restriction)s[2] - coordinate_dofs%(restriction)s[5]; } else { n%(restriction)s0 = coordinate_dofs%(restriction)s[3] - coordinate_dofs%(restriction)s[0]; n%(restriction)s1 = coordinate_dofs%(restriction)s[4] - coordinate_dofs%(restriction)s[1]; n%(restriction)s2 = coordinate_dofs%(restriction)s[5] - coordinate_dofs%(restriction)s[2]; } const double n%(restriction)s_length = std::sqrt(n%(restriction)s0*n%(restriction)s0 + n%(restriction)s1*n%(restriction)s1 + n%(restriction)s2*n%(restriction)s2); n%(restriction)s0 /= n%(restriction)s_length; n%(restriction)s1 /= n%(restriction)s_length; n%(restriction)s2 /= n%(restriction)s_length; """ _cell_volume_1D = """\ // Compute cell volume const double
volume%(restriction)s = std::abs(detJ%(restriction)s); """ _cell_volume_2D = """\ // Compute cell volume const double volume%(restriction)s = std::abs(detJ%(restriction)s)/2.0; """ _cell_volume_2D_1D = """\ // Compute cell volume of interval in 2D const double volume%(restriction)s = std::abs(detJ%(restriction)s); """ _cell_volume_3D = """\ // Compute cell volume const double volume%(restriction)s = std::abs(detJ%(restriction)s)/6.0; """ _cell_volume_3D_1D = """\ // Compute cell volume of interval in 3D const double volume%(restriction)s = std::abs(detJ%(restriction)s); """ _cell_volume_3D_2D = """\ // Compute cell volume of triangle in 3D const double volume%(restriction)s = std::abs(detJ%(restriction)s)/2.0; """ _circumradius_1D = """\ // Compute circumradius; in 1D it is equal to half the cell length const double circumradius%(restriction)s = std::abs(detJ%(restriction)s)/2.0; """ _circumradius_2D = """\ // Compute circumradius of triangle in 2D const double v1v2%(restriction)s = std::sqrt((coordinate_dofs%(restriction)s[4] - coordinate_dofs%(restriction)s[2])*(coordinate_dofs%(restriction)s[4] - coordinate_dofs%(restriction)s[2]) + (coordinate_dofs%(restriction)s[5] - coordinate_dofs%(restriction)s[3])*(coordinate_dofs%(restriction)s[5] - coordinate_dofs%(restriction)s[3]) ); const double v0v2%(restriction)s = std::sqrt(J%(restriction)s[3]*J%(restriction)s[3] + J%(restriction)s[1]*J%(restriction)s[1]); const double v0v1%(restriction)s = std::sqrt(J%(restriction)s[0]*J%(restriction)s[0] + J%(restriction)s[2]*J%(restriction)s[2]); const double circumradius%(restriction)s = 0.25*(v1v2%(restriction)s*v0v2%(restriction)s*v0v1%(restriction)s)/(volume%(restriction)s); """ _circumradius_2D_1D = """\ // Compute circumradius of interval in 3D (1/2 volume) const double circumradius%(restriction)s = std::abs(detJ%(restriction)s)/2.0; """ _circumradius_3D = """\ // Compute circumradius const double v1v2%(restriction)s = std::sqrt( (coordinate_dofs%(restriction)s[6] - coordinate_dofs%(restriction)s[3])*(coordinate_dofs%(restriction)s[6] - coordinate_dofs%(restriction)s[3]) + (coordinate_dofs%(restriction)s[7] - coordinate_dofs%(restriction)s[4])*(coordinate_dofs%(restriction)s[7] - coordinate_dofs%(restriction)s[4]) + (coordinate_dofs%(restriction)s[8] - coordinate_dofs%(restriction)s[5])*(coordinate_dofs%(restriction)s[8] - coordinate_dofs%(restriction)s[5]) ); const double v0v2%(restriction)s = std::sqrt(J%(restriction)s[1]*J%(restriction)s[1] + J%(restriction)s[4]*J%(restriction)s[4] + J%(restriction)s[7]*J%(restriction)s[7]); const double v0v1%(restriction)s = std::sqrt(J%(restriction)s[0]*J%(restriction)s[0] + J%(restriction)s[3]*J%(restriction)s[3] + J%(restriction)s[6]*J%(restriction)s[6]); const double v0v3%(restriction)s = std::sqrt(J%(restriction)s[2]*J%(restriction)s[2] + J%(restriction)s[5]*J%(restriction)s[5] + J%(restriction)s[8]*J%(restriction)s[8]); const double v1v3%(restriction)s = std::sqrt( (coordinate_dofs%(restriction)s[9] - coordinate_dofs%(restriction)s[3])*(coordinate_dofs%(restriction)s[9] - coordinate_dofs%(restriction)s[3]) + (coordinate_dofs%(restriction)s[10] - coordinate_dofs%(restriction)s[4])*(coordinate_dofs%(restriction)s[10] - coordinate_dofs%(restriction)s[4]) + (coordinate_dofs%(restriction)s[11] - coordinate_dofs%(restriction)s[5])*(coordinate_dofs%(restriction)s[11] - coordinate_dofs%(restriction)s[5]) ); const double v2v3%(restriction)s = std::sqrt( (coordinate_dofs%(restriction)s[9] - coordinate_dofs%(restriction)s[6])*(coordinate_dofs%(restriction)s[9] - 
coordinate_dofs%(restriction)s[6]) + (coordinate_dofs%(restriction)s[10] - coordinate_dofs%(restriction)s[7])*(coordinate_dofs%(restriction)s[10] - coordinate_dofs%(restriction)s[7]) + (coordinate_dofs%(restriction)s[11] - coordinate_dofs%(restriction)s[8])*(coordinate_dofs%(restriction)s[11] - coordinate_dofs%(restriction)s[8]) ); const double la%(restriction)s = v1v2%(restriction)s*v0v3%(restriction)s; const double lb%(restriction)s = v0v2%(restriction)s*v1v3%(restriction)s; const double lc%(restriction)s = v0v1%(restriction)s*v2v3%(restriction)s; const double s%(restriction)s = 0.5*(la%(restriction)s+lb%(restriction)s+lc%(restriction)s); const double area%(restriction)s = std::sqrt(s%(restriction)s*(s%(restriction)s-la%(restriction)s)*(s%(restriction)s-lb%(restriction)s)*(s%(restriction)s-lc%(restriction)s)); const double circumradius%(restriction)s = area%(restriction)s / ( 6.0*volume%(restriction)s ); """ _circumradius_3D_1D = """\ // Compute circumradius of interval in 3D (1/2 volume) const double circumradius%(restriction)s = std::abs(detJ%(restriction)s)/2.0; """ _circumradius_3D_2D = """\ // Compute circumradius of triangle in 3D const double v1v2%(restriction)s = std::sqrt( (coordinate_dofs%(restriction)s[6] - coordinate_dofs%(restriction)s[3])*(coordinate_dofs%(restriction)s[6] - coordinate_dofs%(restriction)s[3]) + (coordinate_dofs%(restriction)s[7] - coordinate_dofs%(restriction)s[4])*(coordinate_dofs%(restriction)s[7] - coordinate_dofs%(restriction)s[4]) + (coordinate_dofs%(restriction)s[8] - coordinate_dofs%(restriction)s[5])*(coordinate_dofs%(restriction)s[8] - coordinate_dofs%(restriction)s[5])); const double v0v2%(restriction)s = std::sqrt( J%(restriction)s[3]*J%(restriction)s[3] + J%(restriction)s[1]*J%(restriction)s[1] + J%(restriction)s[5]*J%(restriction)s[5]); const double v0v1%(restriction)s = std::sqrt( J%(restriction)s[0]*J%(restriction)s[0] + J%(restriction)s[2]*J%(restriction)s[2] + J%(restriction)s[4]*J%(restriction)s[4]); const double circumradius%(restriction)s = 0.25*(v1v2%(restriction)s*v0v2%(restriction)s*v0v1%(restriction)s)/(volume%(restriction)s); """ _facet_area_1D = """\ // Facet area (FIXME: Should this be 0.0?) const double facet_area = 1.0;""" _facet_area_2D = """\ // Facet area const double facet_area = det;""" _facet_area_2D_1D = """\ // Facet area const double facet_area = 1.0;""" _facet_area_3D = """\ // Facet area (divide by two because 'det' is scaled by area of reference triangle) const double facet_area = det/2.0;""" _facet_area_3D_1D = """\ // Facet area const double facet_area = 1.0;""" _facet_area_3D_2D = """\ // Facet area const double facet_area = det;""" evaluate_basis_dofmap = """\ unsigned int element = 0; unsigned int tmp = 0; for (unsigned int j = 0; j < %d; j++) { if (tmp + dofs_per_element[j] > i) { i -= tmp; element = element_types[j]; break; } else tmp += dofs_per_element[j]; }""" _min_facet_edge_length_3D = """\ // Min edge length of facet double min_facet_edge_length; compute_min_facet_edge_length_tetrahedron_3d(min_facet_edge_length, facet%(restriction)s, coordinate_dofs%(restriction)s); """ _max_facet_edge_length_3D = """\ // Max edge length of facet double max_facet_edge_length; compute_max_facet_edge_length_tetrahedron_3d(max_facet_edge_length, facet%(restriction)s, coordinate_dofs%(restriction)s); """ # FIXME: This is dead slow because of all the new calls # Used in evaluate_basis_derivatives. 
For second order derivatives in 2D it will # generate the combinations: [(0, 0), (0, 1), (1, 0), (1, 1)] (i.e., xx, xy, yx, yy) # which will also be the ordering of derivatives in the return value. combinations_snippet = """\ // Declare two dimensional array that holds combinations of derivatives and initialise unsigned int %(combinations)s[%(max_num_derivatives)s][%(max_degree)s]; for (unsigned int row = 0; row < %(max_num_derivatives)s; row++) { for (unsigned int col = 0; col < %(max_degree)s; col++) %(combinations)s[row][col] = 0; } // Generate combinations of derivatives for (unsigned int row = 1; row < %(num_derivatives)s; row++) { for (unsigned int num = 0; num < row; num++) { for (unsigned int col = %(n)s-1; col+1 > 0; col--) { if (%(combinations)s[row][col] + 1 > %(dimension-1)s) %(combinations)s[row][col] = 0; else { %(combinations)s[row][col] += 1; break; } } } }""" def _transform_snippet(tdim, gdim): if tdim == gdim: _t = "" _g = "" else: _t = "_t" _g = "_g" # Matricize K_ij -> {K_ij} matrix = "{{" + "}, {".join([", ".join(["K[%d]" % (t * gdim + g) for g in range(gdim)]) for t in range(tdim)]) + "}};\n\n" snippet = """\ // Compute inverse of Jacobian const double %%(K)s[%d][%d] = %s""" % (tdim, gdim, matrix) snippet += """// Declare transformation matrix // Declare pointer to two dimensional array and initialise double %%(transform)s[%%(max_g_deriv)s][%%(max_t_deriv)s]; for (unsigned int j = 0; j < %%(num_derivatives)s%(g)s; j++) { for (unsigned int k = 0; k < %%(num_derivatives)s%(t)s; k++) %%(transform)s[j][k] = 1; } // Construct transformation matrix for (unsigned int row = 0; row < %%(num_derivatives)s%(g)s; row++) { for (unsigned int col = 0; col < %%(num_derivatives)s%(t)s; col++) { for (unsigned int k = 0; k < %%(n)s; k++) %%(transform)s[row][col] *= %%(K)s[%%(combinations)s%(t)s[col][k]][%%(combinations)s%(g)s[row][k]]; } }""" % {"t": _t, "g": _g} return snippet # Codesnippets used in evaluate_dof _map_onto_physical_1D = """\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0]; const double w1 = X_%(i)d[%(j)s][0]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[1];""" _map_onto_physical_2D = """\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0] - X_%(i)d[%(j)s][1]; const double w1 = X_%(i)d[%(j)s][0]; const double w2 = X_%(i)d[%(j)s][1]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[2] + w2*coordinate_dofs[4]; y[1] = w0*coordinate_dofs[1] + w1*coordinate_dofs[3] + w2*coordinate_dofs[5];""" _map_onto_physical_2D_1D = """\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0]; const double w1 = X_%(i)d[%(j)s][0]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[2]; y[1] = w0*coordinate_dofs[1] + w1*coordinate_dofs[3];""" _map_onto_physical_3D = """\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0] - X_%(i)d[%(j)s][1] - X_%(i)d[%(j)s][2]; const double w1 = X_%(i)d[%(j)s][0]; const double w2 = X_%(i)d[%(j)s][1]; const double w3 = X_%(i)d[%(j)s][2]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[3] + w2*coordinate_dofs[6] + w3*coordinate_dofs[9]; y[1] = w0*coordinate_dofs[1] + w1*coordinate_dofs[4] + w2*coordinate_dofs[7] + w3*coordinate_dofs[10]; y[2] = w0*coordinate_dofs[2] + w1*coordinate_dofs[5] + w2*coordinate_dofs[8] + w3*coordinate_dofs[11];""" _map_onto_physical_3D_1D = 
"""\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0]; const double w1 = X_%(i)d[%(j)s][0]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[3]; y[1] = w0*coordinate_dofs[1] + w1*coordinate_dofs[4]; y[2] = w0*coordinate_dofs[2] + w1*coordinate_dofs[5];""" _map_onto_physical_3D_2D = """\ // Evaluate basis functions for affine mapping const double w0 = 1.0 - X_%(i)d[%(j)s][0] - X_%(i)d[%(j)s][1]; const double w1 = X_%(i)d[%(j)s][0]; const double w2 = X_%(i)d[%(j)s][1]; // Compute affine mapping y = F(X) y[0] = w0*coordinate_dofs[0] + w1*coordinate_dofs[3] + w2*coordinate_dofs[6]; y[1] = w0*coordinate_dofs[1] + w1*coordinate_dofs[4] + w2*coordinate_dofs[7]; y[2] = w0*coordinate_dofs[2] + w1*coordinate_dofs[5] + w2*coordinate_dofs[8]; """ _ip_coordinates_1D = """\ X%(num_ip)d[0] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[1];""" _ip_coordinates_2D_1D = """\ X%(num_ip)d[0] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[2]; X%(num_ip)d[1] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[1] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[3];""" _ip_coordinates_3D_1D = """\ X%(num_ip)d[0] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[3]; X%(num_ip)d[1] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[1] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[4]; X%(num_ip)d[2] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[2] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[5];""" _ip_coordinates_2D = """\ X%(num_ip)d[0] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[2] + %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[4]; X%(num_ip)d[1] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[1] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[3] + %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[5];""" _ip_coordinates_3D_2D = """\ X%(num_ip)d[0] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[3] +\ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[6]; X%(num_ip)d[1] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[1] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[4] +\ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[7]; X%(num_ip)d[2] =\ %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[2] +\ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[5] +\ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[8];""" _ip_coordinates_3D = """\ X%(num_ip)d[0] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[0] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[3] + \ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[6] + \ %(name)s[%(ip)s][3]*coordinate_dofs%(restriction)s[9]; X%(num_ip)d[1] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[1] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[4] + \ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[7] + \ %(name)s[%(ip)s][3]*coordinate_dofs%(restriction)s[10]; X%(num_ip)d[2] = %(name)s[%(ip)s][0]*coordinate_dofs%(restriction)s[2] + \ %(name)s[%(ip)s][1]*coordinate_dofs%(restriction)s[5] + \ %(name)s[%(ip)s][2]*coordinate_dofs%(restriction)s[8] + \ %(name)s[%(ip)s][3]*coordinate_dofs%(restriction)s[11];""" # Codesnippets used in evaluatebasis[|derivatives] _map_coordinates_FIAT_interval = """\ // Get 
coordinates and map to the reference (FIAT) element double X = (2.0*x[0] - coordinate_dofs[0] - coordinate_dofs[1]) / J[0];""" _map_coordinates_FIAT_interval_in_2D = """\ // Get coordinates and map to the reference (FIAT) element double X = 2*(std::sqrt(std::pow(x[0] - coordinate_dofs[0], 2) + std::pow(x[1] - coordinate_dofs[1], 2)) / detJ) - 1.0;""" _map_coordinates_FIAT_interval_in_3D = """\ // Get coordinates and map to the reference (FIAT) element double X = 2*(std::sqrt(std::pow(x[0] - coordinate_dofs[0], 2) + std::pow(x[1] - coordinate_dofs[1], 2) + std::pow(x[2] - coordinate_dofs[2], 2))/ detJ) - 1.0;""" _map_coordinates_FIAT_triangle = """\ // Compute constants const double C0 = coordinate_dofs[2] + coordinate_dofs[4]; const double C1 = coordinate_dofs[3] + coordinate_dofs[5]; // Get coordinates and map to the reference (FIAT) element double X = (J[1]*(C1 - 2.0*x[1]) + J[3]*(2.0*x[0] - C0)) / detJ; double Y = (J[0]*(2.0*x[1] - C1) + J[2]*(C0 - 2.0*x[0])) / detJ;""" _map_coordinates_FIAT_triangle_in_3D = """\ const double b0 = coordinate_dofs[0]; const double b1 = coordinate_dofs[1]; const double b2 = coordinate_dofs[2]; // P_FFC = J^dag (p - b), P_FIAT = 2*P_FFC - (1, 1) double X = 2*(K[0]*(x[0] - b0) + K[1]*(x[1] - b1) + K[2]*(x[2] - b2)) - 1.0; double Y = 2*(K[3]*(x[0] - b0) + K[4]*(x[1] - b1) + K[5]*(x[2] - b2)) - 1.0; """ _map_coordinates_FIAT_tetrahedron = """\ // Compute constants const double C0 = coordinate_dofs[9] + coordinate_dofs[6] + coordinate_dofs[3] - coordinate_dofs[0]; const double C1 = coordinate_dofs[10] + coordinate_dofs[7] + coordinate_dofs[4] - coordinate_dofs[1]; const double C2 = coordinate_dofs[11] + coordinate_dofs[8] + coordinate_dofs[5] - coordinate_dofs[2]; // Compute subdeterminants const double d_00 = J[4]*J[8] - J[5]*J[7]; const double d_01 = J[5]*J[6] - J[3]*J[8]; const double d_02 = J[3]*J[7] - J[4]*J[6]; const double d_10 = J[2]*J[7] - J[1]*J[8]; const double d_11 = J[0]*J[8] - J[2]*J[6]; const double d_12 = J[1]*J[6] - J[0]*J[7]; const double d_20 = J[1]*J[5] - J[2]*J[4]; const double d_21 = J[2]*J[3] - J[0]*J[5]; const double d_22 = J[0]*J[4] - J[1]*J[3]; // Get coordinates and map to the reference (FIAT) element double X = (d_00*(2.0*x[0] - C0) + d_10*(2.0*x[1] - C1) + d_20*(2.0*x[2] - C2)) / detJ; double Y = (d_01*(2.0*x[0] - C0) + d_11*(2.0*x[1] - C1) + d_21*(2.0*x[2] - C2)) / detJ; double Z = (d_02*(2.0*x[0] - C0) + d_12*(2.0*x[1] - C1) + d_22*(2.0*x[2] - C2)) / detJ; """ # Mappings to code snippets used by format These dictionaries accept # as keys: first the topological dimension, and second the geometric # dimension facet_determinant = {1: {1: _facet_determinant_1D, 2: _facet_determinant_2D_1D, 3: _facet_determinant_3D_1D}, 2: {2: _facet_determinant_2D, 3: _facet_determinant_3D_2D}, 3: {3: _facet_determinant_3D}} # Geometry related snippets map_onto_physical = {1: {1: _map_onto_physical_1D, 2: _map_onto_physical_2D_1D, 3: _map_onto_physical_3D_1D}, 2: {2: _map_onto_physical_2D, 3: _map_onto_physical_3D_2D}, 3: {3: _map_onto_physical_3D}} fiat_coordinate_map = {"interval": {1: _map_coordinates_FIAT_interval, 2: _map_coordinates_FIAT_interval_in_2D, 3: _map_coordinates_FIAT_interval_in_3D}, "triangle": {2: _map_coordinates_FIAT_triangle, 3: _map_coordinates_FIAT_triangle_in_3D}, "tetrahedron": {3: _map_coordinates_FIAT_tetrahedron}} transform_snippet = {"interval": {1: _transform_snippet(1, 1), 2: _transform_snippet(1, 2), 3: _transform_snippet(1, 3)}, "triangle": {2: _transform_snippet(2, 2), 3: _transform_snippet(2, 3)}, "tetrahedron": 
{3: _transform_snippet(3, 3)}} ip_coordinates = {1: {1: (3, _ip_coordinates_1D), 2: (6, _ip_coordinates_2D_1D), 3: (9, _ip_coordinates_3D_1D)}, 2: {2: (10, _ip_coordinates_2D), 3: (15, _ip_coordinates_3D_2D)}, 3: {3: (21, _ip_coordinates_3D)}} # FIXME: Rename as in compute_jacobian _compute_foo__d normal_direction = {1: {1: _normal_direction_1D, 2: _normal_direction_2D_1D, 3: _normal_direction_3D_1D}, 2: {2: _normal_direction_2D, 3: _normal_direction_3D_2D}, 3: {3: _normal_direction_3D}} facet_normal = {1: {1: _facet_normal_1D, 2: _facet_normal_2D_1D, 3: _facet_normal_3D_1D}, 2: {2: _facet_normal_2D, 3: _facet_normal_3D_2D}, 3: {3: _facet_normal_3D}} cell_volume = {1: {1: _cell_volume_1D, 2: _cell_volume_2D_1D, 3: _cell_volume_3D_1D}, 2: {2: _cell_volume_2D, 3: _cell_volume_3D_2D}, 3: {3: _cell_volume_3D}} circumradius = {1: {1: _circumradius_1D, 2: _circumradius_2D_1D, 3: _circumradius_3D_1D}, 2: {2: _circumradius_2D, 3: _circumradius_3D_2D}, 3: {3: _circumradius_3D}} facet_area = {1: {1: _facet_area_1D, 2: _facet_area_2D_1D, 3: _facet_area_3D_1D}, 2: {2: _facet_area_2D, 3: _facet_area_3D_2D}, 3: {3: _facet_area_3D}} min_facet_edge_length = {3: {3: _min_facet_edge_length_3D}} max_facet_edge_length = {3: {3: _max_facet_edge_length_3D}} # Code snippets for runtime quadrature (calling evaluate_basis) eval_basis_decl = """\ std::vector > %(table_name)s(num_quadrature_points);""" eval_basis_init = """\ for (std::size_t ip = 0; ip < num_quadrature_points; ip++) %(table_name)s[ip].resize(%(table_size)s);""" eval_basis = """\ // Get current quadrature point and compute values of basis functions const double* x = quadrature_points + ip*%(gdim)s; const double* v = coordinate_dofs + %(vertex_offset)s; %(classname)s %(classname)s_obj; %(classname)s_obj.evaluate_basis_all(%(eval_name)s, x, v, cell_orientation);""" eval_basis_copy = """\ // Copy values to table %(table_name)s for (std::size_t i = 0; i < %(space_dim)s; i++) %(table_name)s[ip][%(table_offset)s + i] = %(eval_name)s[%(eval_stride)s*i + %(eval_offset)s];""" eval_derivs_decl = """\ std::vector > %(table_name)s(num_quadrature_points);""" eval_derivs_init = """\ for (std::size_t ip = 0; ip < num_quadrature_points; ip++) %(table_name)s[ip].resize(%(table_size)s);""" eval_derivs = """\ // Get current quadrature point and compute values of basis function derivatives const double* x = quadrature_points + ip*%(gdim)s; const double* v = coordinate_dofs + %(vertex_offset)s; %(classname)s %(classname)s_obj; %(classname)s_obj.evaluate_basis_derivatives_all(%(n)s, %(eval_name)s, x, v, cell_orientation);""" eval_derivs_copy = """\ // Copy values to table %(table_name)s for (std::size_t i = 0; i < %(space_dim)s; i++) %(table_name)s[ip][%(table_offset)s + i] = %(eval_name)s[%(eval_stride)s*i + %(eval_offset)s];""" ffc-2019.1.0.post0/ffc/quadrature/cpp.py000066400000000000000000000746251346026536000176400ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This module defines rules and algorithms for generating C++ code." # Copyright (C) 2009-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Kristian B. Oelgaard 2011 # Modified by Marie E. Rognes 2010 # Modified by Martin Sandve Alnæs 2013-2017 # Modified by Chris Richardson 2017 # Python modules import re import numpy import platform # UFL modules from ufl import custom_integral_types # FFC modules from ffc.log import debug, error # Default precision for formatting floats default_precision = numpy.finfo("double").precision + 1 # == 16 # ufc class names def make_classname(prefix, basename, signature): pre = prefix.lower() + "_" if prefix else "" sig = str(signature).lower() return "%s%s_%s" % (pre, basename, sig) def make_integral_classname(prefix, integral_type, form_id, subdomain_id): basename = "%s_integral_%s" % (integral_type, str(form_id).lower()) return make_classname(prefix, basename, subdomain_id) # Mapping of restrictions _fixed_map = {None: "", "+": "_0", "-": "_1"} _choose_map = lambda r: _fixed_map[r] if r in _fixed_map else "_%s" % str(r) # FIXME: MSA: Using a dict to collect functions in a namespace is weird # and makes the code harder to follow, change to a class # with member functions instead! # FIXME: KBO: format is a builtin_function, i.e., we should use a different name. # Formatting rules format = {} # Program flow format.update({ "return": lambda v: "return %s;" % str(v), "grouping": lambda v: "(%s)" % v, "block": lambda v: "{%s}" % v, "block begin": "{", "block end": "}", "list": lambda v: format["block"](format["list separator"].join([str(l) for l in v])), "switch": lambda v, cases, default=None, numbers=None: _generate_switch(v, cases, default, numbers), "exception": lambda v: "throw std::runtime_error(\"%s\");" % v, "warning": lambda v: 'std::cerr << "*** FFC warning: " << "%s" << std::endl;' % v, "comment": lambda v: "// %s" % v, "if": lambda c, v: "if (%s)\n{\n%s\n}\n" % (c, v), "loop": lambda i, j, k: "for (unsigned int %s = %s; %s < %s; %s++)" % (i, j, i, k, i), "generate loop": lambda v, w, _indent=0: _generate_loop(v, w, _indent), "is equal": " == ", "not equal": " != ", "less than": " < ", "greater than": " > ", "less equal": " <= ", "greater equal": " >= ", "and": " && ", "or": " || ", "not": lambda v: "!(%s)" % v, "do nothing": "// Do nothing" }) # Declarations format.update({ "declaration": lambda t, n, v=None: _declaration(t, n, v), "float declaration": "double", "int declaration": "int", "uint declaration": "unsigned int", "static const uint declaration": "static const unsigned int", "static const float declaration": "static const double", "vector table declaration": "std::vector< std::vector >", "double array declaration": "double*", "const double array declaration": "const double*", "const float declaration": lambda v, w: "const double %s = %s;" % (v, w), "const uint declaration": lambda v, w: "const unsigned int %s = %s;" % (v, w), "dynamic array": lambda t, n, s: "%s *%s = new %s[%s];" % (t, n, t, s), "static array": lambda t, n, s: "static %s %s[%d];" % (t, n, s), "fixed array": lambda t, n, s: "%s %s[%d];" % (t, n, s), "delete dynamic array": lambda n, s=None: _delete_array(n, s), "create foo": lambda v: "new %s()" % v, "create factory": lambda v: "create_%s()" % v }) # Mathematical operators format.update({ "add": lambda v: " + ".join(v), "iadd": lambda v, w: "%s += %s;" % (str(v), str(w)), "sub": lambda v: " - ".join(v), "neg": lambda v: "-%s" % v, "mul": lambda v: "*".join(v), "imul": lambda v, 
w: "%s *= %s;" % (str(v), str(w)), "div": lambda v, w: "%s/%s" % (str(v), str(w)), "inverse": lambda v: "(1.0/%s)" % v, "std power": lambda base, exp: "std::pow(%s, %s)" % (base, exp), "exp": lambda v: "std::exp(%s)" % str(v), "ln": lambda v: "std::log(%s)" % str(v), "cos": lambda v: "std::cos(%s)" % str(v), "sin": lambda v: "std::sin(%s)" % str(v), "tan": lambda v: "std::tan(%s)" % str(v), "cosh": lambda v: "std::cosh(%s)" % str(v), "sinh": lambda v: "std::sinh(%s)" % str(v), "tanh": lambda v: "std::tanh(%s)" % str(v), "acos": lambda v: "std::acos(%s)" % str(v), "asin": lambda v: "std::asin(%s)" % str(v), "atan": lambda v: "std::atan(%s)" % str(v), "atan_2": lambda v1, v2: "std::atan2(%s,%s)" % (str(v1), str(v2)), "erf": lambda v: "erf(%s)" % str(v), "bessel_i": lambda v, n: "boost::math::cyl_bessel_i(%s, %s)" % (str(n), str(v)), "bessel_j": lambda v, n: "boost::math::cyl_bessel_j(%s, %s)" % (str(n), str(v)), "bessel_k": lambda v, n: "boost::math::cyl_bessel_k(%s, %s)" % (str(n), str(v)), "bessel_y": lambda v, n: "boost::math::cyl_neumann(%s, %s)" % (str(n), str(v)), "absolute value": lambda v: "std::abs(%s)" % str(v), "min value": lambda l, r: "std::min(%s, %s)" % (str(l), str(r)), "max value": lambda l, r: "std::max(%s, %s)" % (str(l), str(r)), "sqrt": lambda v: "std::sqrt(%s)" % str(v), "addition": lambda v: _add(v), "multiply": lambda v: _multiply(v), "power": lambda base, exp: _power(base, exp), "inner product": lambda v, w: _inner_product(v, w), "assign": lambda v, w: "%s = %s;" % (v, str(w)), "component": lambda v, k: _component(v, k) }) # Formatting used in tabulate_tensor format.update({ "geometry tensor": lambda j, a: "G%d_%s" % (j, "_".join(["%d" % i for i in a])) }) # Geometry related variable names (from code snippets). format.update({ "entity index": "entity_indices", "num entities": "num_global_entities", "cell": lambda s: "ufc::shape::%s" % s, "J": lambda i, j, m, n: "J[%d]" % _flatten(i, j, m, n), "inv(J)": lambda i, j, m, n: "K[%d]" % _flatten(i, j, m, n), "det(J)": lambda r=None: "detJ%s" % _choose_map(r), "cell volume": lambda r=None: "volume%s" % _choose_map(r), "circumradius": lambda r=None: "circumradius%s" % _choose_map(r), "facet area": "facet_area", "min facet edge length": lambda r: "min_facet_edge_length", "max facet edge length": lambda r: "max_facet_edge_length", "scale factor": "det", "transform": lambda t, i, j, m, n, r: _transform(t, i, j, m, n, r), "normal component": lambda r, j: "n%s%s" % (_choose_map(r), j), "x coordinate": "X", "y coordinate": "Y", "z coordinate": "Z", "ip coordinates": lambda i, j: "X%d[%d]" % (i, j), "affine map table": lambda i, j: "FEA%d_f%d" % (i, j), "coordinate_dofs": lambda r=None: "coordinate_dofs%s" % _choose_map(r) }) # UFC function arguments and class members (names) format.update({ "element tensor": lambda i: "A[%s]" % i, "element tensor term": lambda i, j: "A%d[%s]" % (j, i), "coefficient": lambda j, k: format["component"]("w", [j, k]), "argument basis num": "i", "argument derivative order": "n", "argument values": "values", "argument coordinates": "dof_coordinates", "facet": lambda r: "facet%s" % _choose_map(r), "vertex": "vertex", "argument axis": "i", "argument dimension": "d", "argument entity": "i", "member global dimension": "_global_dimension", "argument dofs": "dofs", "argument dof num": "i", "argument dof values": "dof_values", "argument vertex values": "vertex_values", "argument sub": "i", # sub element "argument subdomain": "subdomain_id", # sub domain }) # Formatting used in evaluatedof. 
format.update({ "dof vals": "vals", "dof result": "result", "dof X": lambda i: "X_%d" % i, "dof D": lambda i: "D_%d" % i, "dof W": lambda i: "W_%d" % i, "dof copy": lambda i: "copy_%d" % i, "dof physical coordinates": "y" }) # Formatting used in evaluate_basis, evaluate_basis_derivatives and quadrature # code generators. format.update({ # evaluate_basis and evaluate_basis_derivatives "tmp value": lambda i: "tmp%d" % i, "tmp ref value": lambda i: "tmp_ref%d" % i, "local dof": "dof", "basisvalues": "basisvalues", "coefficients": lambda i: "coefficients%d" % (i), "num derivatives": lambda t_or_g: "num_derivatives" + t_or_g, "derivative combinations": lambda t_or_g: "combinations" + t_or_g, "transform matrix": "transform", "transform Jinv": "Jinv", "dmats": lambda i: "dmats%s" % (i), "dmats old": "dmats_old", "reference derivatives": "derivatives", "dof values": "dof_values", "dof map if": lambda i, j: "%d <= %s && %s <= %d"\ % (i, format["argument basis num"], format["argument basis num"], j), "dereference pointer": lambda n: "*%s" % n, "reference variable": lambda n: "&%s" % n, "call basis": lambda i, s: "evaluate_basis(%s, %s, x, coordinate_dofs, cell_orientation);" % (i, s), "call basis_all": "evaluate_basis_all(values, x, coordinate_dofs, cell_orientation);", "call basis_derivatives": lambda i, s: "evaluate_basis_derivatives(%s, n, %s, x, coordinate_dofs, cell_orientation);" % (i, s), "call basis_derivatives_all": lambda i, s: "evaluate_basis_derivatives_all(n, %s, x, coordinate_dofs, cell_orientation);" % s, # quadrature code generators "integration points": "ip", "first free index": "j", "second free index": "k", "geometry constant": lambda i: "G[%d]" % i, "ip constant": lambda i: "I[%d]" % i, "basis constant": lambda i: "B[%d]" % i, "conditional": lambda i: "C[%d]" % i, "evaluate conditional": lambda i, j, k: "(%s) ? 
%s : %s" % (i, j, k), # "geometry constant": lambda i: "G%d" % i, # "ip constant": lambda i: "I%d" % i, # "basis constant": lambda i: "B%d" % i, "function value": lambda i: "F%d" % i, "nonzero columns": lambda i: "nzc%d" % i, "weight": lambda i: "W" if i is None else "W%d" % (i), "psi name": lambda c, et, e, co, d, a: _generate_psi_name(c, et, e, co, d, a), # both "free indices": ["r", "s", "t", "u"], "matrix index": lambda i, j, range_j: _matrix_index(i, str(j), str(range_j)), "quadrature point": lambda i, gdim: "quadrature_points + %s*%d" % (i, gdim), "facet_normal_custom": lambda gdim: _generate_facet_normal_custom(gdim), }) # Misc format.update({ "bool": lambda v: {True: "true", False: "false"}[v], "str": lambda v: "%s" % v, "int": lambda v: "%d" % v, "list separator": ", ", "block separator": ",\n", "new line": "\\\n", "tabulate tensor": lambda m: _tabulate_tensor(m), }) # Code snippets from ffc.quadrature.codesnippets import * format.update({ "compute_jacobian": lambda tdim, gdim, r=None: compute_jacobian[tdim][gdim] % {"restriction": _choose_map(r)}, "compute_jacobian_inverse": lambda tdim, gdim, r=None: compute_jacobian_inverse[tdim][gdim] % {"restriction": _choose_map(r)}, "orientation": lambda tdim, gdim, r=None: orientation_snippet % {"restriction": _choose_map(r)} if tdim != gdim else "", "facet determinant": lambda tdim, gdim, r=None: facet_determinant[tdim][gdim] % {"restriction": _choose_map(r)}, "fiat coordinate map": lambda cell, gdim: fiat_coordinate_map[cell][gdim], "generate normal": lambda tdim, gdim, i: _generate_normal(tdim, gdim, i), "generate cell volume": lambda tdim, gdim, i, r=None: _generate_cell_volume(tdim, gdim, i, r), "generate circumradius": lambda tdim, gdim, i, r=None: _generate_circumradius(tdim, gdim, i, r), "generate facet area": lambda tdim, gdim: facet_area[tdim][gdim], "generate min facet edge length": lambda tdim, gdim, r=None: min_facet_edge_length[tdim][gdim] % {"restriction": _choose_map(r)}, "generate max facet edge length": lambda tdim, gdim, r=None: max_facet_edge_length[tdim][gdim] % {"restriction": _choose_map(r)}, "generate ip coordinates": lambda g, t, num_ip, name, ip, r=None: (ip_coordinates[t][g][0], ip_coordinates[t][g][1] % {"restriction": _choose_map(r), "ip": ip, "name": name, "num_ip": num_ip}), "scale factor snippet": scale_factor, "map onto physical": map_onto_physical, "evaluate basis snippet": eval_basis, "combinations": combinations_snippet, "transform snippet": transform_snippet, "evaluate function": evaluate_f, "ufc comment": comment_ufc, "dolfin comment": comment_dolfin, "header_h": header_h, "header_c": header_c, "footer": footer, "eval_basis_decl": eval_basis_decl, "eval_basis_init": eval_basis_init, "eval_basis": eval_basis, "eval_basis_copy": eval_basis_copy, "eval_derivs_decl": eval_derivs_decl, "eval_derivs_init": eval_derivs_init, "eval_derivs": eval_derivs, "eval_derivs_copy": eval_derivs_copy, "extract_cell_coordinates": lambda offset, r: "const double* coordinate_dofs_%d = coordinate_dofs + %d;" % (r, offset) }) # Helper functions for formatting def _declaration(type, name, value=None): if value is None: return "%s %s;" % (type, name) return "%s %s = %s;" % (type, name, str(value)) def _component(var, k): if not isinstance(k, (list, tuple)): k = [k] return "%s" % var + "".join("[%s]" % str(i) for i in k) def _delete_array(name, size=None): if size is None: return "delete [] %s;" % name f_r = format["free indices"][0] code = format["generate loop"](["delete [] %s;" % format["component"](name, f_r)], [(f_r, 0, 
size)]) code.append("delete [] %s;" % name) return "\n".join(code) def _multiply(factors): """ Generate string multiplying a list of numbers or strings. If a factor is zero, the whole product is zero. Any factors equal to one are ignored. """ # FIXME: This could probably be way more robust and elegant. cpp_str = format["str"] non_zero_factors = [] for f in factors: # Round-off if f is smaller than epsilon if isinstance(f, (int, float)): if abs(f) < format["epsilon"]: return cpp_str(0) if abs(f - 1.0) < format["epsilon"]: continue # Convert to string f = cpp_str(f) # Return zero if any factor is zero if f == "0": return cpp_str(0) # If f is 1, don't add it to list of factors if f == "1": continue # If sum-like, parentheseze factor if "+" in f or "-" in f: f = "(%s)" % f non_zero_factors += [f] if len(non_zero_factors) == 0: return cpp_str(1.0) return "*".join(non_zero_factors) def _add(terms): "Generate string summing a list of strings." # FIXME: Subtract absolute value of negative numbers result = " + ".join([str(t) for t in terms if (str(t) != "0")]) if result == "": return format["str"](0) return result def _power(base, exponent): "Generate code for base^exponent." if exponent >= 0: return _multiply(exponent * (base,)) else: return "1.0 / (%s)" % _power(base, -exponent) def _inner_product(v, w): "Generate string for v[0]*w[0] + ... + v[n]*w[n]." # Check that v and w have same length assert(len(v) == len(w)), "Sizes differ in inner-product!" # Special case, zero terms if len(v) == 0: return format["float"](0) # Straightforward handling when we only have strings if isinstance(v[0], str): return _add([_multiply([v[i], w[i]]) for i in range(len(v))]) # Fancy handling of negative numbers etc result = None eps = format["epsilon"] add = format["add"] sub = format["sub"] neg = format["neg"] mul = format["mul"] fl = format["float"] for (c, x) in zip(v, w): if result: if abs(c - 1.0) < eps: result = add([result, x]) elif abs(c + 1.0) < eps: result = sub([result, x]) elif c > eps: result = add([result, mul([fl(c), x])]) elif c < -eps: result = sub([result, mul([fl(-c), x])]) else: if abs(c - 1.0) < eps: result = x elif abs(c + 1.0) < eps: result = neg(x) elif c > eps: result = mul([fl(c), x]) elif c < -eps: result = neg(mul([fl(-c), x])) return result def _transform(type, i, j, m, n, r): map_name = {"J": "J", "JINV": "K"}[type] + _choose_map(r) return (map_name + "[%d]") % _flatten(i, j, m, n) # FIXME: Input to _generate_switch should be a list of tuples (i, case) def _generate_switch(variable, cases, default=None, numbers=None): "Generate switch statement from given variable and cases" # Special case: no cases and no default if len(cases) == 0 and default is None: return format["do nothing"] elif len(cases) == 0: return default # Special case: one case and no default if len(cases) == 1 and default is None: return cases[0] # Create numbers for switch if numbers is None: numbers = list(range(len(cases))) # Create switch code = "switch (%s)\n{\n" % variable for (i, case) in enumerate(cases): code += "case %d:\n {\n %s\n break;\n }\n" % (numbers[i], indent(case, 2)) code += "}\n" # Default value if default: code += "\n" + default return code def _tabulate_tensor(vals): "Tabulate a multidimensional tensor. (Replace tabulate_matrix and tabulate_vector)." # Prefetch formats to speed up code generation f_block = format["block"] f_list_sep = format["list separator"] f_block_sep = format["block separator"] # FIXME: KBO: Change this to "float" once issue in set_float_formatting is fixed. 
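# Illustrative examples (assuming set_float_formatting() has been called so
# that format["floating point"] and format["epsilon"] are defined):
#   _tabulate_tensor([1.0, 0.0, 2.5])            -> "{1.0, 0.0, 2.5}"
#   _tabulate_tensor([[1.0, 0.0], [0.0, 2.0]])   -> "{{1.0, 0.0},\n{0.0, 2.0}}"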
f_float = format["floating point"] f_epsilon = format["epsilon"] # Create numpy array and get shape. tensor = numpy.array(vals) shape = numpy.shape(tensor) if len(shape) == 1: # Create zeros if value is smaller than tolerance. values = [] for v in tensor: if abs(v) < f_epsilon: values.append(f_float(0.0)) else: values.append(f_float(v)) # Format values. return f_block(f_list_sep.join(values)) elif len(shape) > 1: return f_block(f_block_sep.join([_tabulate_tensor(tensor[i]) for i in range(shape[0])])) else: error("Not an N-dimensional array:\n%s" % tensor) def _generate_loop(lines, loop_vars, _indent): "This function generates a loop over a vector or matrix." # Prefetch formats to speed up code generation. f_loop = format["loop"] f_begin = format["block begin"] f_end = format["block end"] f_comment = format["comment"] if not loop_vars: return lines code = [] for ls in loop_vars: # Get index and lower and upper bounds. index, lower, upper = ls # Loop index. code.append(indent(f_loop(index, lower, upper), _indent)) code.append(indent(f_begin, _indent)) # Increase indentation. _indent += 2 # If this is the last loop, write values. if index == loop_vars[-1][0]: for l in lines: code.append(indent(l, _indent)) # Decrease indentation and write end blocks. indices = [var[0] for var in loop_vars] indices.reverse() for index in indices: _indent -= 2 code.append(indent(f_end + " " + f_comment("end loop over '%s'" % index), _indent)) return code def _matrix_index(i, j, range_j): "Map the indices in a matrix to an index in an array i.e., m[i][j] -> a[i*range(j)+j]" if i == 0: access = j elif i == 1: access = format["add"]([range_j, j]) else: irj = format["mul"]([format["str"](i), range_j]) access = format["add"]([irj, j]) return access def _generate_psi_name(counter, entity_type, entity, component, derivatives, avg): """Generate a name for the psi table of the form: FE#_f#_v#_C#_D###_A#, where '#' will be an integer value. FE - is a simple counter to distinguish the various bases, it will be assigned in an arbitrary fashion. f - denotes facets if applicable, range(element.num_facets()). v - denotes vertices if applicable, range(num_vertices). C - is the component number if any (flattened in the case of tensor valued functions) D - is the number of derivatives in each spatial direction if any. If the element is defined in 3D, then D012 means d^3(*)/dydz^2. 
A - denotes averaged over cell (AC) or facet (AF) """ name = "FE%d" % counter if entity_type == "facet": name += "_f%d" % entity elif entity_type == "vertex": name += "_v%d" % entity if component != () and component != []: name += "_C%d" % component if any(derivatives): name += "_D" + "".join(map(str, derivatives)) if avg == "cell": name += "_AC" elif avg == "facet": name += "_AF" return name def _generate_normal(tdim, gdim, integral_type, reference_normal=False): "Generate code for computing normal" # Choose snippets direction = normal_direction[tdim][gdim] assert (gdim in facet_normal[tdim]),\ "Facet normal not yet implemented for this tdim/gdim combo" normal = facet_normal[tdim][gdim] # Choose restrictions if integral_type == "exterior_facet": code = direction % {"restriction": "", "facet": "facet"} code += normal % {"direction": "", "restriction": ""} elif integral_type == "interior_facet": code = direction % {"restriction": _choose_map("+"), "facet": "facet_0"} code += normal % {"direction": "", "restriction": _choose_map("+")} code += normal % {"direction": "!", "restriction": _choose_map("-")} else: error("Unsupported integral_type: %s" % str(integral_type)) return code def _generate_facet_normal_custom(gdim): "Generate code for setting facet normal in custom integrals" code = format["comment"]("Set facet normal components for current quadrature point\n") for i in range(gdim): code += "const double n_0%d = facet_normals[%d*ip + %d];\n" % (i, gdim, i) code += "const double n_1%d = - facet_normals[%d*ip + %d];\n" % (i, gdim, i) return code def _generate_cell_volume(tdim, gdim, integral_type, r=None): "Generate code for computing cell volume." # Choose snippets volume = cell_volume[tdim][gdim] # Choose restrictions if integral_type in ("cell", "exterior_facet"): code = volume % {"restriction": ""} elif integral_type == "interior_facet": code = volume % {"restriction": _choose_map("+")} code += volume % {"restriction": _choose_map("-")} elif integral_type in custom_integral_types: code = volume % {"restriction": _choose_map(r)} else: error("Unsupported integral_type: %s" % str(integral_type)) return code def _generate_circumradius(tdim, gdim, integral_type, r=None): "Generate code for computing a cell's circumradius." # Choose snippets radius = circumradius[tdim][gdim] # Choose restrictions if integral_type in ("cell", "exterior_facet", "vertex"): code = radius % {"restriction": ""} elif integral_type == "interior_facet": code = radius % {"restriction": _choose_map("+")} code += radius % {"restriction": _choose_map("-")} elif integral_type in custom_integral_types: code = radius % {"restriction": _choose_map(r)} else: error("Unsupported integral_type: %s" % str(integral_type)) return code def _flatten(i, j, m, n): return i * n + j # Other functions def indent(block, num_spaces): "Indent each row of the given string block with n spaces." indentation = " " * num_spaces return indentation + ("\n" + indentation).join(block.split("\n")) def count_ops(code): "Count the number of operations in code (multiply-add pairs)." num_add = code.count(" + ") + code.count(" - ") num_multiply = code.count("*") + code.count("/") return (num_add + num_multiply) // 2 def set_float_formatting(precision): "Set floating point formatting based on precision." 
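# For example (illustrative, using the regular non-Windows formatter and
# precision=3): format["epsilon"] becomes 1e-2, format["float"](0.30000004)
# gives "0.3" (snapped to the nearby round value), format["float"](1.0/3.0)
# gives "0.333", and format["float"](12345.678) gives "1.23e+04".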
# Set default if not set if precision is None: precision = default_precision # Options for float formatting # f1 = "%%.%df" % precision # f2 = "%%.%de" % precision f1 = "%%.%dg" % precision f2 = "%%.%dg" % precision f_int = "%%.%df" % 1 eps = eval("1e-%s" % precision) # Regular float formatting def floating_point_regular(v): if abs(v - round(v, 1)) < eps: return f_int % v elif abs(v) < 100.0: return f1 % v else: return f2 % v # Special float formatting on Windows (remove extra leading zero) def floating_point_windows(v): return floating_point_regular(v).replace("e-0", "e-").replace("e+0", "e+") # Set float formatting if platform.system() == "Windows": format["float"] = floating_point_windows else: format["float"] = floating_point_regular # FIXME: KBO: Remove once we agree on the format of 'f1' format["floating point"] = format["float"] # Set machine precision format["epsilon"] = 10.0 * eval("1e-%s" % precision) # Hack to propagate precision to uflacs internals... #import ffc.uflacs.language.format_value #ffc.uflacs.language.format_value.set_float_precision(precision) def set_exception_handling(convert_exceptions_to_warnings): "Set handling of exceptions." if convert_exceptions_to_warnings: format["exception"] = format["warning"] # Declarations to examine types = [["double"], ["const", "double"], ["const", "double", "*", "const", "*"], ["int"], ["const", "int"], ["unsigned", "int"], ["bool"], ["const", "bool"], ["static", "unsigned", "int"], ["const", "unsigned", "int"]] # Special characters and delimiters special_characters = ["+", "-", "*", "/", "=", ".", " ", ";", "(", ")", "\\", "{", "}", "[", "]", "!"] def remove_unused(code, used_set=set()): """ Remove unused variables from a given C++ code. This is useful when generating code that will be compiled with gcc and parameters -Wall -Werror, in which case gcc returns an error when seeing a variable declaration for a variable that is never used. Optionally, a set may be specified to indicate a set of variables names that are known to be used a priori. """ # Dictionary of (declaration_line, used_lines) for variables variables = {} # List of variable names (so we can search them in order) variable_names = [] lines = code.split("\n") for (line_number, line) in enumerate(lines): # Exclude commented lines. if line[:2] == "//" or line[:3] == "///": continue # Split words words = [word for word in line.split(" ") if word != ""] # Remember line where variable is declared for type in [type for type in types if " ".join(type) in " ".join(words)]: # Fewer matches than line below. # for type in [type for type in types if len(words) > len(type)]: variable_type = words[0:len(type)] variable_name = words[len(type)] # Skip special characters if variable_name in special_characters: continue # Test if any of the special characters are present in the variable name # If this is the case, then remove these by assuming that the 'real' name # is the first entry in the return list. This is implemented to prevent # removal of e.g. 'double array[6]' if it is later used in a loop as 'array[i]' if variable_type == type: # Create correct variable name (e.g. 
y instead of # y[2]) for variables with separators seps_present = [sep for sep in special_characters if sep in variable_name] if seps_present: variable_name = sorted([variable_name.split(sep)[0] for sep in seps_present]) variable_name = variable_name[0] variables[variable_name] = (line_number, []) if variable_name not in variable_names: variable_names += [variable_name] # Mark line for used variables for variable_name in variables: (declaration_line, used_lines) = variables[variable_name] if _variable_in_line(variable_name, line) and line_number > declaration_line: variables[variable_name] = (declaration_line, used_lines + [line_number]) # Reverse the order of the variable names to catch variables used # only by variables that are removed variable_names.reverse() # Remove declarations that are not used removed_lines = [] for variable_name in variable_names: (declaration_line, used_lines) = variables[variable_name] for line in removed_lines: if line in used_lines: used_lines.remove(line) if not used_lines and variable_name not in used_set: debug("Removing unused variable: %s" % variable_name) lines[declaration_line] = None # KBO: Need to completely remove line for evaluate_basis* to work # lines[declaration_line] = "// " + lines[declaration_line] removed_lines += [declaration_line] return "\n".join([line for line in lines if line is not None]) def _variable_in_line(variable_name, line): "Check if variable name is used in line" if variable_name not in line: return False for character in special_characters: line = line.replace(character, "\\" + character) delimiter = "[" + ",".join(["\\" + c for c in special_characters]) + "]" return re.search(delimiter + variable_name + delimiter, line) is not None ffc-2019.1.0.post0/ffc/quadrature/deprecation.py000066400000000000000000000016441346026536000213420ustar00rootroot00000000000000# -*- coding: utf-8 -*- class QuadratureRepresentationDeprecationWarning(DeprecationWarning): """Warning about deprecation of quadrature representation""" def __init__(self): msg = """ *** ===================================================== *** *** FFC: quadrature representation is deprecated! It will *** *** likely be removed in 2018.2.0 release. Use uflacs *** *** representation instead. *** *** ===================================================== ***""" super(QuadratureRepresentationDeprecationWarning, self).__init__(msg) # Be very annoying - never ignore the warning import warnings warnings.simplefilter('always', QuadratureRepresentationDeprecationWarning) del warnings def issue_deprecation_warning(): """Issue warning about quadrature deprecation""" import warnings warnings.warn(QuadratureRepresentationDeprecationWarning(), stacklevel=2) ffc-2019.1.0.post0/ffc/quadrature/expr.py000066400000000000000000000113241346026536000200170ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a base class to represent an expression." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-08-08 # Last changed: 2010-01-21 # FFC quadrature modules. from .symbolics import create_float class Expr(object): __slots__ = ("val", "t", "_prec", "_repr", "_hash") def __init__(self): """An Expr object contains: val - float, holds value of object. t - Type (int), one of CONST, GEO, IP, BASIS. _prec - int, precedence which is used for comparison and comparing classes. _repr - str, string value of __repr__(), we only compute it once. _hash - int, hash value of __hash__(), we only compute it once. The constructor is empty, so initialisation of variables are left to child classes.""" pass # Representation of the expression. def __repr__(self): "Representation of the expression for comparison and debugging." return self._repr # Hash. def __hash__(self): "Hash (for lookup in {})." return self._hash # Comparison. def __eq__(self, other): "==, True if representations are equal." if isinstance(other, Expr): return self._repr == other._repr return False def __ne__(self, other): "!=, True if representations are not equal." if isinstance(other, Expr): return self._repr != other._repr return True def __lt__(self, other): """<, compare precedence and _repr if two objects have the same precedence.""" if not isinstance(other, Expr): return False if self._prec < other._prec: return True elif self._prec == other._prec: return self._repr < other._repr return False def __gt__(self, other): ">, opposite of __lt__." if not isinstance(other, Expr): return True if self._prec > other._prec: return True elif self._prec == other._prec: return self._repr > other._repr return False # Public functions (for FloatValue, other classes should overload # as needed) def expand(self): """Expand the expression. (FloatValue and Symbol are expanded by construction).""" # Nothing to be done. return self def get_unique_vars(self, var_type): "Get unique variables (Symbols) as a set." # A float is not a variable. return set() def get_var_occurrences(self): """Determine the number of times all variables occurs in the expression. Returns a dictionary of variables and the number of times they occur. Works for FloatValue and Symbol. """ # There is only one float value (if it is not -1 or 1). if self.val == 1.0 or self.val == -1.0: return {} return {self: 1} def ops(self): """Return number of operations to compute the expression. This is always zero for a FloatValue.""" # Just return 0. # NOTE: This means that minus in e.g., -2 and -2*x is not counted. return 0 def reduce_ops(self): """Reduce number of operations to evaluate the expression. There is nothing to be done for FloatValue and Symbol. """ # Nothing to be done. return self def reduce_var(self, var): """Reduce the expression by another variable by using division. This works for FloatValue, Symbol and Product. """ return self/var def reduce_vartype(self, var_type): """Reduce expression with given var_type. It returns a tuple (found, remain), where 'found' is an expression that only has variables of type == var_type. If no variables are found, found=(). The 'remain' part contains the leftover after division by 'found' such that: self = found*remain. Works for FloatValue and Symbol. 
""" if self.t == var_type: return [(self, create_float(1))] return [((), self)] ffc-2019.1.0.post0/ffc/quadrature/floatvalue.py000066400000000000000000000133361346026536000212100ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a class to represent a float." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-07-12 # Last changed: 2010-02-09 # FFC modules. from ffc.log import error from ffc.quadrature.cpp import format # FFC quadrature modules. from .symbolics import CONST from .symbolics import create_float from .symbolics import create_product from .symbolics import create_sum from .symbolics import create_fraction from .expr import Expr class FloatValue(Expr): def __init__(self, value): """Initialise a FloatValue object, it derives from Expr and contains no additional variables. NOTE: self._prec = 0. """ # Initialise value, type and class. self.val = float(value) self.t = CONST self._prec = 0 # Handle 0.0, 1.0 and -1.0 values explicitly. EPS = format["epsilon"] if abs(value) < EPS: self.val = 0.0 elif abs(value - 1.0) < EPS: self.val = 1.0 elif abs(value + 1.0) < EPS: self.val = -1.0 # Compute the representation now, such that we can use it # directly in the __eq__ and __ne__ methods (improves # performance a bit, but only when objects are cached). self._repr = "FloatValue(%s)" % format["float"](self.val) # Use repr as hash value self._hash = hash(self._repr) # Print function. def __str__(self): "Simple string representation which will appear in the generated code." return format["float"](self.val) # Binary operators. def __add__(self, other): "Addition by other objects." # NOTE: We expect expanded objects here. # This is only well-defined if other is a float or if self.val == 0. if other._prec == 0: # float return create_float(self.val + other.val) elif self.val == 0.0: return other # Return a new sum return create_sum([self, other]) def __sub__(self, other): "Subtract other objects." # NOTE: We expect expanded objects here. if other._prec == 0: # float return create_float(self.val - other.val) # Multiply other by -1 elif self.val == 0.0: return create_product([create_float(-1), other]) # Return a new sum where other is multiplied by -1 return create_sum([self, create_product([create_float(-1), other])]) def __mul__(self, other): "Multiplication by other objects." # NOTE: We expect expanded objects here i.e., # Product([FloatValue]) # should not be present. # Only handle case where other is a float, else let the other # object handle the multiplication. if other._prec == 0: # float return create_float(self.val * other.val) return other.__mul__(self) def __truediv__(self, other): "Division by other objects." # If division is illegal (this should definitely not happen). if other.val == 0.0: error("Division by zero") # TODO: Should we also support division by fraction for # generality? # It should not be needed by this module. 
if other._prec == 4: # frac error("Did not expected to divide by fraction") # If fraction will be zero. if self.val == 0.0: return self # NOTE: We expect expanded objects here i.e., # Product([FloatValue]) # should not be present. # Handle types appropriately. if other._prec == 0: # float return create_float(self.val / other.val) # If other is a symbol, return a simple fraction. elif other._prec == 1: # sym return create_fraction(self, other) # Don't handle division by sum. elif other._prec == 3: # sum # TODO: Here we could do: 4 / (2*x + 4*y) -> 2/(x + 2*y). return create_fraction(self, other) # If other is a product, remove any float value to avoid 4 / # (2*x), this will return 2/x. val = 1.0 for v in other.vrs: if v._prec == 0: # float val *= v.val # If we had any floats, create new numerator and only use # 'real' variables from the product in the denominator. if val != 1.0: # Check if we need to create a new denominator. # TODO: Just use other.vrs[1:] instead. if len(other.get_vrs()) > 1: return create_fraction(create_float(self.val / val), create_product(other.get_vrs())) # TODO: Because we expect all products to be expanded we # shouldn't need to check for this case, just use # other.vrs[1]. elif len(other.get_vrs()) == 1: return create_fraction(create_float(self.val / val), other.vrs[1]) error("No variables left in denominator") # Nothing left to do. return create_fraction(self, other) __div__ = __truediv__ ffc-2019.1.0.post0/ffc/quadrature/fraction.py000066400000000000000000000346701346026536000206570ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a class to represent a fraction." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-07-12 # Last changed: 2010-02-09 # FFC modules. from ffc.log import error from ffc.quadrature.cpp import format # FFC quadrature modules. from .symbolics import create_float from .symbolics import create_product from .symbolics import create_sum from .symbolics import create_fraction from .expr import Expr # FFC quadrature modules. from .floatvalue import FloatValue class Fraction(Expr): __slots__ = ("num", "denom", "_expanded", "_reduced") def __init__(self, numerator, denominator): """Initialise a Fraction object, it derives from Expr and contains the additional variables: num - expr, the numerator. denom - expr, the denominator. _expanded - object, an expanded object of self, e.g., self = 'x*y/x'-> self._expanded = y (a symbol). _reduced - object, a reduced object of self, e.g., self = '(2*x + x*y)/z'-> self._reduced = x*(2 + y)/z (a fraction). NOTE: self._prec = 4.""" # Check for illegal division. if denominator.val == 0.0: error("Division by zero.") # Initialise all variables. self.val = numerator.val self.t = min([numerator.t, denominator.t]) self.num = numerator self.denom = denominator self._prec = 4 self._expanded = False self._reduced = False # Only try to eliminate scalar values. 
# TODO: If we divide by a float, we could add the inverse to # the numerator as a product, but I don't know if this is # efficient since it will involve creating a new object. if denominator._prec == 0 and numerator._prec == 0: # float self.num = create_float(numerator.val / denominator.val) # Remove denominator, such that it will be excluded when # printing. self.denom = None # Handle zero. if self.val == 0.0: # Remove denominator, such that it will be excluded when # printing self.denom = None # Compute the representation now, such that we can use it # directly in the __eq__ and __ne__ methods (improves # performance a bit, but only when objects are cached). if self.denom: self._repr = "Fraction(%s, %s)" % (self.num._repr, self.denom._repr) else: self._repr = "Fraction(%s, %s)" % (self.num._repr, create_float(1)._repr) # Use repr as hash value. self._hash = hash(self._repr) # Print functions. def __str__(self): "Simple string representation which will appear in the generated code." if not self.denom: return str(self.num) # Get string for numerator and denominator. num = str(self.num) denom = str(self.denom) # Group numerator if it is a fraction, otherwise it should be # handled already. if self.num._prec == 4: # frac num = format["grouping"](num) # Group denominator if it is a fraction or product, or if the # value is negative. # NOTE: This will be removed by the optimisations later before # writing any code. if self.denom._prec in (2, 4) or self.denom.val < 0.0: # prod or frac denom = format["grouping"](denom) return format["div"](num, denom) # Binary operators. def __add__(self, other): "Addition by other objects." # Add two fractions if their denominators are equal by creating # (expanded) sum of their numerators. if other._prec == 4 and self.denom == other.denom: # frac return create_fraction(create_sum([self.num, other.num]).expand(), self.denom) return create_sum([self, other]) def __sub__(self, other): "Subtract other objects." # Return a new sum if other._prec == 4 and self.denom == other.denom: # frac num = create_sum([self.num, create_product([FloatValue(-1), other.num])]).expand() return create_fraction(num, self.denom) return create_sum([self, create_product([FloatValue(-1), other])]) def __mul__(self, other): "Multiplication by other objects." # NOTE: assuming that we get expanded variables. # If product will be zero. if self.val == 0.0 or other.val == 0.0: return create_float(0) # Create new expanded numerator and denominator and use '/' to reduce. if other._prec != 4: # frac return (self.num * other) / self.denom # If we have a fraction, create new numerator and denominator and use # '/' to reduce expression. return create_product([self.num, other.num]).expand() / create_product([self.denom, other.denom]).expand() def __truediv__(self, other): "Division by other objects." # If division is illegal (this should definitely not happen). if other.val == 0.0: error("Division by zero.") # If fraction will be zero. if self.val == 0.0: return self.vrs[0] # The only thing that we shouldn't need to handle is division by other # Fractions if other._prec == 4: error("Did not expected to divide by fraction.") # Handle division by FloatValue, Symbol, Product and Sum in the same # way i.e., multiply other by the donominator and use division # (__div__ or other) in order to (try to) reduce the expression. # TODO: Is it better to first try to divide the numerator by other, # if a Fraction is the return value, then multiply the denominator of # that value by denominator of self. 
Otherwise the reduction was # successful and we just use the denom of self as denominator. return self.num / (other * self.denom) __div__ = __truediv__ # Public functions. def expand(self): "Expand the fraction expression." # If fraction is already expanded, simply return the expansion. if self._expanded: return self._expanded # If we don't have a denominator just return expansion of numerator. if not self.denom: return self.num.expand() # Expand numerator and denominator. num = self.num.expand() denom = self.denom.expand() # TODO: Is it too expensive to call expand in the below? # If both the numerator and denominator are fractions, create new # numerator and denominator and use division to possibly reduce the # expression. if num._prec == 4 and denom._prec == 4: # frac new_num = create_product([num.num, denom.denom]).expand() new_denom = create_product([num.denom, denom.num]).expand() self._expanded = new_num / new_denom # If the numerator is a fraction, multiply denominators and use # division to reduce expression. elif num._prec == 4: # frac new_denom = create_product([num.denom, denom]).expand() self._expanded = num.num / new_denom # If the denominator is a fraction multiply by the inverse and # use division to reduce expression. elif denom._prec == 4: # frac new_num = create_product([num, denom.denom]).expand() self._expanded = new_num / denom.num # Use division to reduce the expression, no need to call expand(). else: self._expanded = num / denom return self._expanded def get_unique_vars(self, var_type): "Get unique variables (Symbols) as a set." # Simply get the unique variables from numerator and denominator. var = self.num.get_unique_vars(var_type) var.update(self.denom.get_unique_vars(var_type)) return var def get_var_occurrences(self): """Determine the number of minimum number of times all variables occurs in the expression simply by calling the function on the numerator. """ return self.num.get_var_occurrences() def ops(self): "Return number of operations needed to evaluate fraction." # If we have a denominator, add the operations and +1 for '/'. if self.denom: return self.num.ops() + self.denom.ops() + 1 # Else we just return the number of operations for the # numerator. return self.num.ops() def reduce_ops(self): # Try to reduce operations by reducing the numerator and # denominator. # FIXME: We assume expanded variables here, so any common # variables in the numerator and denominator are already # removed i.e, there is no risk of encountering (x + x*y) / x # -> x*(1 + y)/x -> (1 + y). if self._reduced: return self._reduced num = self.num.reduce_ops() # Only return a new Fraction if we still have a denominator. if self.denom: self._reduced = create_fraction(num, self.denom.reduce_ops()) else: self._reduced = num return self._reduced def reduce_var(self, var): "Reduce the fraction by another variable through division of numerator." # We assume that this function is only called by reduce_ops, such that # we just need to consider the numerator. return create_fraction(self.num / var, self.denom) def reduce_vartype(self, var_type): """Reduce expression with given var_type. It returns a tuple (found, remain), where 'found' is an expression that only has variables of type == var_type. If no variables are found, found=(). The 'remain' part contains the leftover after division by 'found' such that: self = found*remain. """ # Reduce the numerator by the var type. 
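        # The contract matches the other Expr subclasses: every returned pair
        # (found, remain) satisfies self == found*remain, with 'found' holding
        # only variables of type var_type.  Rough sketch, where G0 and G1 are
        # hypothetical GEO symbols and Ip1 an IP symbol:
        #
        #   Fraction(Product([G0, Ip1]), G1).reduce_vartype(GEO)
        #       -> roughly [(G0/G1, Ip1)]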
if self.num._prec == 3: foo = self.num.reduce_vartype(var_type) if len(foo) == 1: num_found, num_remain = foo[0] else: # meg: I have only a marginal idea of what I'm doing # here! new_sum = [] for num_found, num_remain in foo: if num_found == (): new_sum.append(create_fraction(num_remain, self.denom)) else: new_sum.append(create_fraction(create_product([num_found, num_remain]), self.denom)) return create_sum(new_sum).expand().reduce_vartype(var_type) else: foo = self.num.reduce_vartype(var_type) if len(foo) != 1: raise RuntimeError("This case is not handled") num_found, num_remain = foo[0] # If the denominator is not a Sum things are straightforward. denom_found = None denom_remain = None if self.denom._prec != 3: # sum foo = self.denom.reduce_vartype(var_type) if len(foo) != 1: raise RuntimeError("This case is not handled") denom_found, denom_remain = foo[0] # If we have a Sum in the denominator, all terms must be # reduced by the same terms to make sense else: remain = [] for m in self.denom.vrs: foo = m.reduce_vartype(var_type) d_found, d_remain = foo[0] # If we've found a denom, but the new found is # different from the one already found, terminate loop # since it wouldn't make sense to reduce the fraction. # TODO: handle I0/((I0 + I1)/(G0 + G1) + (I1 + I2)/(G1 # + G2)) # better than just skipping. if len(foo) != 1 or (denom_found is not None and repr(d_found) != repr(denom_found)): # If the denominator of the entire sum has a type # which is lower than or equal to the vartype that # we are currently reducing for, we have to move # it outside the expression as well. # TODO: This is quite application specific, but I # don't see how we can do it differently at the # moment. if self.denom.t <= var_type: if not num_found: num_found = create_float(1) return [(create_fraction(num_found, self.denom), num_remain)] else: # The remainder is always a fraction return [(num_found, create_fraction(num_remain, self.denom))] # Update denom found and add remainder. denom_found = d_found remain.append(d_remain) # There is always a non-const remainder if denominator was a sum. denom_remain = create_sum(remain) # print "den f: ", denom_found # print "den r: ", denom_remain # If we have found a common denominator, but no found numerator, # create a constant. # TODO: Add more checks to avoid expansion. found = None # There is always a remainder. remain = create_fraction(num_remain, denom_remain).expand() # print "remain: ", repr(remain) if num_found: if denom_found: found = create_fraction(num_found, denom_found) else: found = num_found else: if denom_found: found = create_fraction(create_float(1), denom_found) else: found = () return [(found, remain)] ffc-2019.1.0.post0/ffc/quadrature/optimisedquadraturetransformer.py000066400000000000000000001047751346026536000254340ustar00rootroot00000000000000# -*- coding: utf-8 -*- "QuadratureTransformer (optimised) for quadrature code generation to translate UFL expressions." # Copyright (C) 2009-2011 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Anders Logg, 2009 # UFL common. from ufl.utils.sorting import sorted_by_key from ufl.measure import custom_integral_types, point_integral_types # UFL Classes. from ufl.classes import IntValue from ufl.classes import FloatValue from ufl.classes import Coefficient from ufl.classes import Operator # FFC modules. from ffc.log import error, ffc_assert from ffc.quadrature.cpp import format from ffc.quadrature.quadraturetransformerbase import QuadratureTransformerBase from ffc.quadrature.quadratureutils import create_permutations # Symbolics functions from ffc.quadrature.symbolics import (create_float, create_symbol, create_product, create_sum, create_fraction, BASIS, IP, GEO) def firstkey(d): return next(iter(d)) class QuadratureTransformerOpt(QuadratureTransformerBase): "Transform UFL representation to quadrature code." def __init__(self, *args): # Initialise base class. QuadratureTransformerBase.__init__(self, *args) # ------------------------------------------------------------------------- # Start handling UFL classes. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # AlgebraOperators (algebra.py). # ------------------------------------------------------------------------- def sum(self, o, *operands): code = {} # Loop operands that has to be summend. for op in operands: # If entries does already exist we can add the code, # otherwise just dump them in the element tensor. for key, val in sorted(op.items()): if key in code: code[key].append(val) else: code[key] = [val] # Add sums and group if necessary. for key, val in sorted_by_key(code): if len(val) > 1: code[key] = create_sum(val) elif val: code[key] = val[0] else: error("Where did the values go?") # If value is zero just ignore it. if abs(code[key].val) < format["epsilon"]: del code[key] return code def product(self, o, *operands): permute = [] not_permute = [] # Sort operands in objects that needs permutation and objects # that does not. for op in operands: # If we get an empty dict, something was zero and so is # the product. if not op: return {} if len(op) > 1 or (op and firstkey(op) != ()): permute.append(op) elif op and firstkey(op) == (): not_permute.append(op[()]) # Create permutations. # TODO: After all indices have been expanded I don't think # that we'll ever get more than a list of entries and values. permutations = create_permutations(permute) # Create code. code = {} if permutations: for key, val in permutations.items(): # Sort key in order to create a unique key. l = sorted(key) # noqa: E741 # TODO: I think this check can be removed for speed # since we just have a list of objects we should never # get any conflicts here. ffc_assert(tuple(l) not in code, "This key should not be in the code.") code[tuple(l)] = create_product(val + not_permute) else: return {(): create_product(not_permute)} return code def division(self, o, *operands): ffc_assert(len(operands) == 2, "Expected exactly two operands (numerator and denominator): " + repr(operands)) # Get the code from the operands. numerator_code, denominator_code = operands # TODO: Are these safety checks needed? ffc_assert(() in denominator_code and len(denominator_code) == 1, "Only support function type denominator: " + repr(denominator_code)) code = {} # Get denominator and create new values for the numerator. 
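        # The operand dicts map key tuples (identifying which basis-function
        # tables an entry multiplies) to symbolic values; the empty key ()
        # marks an entry with no test/trial function attached.  Sketch:
        # dividing {key: phi} by {(): w} yields {key: Fraction(phi, w)}
        # through the loop below.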
denominator = denominator_code[()] for key, val in numerator_code.items(): code[key] = create_fraction(val, denominator) return code def power(self, o): # Get base and exponent. base, expo = o.ufl_operands # Visit base to get base code. base_code = self.visit(base) # TODO: Are these safety checks needed? ffc_assert(() in base_code and len(base_code) == 1, "Only support function type base: " + repr(base_code)) # Get the base code and create power. val = base_code[()] # Handle different exponents if isinstance(expo, IntValue): return {(): create_product([val] * expo.value())} elif isinstance(expo, FloatValue): exp = format["floating point"](expo.value()) sym = create_symbol(format["std power"](str(val), exp), val.t, val, 1) return {(): sym} elif isinstance(expo, (Coefficient, Operator)): exp = self.visit(expo)[()] sym = create_symbol(format["std power"](str(val), exp), val.t, val, 1) return {(): sym} else: error("power does not support this exponent: " + repr(expo)) def abs(self, o, *operands): # TODO: Are these safety checks needed? ffc_assert(len(operands) == 1 and () in operands[0] and len(operands[0]) == 1, "Abs expects one operand of function type: " + repr(operands)) # Take absolute value of operand. val = operands[0][()] new_val = create_symbol(format["absolute value"](str(val)), val.t, val, 1) return {(): new_val} def min_value(self, o, *operands): # Take minimum value of operands. val0 = operands[0][()] val1 = operands[1][()] t = min(val0.t, val1.t) # FIXME: I don't know how to implement this the optimized # way. Is this right? new_val = create_symbol(format["min value"](str(val0), str(val1)), t) return {(): new_val} def max_value(self, o, *operands): # Take maximum value of operands. val0 = operands[0][()] val1 = operands[1][()] t = min(val0.t, val1.t) # FIXME: I don't know how to implement this the optimized # way. Is this right? new_val = create_symbol(format["max value"](str(val0), str(val1)), t) return {(): new_val} # ------------------------------------------------------------------------- # Condition, Conditional (conditional.py). # ------------------------------------------------------------------------- def not_condition(self, o, *operands): # This is a Condition but not a BinaryCondition, and the # operand will be another Condition # Get condition expression and do safety checks. # Might be a bit too strict? c, = operands ffc_assert(len(c) == 1 and firstkey(c) == (), "Condition for NotCondition should only be one function: " + repr(c)) sym = create_symbol(format["not"](str(c[()])), c[()].t, base_op=c[()].ops() + 1) return {(): sym} def binary_condition(self, o, *operands): # Get LHS and RHS expressions and do safety checks. Might be # a bit too strict? lhs, rhs = operands ffc_assert(len(lhs) == 1 and firstkey(lhs) == (), "LHS of Condtion should only be one function: " + repr(lhs)) ffc_assert(len(rhs) == 1 and firstkey(rhs) == (), "RHS of Condtion should only be one function: " + repr(rhs)) # Map names from UFL to cpp.py. name_map = {"==": "is equal", "!=": "not equal", "<": "less than", ">": "greater than", "<=": "less equal", ">=": "greater equal", "&&": "and", "||": "or"} # Get the minimum type t = min(lhs[()].t, rhs[()].t) ops = lhs[()].ops() + rhs[()].ops() + 1 cond = str(lhs[()]) + format[name_map[o._name]] + str(rhs[()]) sym = create_symbol(format["grouping"](cond), t, base_op=ops) return {(): sym} def conditional(self, o, *operands): # Get condition and return values; and do safety check. 
cond, true, false = operands ffc_assert(len(cond) == 1 and firstkey(cond) == (), "Condtion should only be one function: " + repr(cond)) ffc_assert(len(true) == 1 and firstkey(true) == (), "True value of Condtional should only be one function: " + repr(true)) ffc_assert(len(false) == 1 and firstkey(false) == (), "False value of Condtional should only be one function: " + repr(false)) # Get values and test for None t_val = true[()] f_val = false[()] # Get the minimum type and number of operations # TODO: conditionals are currently always located inside the # ip loop, therefore the type has to be at least IP (fix bug # #1082048). This can be optimised. t = min([cond[()].t, t_val.t, f_val.t, IP]) ops = sum([cond[()].ops(), t_val.ops(), f_val.ops()]) # Create expression for conditional # TODO: Handle this differently to expose the variables which # are used to create the expressions. expr = create_symbol(format["evaluate conditional"](cond[()], t_val, f_val), t) num = len(self.conditionals) name = create_symbol(format["conditional"](num), t) if expr not in self.conditionals: self.conditionals[expr] = (t, ops, num) else: num = self.conditionals[expr][2] name = create_symbol(format["conditional"](num), t) return {(): name} # ------------------------------------------------------------------------- # FacetNormal, CellVolume, Circumradius, FacetArea (geometry.py). # ------------------------------------------------------------------------- def cell_coordinate(self, o): # FIXME error("This object should be implemented by the child class.") def facet_coordinate(self, o): # FIXME error("This object should be implemented by the child class.") def cell_origin(self, o): # FIXME error("This object should be implemented by the child class.") def facet_origin(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_origin(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def facet_normal(self, o): components = self.component() # Safety check. ffc_assert(len(components) == 1, "FacetNormal expects 1 component index: " + repr(components)) # Handle 1D as a special case. # FIXME: KBO: This has to change for mD elements in R^n : m < # n if self.gdim == 1: # FIXME: MSA UFL uses shape (1,) now, can we remove the special case here then? 
normal_component = format["normal component"](self.restriction, "") else: normal_component = format["normal component"](self.restriction, components[0]) self.trans_set.add(normal_component) return {(): create_symbol(normal_component, GEO)} def cell_normal(self, o): # FIXME error("This object should be implemented by the child class.") def cell_volume(self, o): # FIXME: KBO: This has to change for higher order elements # detJ = format["det(J)"](self.restriction) # volume = format["absolute value"](detJ) # self.trans_set.add(detJ) volume = format["cell volume"](self.restriction) self.trans_set.add(volume) return {(): create_symbol(volume, GEO)} def circumradius(self, o): # FIXME: KBO: This has to change for higher order elements circumradius = format["circumradius"](self.restriction) self.trans_set.add(circumradius) return {(): create_symbol(circumradius, GEO)} def facet_area(self, o): # FIXME: KBO: This has to change for higher order elements # NOTE: Omitting restriction because the area of a facet is # the same on both sides. # FIXME: Since we use the scale factor, facet area has no # meaning for cell integrals. (Need check in FFC or UFL). area = format["facet area"] self.trans_set.add(area) return {(): create_symbol(area, GEO)} def min_facet_edge_length(self, o): # FIXME: this has no meaning for cell integrals. (Need check # in FFC or UFL). tdim = self.tdim if tdim < 3: return self.facet_area(o) edgelen = format["min facet edge length"](self.restriction) self.trans_set.add(edgelen) return {(): create_symbol(edgelen, GEO)} def max_facet_edge_length(self, o): # FIXME: this has no meaning for cell integrals. (Need check # in FFC or UFL). tdim = self.tdim if tdim < 3: return self.facet_area(o) edgelen = format["max facet edge length"](self.restriction) self.trans_set.add(edgelen) return {(): create_symbol(edgelen, GEO)} def cell_orientation(self, o): # FIXME error("This object should be implemented by the child class.") def quadrature_weight(self, o): # FIXME error("This object should be implemented by the child class.") def create_argument(self, ufl_argument, derivatives, component, local_comp, local_offset, ffc_element, transformation, multiindices, tdim, gdim, avg): "Create code for basis functions, and update relevant tables of used basis." # Prefetch formats to speed up code generation. f_transform = format["transform"] f_detJ = format["det(J)"] # Reset code code = {} # Affine mapping if transformation == "affine": # Loop derivatives and get multi indices. for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] # Create mapping and basis name. mapping, basis = self._create_mapping_basis(component, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: # Add transformation if needed. code[mapping].append(self.__apply_transform(basis, derivatives, multi, tdim, gdim)) # Handle non-affine mappings. else: ffc_assert(avg is None, "Taking average is not supported for non-affine mappings.") # Loop derivatives and get multi indices. for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] if transformation in ["covariant piola", "contravariant piola"]: for c in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis(c + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: # Multiply basis by appropriate transform. 
if transformation == "covariant piola": dxdX = create_symbol(f_transform("JINV", c, local_comp, tdim, gdim, self.restriction), GEO) basis = create_product([dxdX, basis]) elif transformation == "contravariant piola": detJ = create_fraction(create_float(1), create_symbol(f_detJ(self.restriction), GEO)) dXdx = create_symbol(f_transform("J", local_comp, c, gdim, tdim, self.restriction), GEO) basis = create_product([detJ, dXdx, basis]) # Add transformation if needed. code[mapping].append(self.__apply_transform(basis, derivatives, multi, tdim, gdim)) elif transformation == "double covariant piola": # g_ij = (Jinv)_ki G_kl (Jinv)lj i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis( k * tdim + l + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: J1 = create_symbol( f_transform("JINV", k, i, tdim, gdim, self.restriction), GEO) J2 = create_symbol( f_transform("JINV", l, j, tdim, gdim, self.restriction), GEO) basis = create_product([J1, basis, J2]) # Add transformation if needed. code[mapping].append( self.__apply_transform( basis, derivatives, multi, tdim, gdim)) elif transformation == "double contravariant piola": # g_ij = (detJ)^(-2) J_ik G_kl J_jl i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis( k * tdim + l + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: J1 = create_symbol( f_transform("J", i, k, gdim, tdim, self.restriction), GEO) J2 = create_symbol( f_transform("J", j, l, gdim, tdim, self.restriction), GEO) invdetJ = create_fraction( create_float(1), create_symbol(f_detJ(self.restriction), GEO)) basis = create_product([invdetJ, invdetJ, J1, basis, J2]) # Add transformation if needed. code[mapping].append( self.__apply_transform( basis, derivatives, multi, tdim, gdim)) else: error("Transformation is not supported: " + repr(transformation)) # Add sums and group if necessary. for key, val in list(code.items()): if len(val) > 1: code[key] = create_sum(val) else: code[key] = val[0] return code def create_function(self, ufl_function, derivatives, component, local_comp, local_offset, ffc_element, is_quad_element, transformation, multiindices, tdim, gdim, avg): "Create code for basis functions, and update relevant tables of used basis." ffc_assert(ufl_function in self._function_replace_values, "Expecting ufl_function to have been mapped prior to this call.") # Prefetch formats to speed up code generation. f_transform = format["transform"] f_detJ = format["det(J)"] # Reset code code = [] # Handle affine mappings. if transformation == "affine": # Loop derivatives and get multi indices. for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] # Create function name. function_name = self._create_function_name(component, deriv, avg, is_quad_element, ufl_function, ffc_element) if function_name: # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) # Handle non-affine mappings. else: ffc_assert(avg is None, "Taking average is not supported for non-affine mappings.") # Loop derivatives and get multi indices. 
for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] if transformation in ["covariant piola", "contravariant piola"]: for c in range(tdim): function_name = self._create_function_name(c + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) if function_name: # Multiply basis by appropriate transform. if transformation == "covariant piola": dxdX = create_symbol(f_transform("JINV", c, local_comp, tdim, gdim, self.restriction), GEO) function_name = create_product([dxdX, function_name]) elif transformation == "contravariant piola": detJ = create_fraction(create_float(1), create_symbol(f_detJ(self.restriction), GEO)) dXdx = create_symbol(f_transform("J", local_comp, c, gdim, tdim, self.restriction), GEO) function_name = create_product([detJ, dXdx, function_name]) # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) elif transformation == "double covariant piola": # g_ij = (Jinv)_ki G_kl (Jinv)lj i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. function_name = self._create_function_name( k * tdim + l + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) J1 = create_symbol( f_transform("JINV", k, i, tdim, gdim, self.restriction), GEO) J2 = create_symbol( f_transform("JINV", l, j, tdim, gdim, self.restriction), GEO) function_name = create_product([J1, function_name, J2]) # Add transformation if needed. code.append(self.__apply_transform( function_name, derivatives, multi, tdim, gdim)) elif transformation == "double contravariant piola": # g_ij = (detJ)^(-2) J_ik G_kl J_jl i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. function_name = self._create_function_name( k * tdim + l + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) J1 = create_symbol( f_transform("J", i, k, tdim, gdim, self.restriction), GEO) J2 = create_symbol( f_transform("J", j, l, tdim, gdim, self.restriction), GEO) invdetJ = create_fraction( create_float(1), create_symbol(f_detJ(self.restriction), GEO)) function_name = create_product([invdetJ, invdetJ, J1, function_name, J2]) # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) else: error("Transformation is not supported: ", repr(transformation)) if not code: return create_float(0.0) elif len(code) > 1: code = create_sum(code) else: code = code[0] return code # ------------------------------------------------------------------------- # Helper functions for Argument and Coefficient # ------------------------------------------------------------------------- def __apply_transform(self, function, derivatives, multi, tdim, gdim): "Apply transformation (from derivatives) to basis or function." f_transform = format["transform"] # Add transformation if needed. 
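        # Each spatial derivative is pulled back with the chain rule (sketch):
        #   d(phi)/dx_i = sum_X (Jinv)_{X i} * d(phi)/dX_X
        # so one JINV factor is appended per derivative direction below.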
transforms = [] if self.integral_type not in custom_integral_types: for i, direction in enumerate(derivatives): ref = multi[i] t = f_transform("JINV", ref, direction, tdim, gdim, self.restriction) transforms.append(create_symbol(t, GEO)) transforms.append(function) return create_product(transforms) # ------------------------------------------------------------------------- # Helper functions for transformation of UFL objects in base class # ------------------------------------------------------------------------- def _create_symbol(self, symbol, domain): return {(): create_symbol(symbol, domain)} def _create_product(self, symbols): return create_product(symbols) def _format_scalar_value(self, value): # print("format_scalar_value: %d" % value) if value is None: return {(): create_float(0.0)} return {(): create_float(value)} def _math_function(self, operands, format_function): # TODO: Are these safety checks needed? ffc_assert(len(operands) == 1 and () in operands[0] and len(operands[0]) == 1, "MathFunctions expect one operand of function type: " + repr(operands)) # Use format function on value of operand. operand = operands[0] for key, val in list(operand.items()): new_val = create_symbol(format_function(str(val)), val.t, val, 1) operand[key] = new_val # raise Exception("pause") return operand def _bessel_function(self, operands, format_function): # TODO: Are these safety checks needed? # TODO: work on reference instead of copies? (like # math_function) ffc_assert(len(operands) == 2, "BesselFunctions expect two operands of function type: " + repr(operands)) nu, x = operands ffc_assert(len(nu) == 1 and () in nu, "Expecting one operand of function type as first argument to BesselFunction : " + repr(nu)) ffc_assert(len(x) == 1 and () in x, "Expecting one operand of function type as second argument to BesselFunction : " + repr(x)) nu = nu[()] x = x[()] if nu is None: nu = format["floating point"](0.0) if x is None: x = format["floating point"](0.0) sym = create_symbol(format_function(x, nu), x.t, x, 1) return {(): sym} # ------------------------------------------------------------------------- # Helper functions for code_generation() # ------------------------------------------------------------------------- def _count_operations(self, expression): return expression.ops() def _create_entry_data(self, val, integral_type): # Multiply value by weight and determinant ACCESS = GEO weight = format["weight"](self.points) if self.points > 1: weight += format["component"]("", format["integration points"]) ACCESS = IP weight = self._create_symbol(weight, ACCESS)[()] # Create value. if integral_type in (point_integral_types + custom_integral_types): trans_set = set() value = create_product([val, weight]) else: f_scale_factor = format["scale factor"] trans_set = set([f_scale_factor]) value = create_product([val, weight, create_symbol(f_scale_factor, GEO)]) # Update sets of used variables (if they will not be used # because of optimisations later, they will be reset). trans_set.update([str(x) for x in value.get_unique_vars(GEO)]) used_points = set([self.points]) ops = self._count_operations(value) used_psi_tables = set([self.psi_tables_map[b] for b in value.get_unique_vars(BASIS)]) return (value, ops, [trans_set, used_points, used_psi_tables]) ffc-2019.1.0.post0/ffc/quadrature/parameters.py000066400000000000000000000054251346026536000212110ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Quadrature representation class for UFL" # Copyright (C) 2009-2014 Kristian B. 
Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Anders Logg 2009, 2014 # Modified by Martin Sandve Alnæs 2013-2017 # UFL modules from ufl import custom_integral_types # FFC modules from ffc.log import warning def default_optimize_parameters(): return { "eliminate zeros": False, "optimisation": False, "ignore ones": False, "remove zero terms": False, "ignore zero tables": False, } def parse_optimise_parameters(parameters, itg_data): # Initialize parameters optimise_parameters = default_optimize_parameters() # Set optimized parameters if parameters["optimize"] and itg_data.integral_type in custom_integral_types: warning("Optimization not available for custom integrals, skipping optimization.") elif parameters["optimize"]: optimise_parameters["ignore ones"] = True optimise_parameters["remove zero terms"] = True optimise_parameters["ignore zero tables"] = True # Do not include this in below if/else clause since we want to # be able to switch on this optimisation in addition to the # other optimisations. if "eliminate_zeros" in parameters: optimise_parameters["eliminate zeros"] = True if "simplify_expressions" in parameters: optimise_parameters["optimisation"] = "simplify_expressions" elif "precompute_ip_const" in parameters: optimise_parameters["optimisation"] = "precompute_ip_const" elif "precompute_basis_const" in parameters: optimise_parameters["optimisation"] = "precompute_basis_const" # The current default optimisation (for -O) is equal to # '-feliminate_zeros -fsimplify_expressions'. else: # If '-O -feliminate_zeros' was given on the command line, # do not simplify expressions if "eliminate_zeros" not in parameters: optimise_parameters["eliminate zeros"] = True optimise_parameters["optimisation"] = "simplify_expressions" return optimise_parameters ffc-2019.1.0.post0/ffc/quadrature/product.py000066400000000000000000000402001346026536000205140ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a class to represent a product." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# # First added: 2009-07-12 # Last changed: 2010-03-11 from functools import reduce # FFC modules from ffc.log import error from ffc.quadrature.cpp import format # FFC quadrature modules from .symbolics import create_float from .symbolics import create_product from .symbolics import create_sum from .symbolics import create_fraction from .expr import Expr # FFC quadrature modules from .floatvalue import FloatValue class Product(Expr): __slots__ = ("vrs", "_expanded") def __init__(self, variables): """Initialise a Product object, it derives from Expr and contains the additional variables: vrs - a list of variables _expanded - object, an expanded object of self, e.g., self = x*(2+y) -> self._expanded = (2*x + x*y) (a sum), or self = 2*x -> self._expanded = 2*x (self). NOTE: self._prec = 2.""" # Initialise value, list of variables, class. self.val = 1.0 self.vrs = [] self._prec = 2 # Initially set _expanded to True. self._expanded = True # Process variables if we have any. if variables: # Remove nested Products and test for expansion. float_val = 1.0 for var in variables: # If any value is zero the entire product is zero. if var.val == 0.0: self.val = 0.0 self.vrs = [create_float(0.0)] float_val = 0.0 break # Collect floats into one variable if var._prec == 0: # float float_val *= var.val continue # Take care of product such that we don't create # nested products. elif var._prec == 2: # prod # If expanded product is a float, just add it. if var._expanded and var._expanded._prec == 0: float_val *= var._expanded.val # If expanded product is symbol, this product is # still expanded and add symbol. elif var._expanded and var._expanded._prec == 1: self.vrs.append(var._expanded) # If expanded product is still a product, add the # variables. elif var._expanded and var._expanded._prec == 2: # Add copies of the variables of other product # (collect floats). if var._expanded.vrs[0]._prec == 0: float_val *= var._expanded.vrs[0].val self.vrs += var._expanded.vrs[1:] continue self.vrs += var._expanded.vrs # If expanded product is a sum or fraction, we # must expand this product later. elif var._expanded and var._expanded._prec in (3, 4): self._expanded = False self.vrs.append(var._expanded) # Else the product is not expanded, and we must # expand this one later else: self._expanded = False # Add copies of the variables of other product # (collect floats). if var.vrs[0]._prec == 0: float_val *= var.vrs[0].val self.vrs += var.vrs[1:] continue self.vrs += var.vrs continue # If we have sums or fractions in the variables the # product is not expanded. elif var._prec in (3, 4): # sum or frac self._expanded = False # Just add any variable at this point to list of new # vars. self.vrs.append(var) # If value is 1 there is no need to include it, unless it # is the only parameter left i.e., 2*0.5 = 1. if float_val and float_val != 1.0: self.val = float_val self.vrs.append(create_float(float_val)) # If we no longer have any variables add the float. elif not self.vrs: self.val = float_val self.vrs = [create_float(float_val)] # If 1.0 is the only value left, add it. elif abs(float_val - 1.0) < format["epsilon"] and not self.vrs: self.val = 1.0 self.vrs = [create_float(1)] # If we don't have any variables the product is zero. else: self.val = 0.0 self.vrs = [create_float(0)] # The type is equal to the lowest variable type. self.t = min([v.t for v in self.vrs]) # Sort the variables such that comparisons work. 
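        # Net effect of the normalisation above (illustrative; 'x' is any
        # Symbol and the printed form depends on format["float"]):
        #
        #   create_product([create_float(2.0),
        #                   create_product([x, create_float(3.0)])])
        #       -> vrs == [FloatValue(6), x]; nested products are flattened
        #          and float factors are collected into a single value.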
self.vrs.sort() # Compute the representation now, such that we can use it # directly in the __eq__ and __ne__ methods (improves # performance a bit, but only when objects are cached). self._repr = "Product([%s])" % ", ".join([v._repr for v in self.vrs]) # Use repr as hash value. self._hash = hash(self._repr) # Store self as expanded value, if we did not encounter any # sums or fractions. if self._expanded: self._expanded = self # Print functions. def __str__(self): "Simple string representation which will appear in the generated code." # If we have more than one variable and the first float is -1 # exlude the 1. if len(self.vrs) > 1 and self.vrs[0]._prec == 0 and self.vrs[0].val == -1.0: # Join string representation of members by multiplication return format["sub"](["", format["mul"]([str(v) for v in self.vrs[1:]])]) return format["mul"]([str(v) for v in self.vrs]) # Binary operators. def __add__(self, other): "Addition by other objects." # NOTE: Assuming expanded variables. # If two products are equal, add their float values. if other._prec == 2 and self.get_vrs() == other.get_vrs(): # Return expanded product, to get rid of 3*x + -2*x -> x, not 1*x. return create_product([create_float(self.val + other.val)] + list(self.get_vrs())).expand() elif other._prec == 1: # sym if self.get_vrs() == (other,): # Return expanded product, to get rid of -x + x -> 0, # not product(0). return create_product([create_float(self.val + 1.0), other]).expand() # Return sum return create_sum([self, other]) def __sub__(self, other): "Subtract other objects." if other._prec == 2 and self.get_vrs() == other.get_vrs(): # Return expanded product, to get rid of 3*x + -2*x -> x, # not 1*x. return create_product([create_float(self.val - other.val)] + list(self.get_vrs())).expand() elif other._prec == 1: # sym if self.get_vrs() == (other,): # Return expanded product, to get rid of -x + x -> 0, # not product(0). return create_product([create_float(self.val - 1.0), other]).expand() # Return sum return create_sum([self, create_product([FloatValue(-1), other])]) def __mul__(self, other): "Multiplication by other objects." # If product will be zero. if self.val == 0.0 or other.val == 0.0: return create_float(0) # If other is a Sum or Fraction let them handle it. if other._prec in (3, 4): # sum or frac return other.__mul__(self) # NOTE: We expect expanded sub-expressions with no nested # operators. Create new product adding float or symbol. if other._prec in (0, 1): # float or sym return create_product(self.vrs + [other]) # Create new product adding all variables from other Product. return create_product(self.vrs + other.vrs) def __truediv__(self, other): "Division by other objects." # If division is illegal (this should definitely not happen). if other.val == 0.0: error("Division by zero.") # If fraction will be zero. if self.val == 0.0: return self.vrs[0] # If other is a Sum we can only return a fraction. # NOTE: Expect that other is expanded i.e., x + x -> 2*x which # can be handled # TODO: Fix x / (x + x*y) -> 1 / (1 + y). # Or should this be handled when reducing a fraction? if other._prec == 3: # sum return create_fraction(self, other) # Handle division by FloatValue, Symbol, Product and Fraction. # NOTE: assuming that we get expanded variables. # Copy numerator, and create list for denominator. num = self.vrs[:] denom = [] # Add floatvalue, symbol and products to the list of denominators. if other._prec in (0, 1): # float or sym denom = [other] elif other._prec == 2: # prod # Get copy. denom = other.vrs[:] # fraction. 
else: error("Did not expected to divide by fraction.") # Loop entries in denominator and remove from numerator (and # denominator). for d in denom[:]: # Add the inverse of a float to the numerator and continue. if d._prec == 0: # float num.append(create_float(1.0 / d.val)) denom.remove(d) continue if d in num: num.remove(d) denom.remove(d) # Create appropriate return value depending on remaining data. if len(num) > 1: # TODO: Make this more efficient? # Create product and expand to reduce # Product([5, 0.2]) == Product([1]) -> Float(1). num = create_product(num).expand() elif num: num = num[0] # If all variables in the numerator has been eliminated we # need to add '1'. else: num = create_float(1) if len(denom) > 1: return create_fraction(num, create_product(denom)) elif denom: return create_fraction(num, denom[0]) # If we no longer have a denominater, just return the # numerator. return num __div__ = __truediv__ # Public functions. def expand(self): "Expand all members of the product." # If we just have one variable, compute the expansion of it # (it is not a Product, so it should be safe). We need this to # get rid of Product([Symbol]) type expressions. if len(self.vrs) == 1: self._expanded = self.vrs[0].expand() return self._expanded # If product is already expanded, simply return the expansion. if self._expanded: return self._expanded # Sort variables such that we don't call the '*' operator more # than we have to. float_syms = [] sum_fracs = [] for v in self.vrs: if v._prec in (0, 1): # float or sym float_syms.append(v) continue exp = v.expand() # If the expanded expression is a float, sym or product, # we can add the variables. if exp._prec in (0, 1): # float or sym float_syms.append(exp) elif exp._prec == 2: # prod float_syms += exp.vrs else: sum_fracs.append(exp) # If we have floats or symbols add the symbols to the rest as a single # product (for speed). if len(float_syms) > 1: sum_fracs.append(create_product(float_syms)) elif float_syms: sum_fracs.append(float_syms[0]) # Use __mult__ to reduce list to one single variable. # TODO: Can this be done more efficiently without creating all # the intermediate variables? self._expanded = reduce(lambda x, y: x * y, sum_fracs) return self._expanded def get_unique_vars(self, var_type): "Get unique variables (Symbols) as a set." # Loop all members and update the set. var = set() for v in self.vrs: var.update(v.get_unique_vars(var_type)) return var def get_var_occurrences(self): """Determine the number of times all variables occurs in the expression. Returns a dictionary of variables and the number of times they occur. """ # TODO: The product should be expanded at this stage, should # we check this? # Create dictionary and count number of occurrences of each # variable. d = {} for v in self.vrs: if v in d: d[v] += 1 continue d[v] = 1 return d def get_vrs(self): "Return all 'real' variables." # A product should only have one float value after # initialisation. # TODO: Use this knowledge directly in other classes? if self.vrs[0]._prec == 0: # float return tuple(self.vrs[1:]) return tuple(self.vrs) def ops(self): "Get the number of operations to compute product." # It takes n-1 operations ('*') for a product of n members. op = len(self.vrs) - 1 # Loop members and add their count. for v in self.vrs: op += v.ops() # Subtract 1, if the first member is -1 i.e., -1*x*y -> x*y is # only 1 op. if self.vrs[0]._prec == 0 and self.vrs[0].val == -1.0: op -= 1 return op def reduce_ops(self): "Reduce the number of operations to evaluate the product." 
# It's not possible to reduce a product if it is already # expanded and it should be at this stage. # TODO: Is it safe to return self.expand().reduce_ops() if # product is not expanded? And do we want to? # TODO: This should crash if it goes wrong return self._expanded def reduce_vartype(self, var_type): """Reduce expression with given var_type. It returns a tuple (found, remain), where 'found' is an expression that only has variables of type == var_type. If no variables are found, found=(). The 'remain' part contains the leftover after division by 'found' such that: self = found*remain. """ # Sort variables according to type. found = [] remains = [] for v in self.vrs: if v.t == var_type: found.append(v) continue remains.append(v) # Create appropriate object for found. if len(found) > 1: found = create_product(found) elif found: found = found.pop() # We did not find any variables. else: return [((), self)] # Create appropriate object for remains. if len(remains) > 1: remains = create_product(remains) elif remains: remains = remains.pop() # We don't have anything left. else: return [(self, create_float(1))] # Return whatever we found. return [(found, remains)] ffc-2019.1.0.post0/ffc/quadrature/quadraturegenerator.py000066400000000000000000001266571346026536000231450ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Code generator for quadrature representation." # Copyright (C) 2009-2014 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Mehdi Nikbakht 2010 # Modified by Anders Logg 2013-2014 # Modified by Martin Sandve Alnæs 2013-2014 # Python modules import functools import numpy # UFL modules from ufl.utils.sorting import sorted_by_key from ufl.utils.derivativetuples import compute_derivative_tuples from ufl import custom_integral_types # FFC modules from ffc.log import ffc_assert, error, warning from ffc.quadrature.cpp import format, remove_unused, indent from ffc.representationutils import initialize_integral_code # Utility and optimization functions for quadraturegenerator from ffc.quadrature.symbolics import generate_aux_constants def generate_integral_code(ir, prefix, parameters): "Generate code for integral from intermediate representation." # Prefetch formatting to speedup code generation (well...) ret = format["return"] # Generate code code = initialize_integral_code(ir, prefix, parameters) code["num_cells"] = indent(ret(ir["num_cells"]), 4) code["tabulate_tensor"] = indent(_tabulate_tensor(ir, prefix, parameters), 4) code["additional_includes_set"] = ir["additional_includes_set"] precision = ir["integrals_metadata"].get("precision") if precision is not None and precision != parameters["precision"]: warning("Ignoring precision in integral metadata compiled " "using quadrature representation. Not implemented.") return code def _tabulate_tensor(ir, prefix, parameters): "Generate code for a single integral (tabulate_tensor())." 
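    # Overview (sketch): per integral type this function assembles
    #   1) the element-tensor computation via _generate_element_tensor(),
    #   2) geometry snippets (Jacobian, inverse, determinants, normals, ...),
    #   3) tabulated quadrature weights and basis-function (psi) tables,
    # and joins the pieces into the body returned for tabulate_tensor().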
# Prefetch formatting to speedup code generation f_comment = format["comment"] f_G = format["geometry constant"] f_const_double = format["assign"] f_switch = format["switch"] f_float = format["float"] f_assign = format["assign"] f_A = format["element tensor"] f_r = format["free indices"][0] f_loop = format["generate loop"] f_int = format["int"] f_facet = format["facet"] # Get data opt_par = ir["optimise_parameters"] integral_type = ir["integral_type"] gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_facets = ir["num_facets"] num_vertices = ir["num_vertices"] prim_idims = ir["prim_idims"] integrals = ir["trans_integrals"] geo_consts = ir["geo_consts"] oriented = ir["needs_oriented"] element_data = ir["element_data"] num_cells = ir["num_cells"] # Create sets of used variables used_weights = set() used_psi_tables = set() used_nzcs = set() trans_set = set() sets = [used_weights, used_psi_tables, used_nzcs, trans_set] affine_tables = {} # TODO: This is not populated anywhere, remove? quadrature_rules = ir["quadrature_rules"] operations = [] if integral_type == "cell": # Generate code for computing element tensor tensor_code, mem_code, num_ops = _generate_element_tensor(integrals, sets, opt_par, gdim, tdim) tensor_code = "\n".join(tensor_code) # Set operations equal to num_ops (for printing info on operations). operations.append([num_ops]) # Generate code for basic geometric quantities jacobi_code = "" jacobi_code += format["compute_jacobian"](tdim, gdim) jacobi_code += "\n" jacobi_code += format["compute_jacobian_inverse"](tdim, gdim) if oriented: jacobi_code += format["orientation"](tdim, gdim) jacobi_code += "\n" jacobi_code += format["scale factor snippet"] # Generate code for cell volume and circumradius jacobi_code += "\n\n" + format["generate cell volume"](tdim, gdim, integral_type) jacobi_code += "\n\n" + format["generate circumradius"](tdim, gdim, integral_type) elif integral_type == "exterior_facet": # Iterate over facets cases = [None for i in range(num_facets)] for i in range(num_facets): # Update transformer with facets and generate case code + # set of used geometry terms. c, mem_code, ops = _generate_element_tensor(integrals[i], sets, opt_par, gdim, tdim) case = [f_comment("Total number of operations to compute element tensor (from this point): %d" % ops)] case += c cases[i] = "\n".join(case) # Save number of operations (for printing info on # operations). operations.append([i, ops]) # Generate tensor code for all cases using a switch. 
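        # The generated C++ then dispatches on the facet index, roughly:
        #
        #   switch (facet)
        #   {
        #   case 0:
        #     // element tensor code for facet 0
        #     break;
        #   ...
        #   }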
tensor_code = f_switch(f_facet(None), cases) # Generate code for basic geometric quantities jacobi_code = "" jacobi_code += format["compute_jacobian"](tdim, gdim) jacobi_code += "\n" jacobi_code += format["compute_jacobian_inverse"](tdim, gdim) if oriented: jacobi_code += format["orientation"](tdim, gdim) jacobi_code += "\n" jacobi_code += "\n\n" + format["facet determinant"](tdim, gdim) jacobi_code += "\n\n" + format["generate normal"](tdim, gdim, integral_type) jacobi_code += "\n\n" + format["generate facet area"](tdim, gdim) if tdim == 3: jacobi_code += "\n\n" + format["generate min facet edge length"](tdim, gdim) jacobi_code += "\n\n" + format["generate max facet edge length"](tdim, gdim) # Generate code for cell volume and circumradius jacobi_code += "\n\n" + format["generate cell volume"](tdim, gdim, integral_type) jacobi_code += "\n\n" + format["generate circumradius"](tdim, gdim, integral_type) elif integral_type == "interior_facet": # Modify the dimensions of the primary indices because we have # a macro element prim_idims = [d * 2 for d in prim_idims] # Iterate over combinations of facets cases = [[None for j in range(num_facets)] for i in range(num_facets)] for i in range(num_facets): for j in range(num_facets): # Update transformer with facets and generate case # code + set of used geometry terms. c, mem_code, ops = _generate_element_tensor(integrals[i][j], sets, opt_par, gdim, tdim) case = [f_comment("Total number of operations to compute element tensor (from this point): %d" % ops)] case += c cases[i][j] = "\n".join(case) # Save number of operations (for printing info on # operations). operations.append([i, j, ops]) # Generate tensor code for all cases using a switch. tensor_code = f_switch(f_facet("+"), [f_switch(f_facet("-"), cases[i]) for i in range(len(cases))]) # Generate code for basic geometric quantities jacobi_code = "" for _r in ["+", "-"]: jacobi_code += format["compute_jacobian"](tdim, gdim, r=_r) jacobi_code += "\n" jacobi_code += format["compute_jacobian_inverse"](tdim, gdim, r=_r) if oriented: jacobi_code += format["orientation"](tdim, gdim, r=_r) jacobi_code += "\n" jacobi_code += "\n\n" + format["facet determinant"](tdim, gdim, r="+") jacobi_code += "\n\n" + format["generate normal"](tdim, gdim, integral_type) jacobi_code += "\n\n" + format["generate facet area"](tdim, gdim) if tdim == 3: jacobi_code += "\n\n" + format["generate min facet edge length"](tdim, gdim, r="+") jacobi_code += "\n\n" + format["generate max facet edge length"](tdim, gdim, r="+") # Generate code for cell volume and circumradius jacobi_code += "\n\n" + format["generate cell volume"](tdim, gdim, integral_type) jacobi_code += "\n\n" + format["generate circumradius"](tdim, gdim, integral_type) elif integral_type == "vertex": # Iterate over vertices cases = [None for i in range(num_vertices)] for i in range(num_vertices): # Update transformer with vertices and generate case code # + set of used geometry terms. c, mem_code, ops = _generate_element_tensor(integrals[i], sets, opt_par, gdim, tdim) case = [f_comment("Total number of operations to compute element tensor (from this point): %d" % ops)] case += c cases[i] = "\n".join(case) # Save number of operations (for printing info on # operations). operations.append([i, ops]) # Generate tensor code for all cases using a switch. 
tensor_code = f_switch(format["vertex"], cases) # Generate code for basic geometric quantities jacobi_code = "" jacobi_code += format["compute_jacobian"](tdim, gdim) jacobi_code += "\n" jacobi_code += format["compute_jacobian_inverse"](tdim, gdim) if oriented: jacobi_code += format["orientation"](tdim, gdim) jacobi_code += "\n" jacobi_code += "\n\n" + format["facet determinant"](tdim, gdim) # FIXME: This is not defined in a point??? elif integral_type in custom_integral_types: # Set number of cells if integral_type == "cutcell": num_cells = 1 elif integral_type == "interface": num_cells = 2 elif integral_type == "overlap": num_cells = 2 # Warning that more than two cells in only partly supported. # The missing piece is to couple multiple cells to # restrictions other than '+' and '-'. if num_cells > 2: warning("Custom integrals with more than two cells only partly supported.") # Modify the dimensions of the primary indices because we have a macro element if num_cells == 2: prim_idims = [d * 2 for d in prim_idims] # Check whether we need to generate facet normals generate_custom_facet_normal = num_cells == 2 # Generate code for computing element tensor tensor_code, mem_code, num_ops = _generate_element_tensor(integrals, sets, opt_par, gdim, tdim, generate_custom_facet_normal) tensor_code = "\n".join(tensor_code) # Set operations equal to num_ops (for printing info on # operations). operations.append([num_ops]) # FIXME: Jacobi code is only needed when we use cell volume or # circumradius. # FIXME: Does not seem to be removed by removed_unused. # Generate code for basic geometric quantities jacobi_code = "" for i in range(num_cells): r = i if num_cells > 1 else None jacobi_code += "\n" jacobi_code += f_comment("--- Compute geometric quantities on cell %d ---" % i) jacobi_code += "\n\n" if num_cells > 1: jacobi_code += f_comment("Extract vertex coordinates\n") jacobi_code += format["extract_cell_coordinates"]((tdim + 1) * gdim * i, r=i) jacobi_code += "\n\n" jacobi_code += format["compute_jacobian"](tdim, gdim, r=r) jacobi_code += "\n" jacobi_code += format["compute_jacobian_inverse"](tdim, gdim, r=r) jacobi_code += "\n" jacobi_code += format["generate cell volume"](tdim, gdim, integral_type, r=r if num_cells > 1 else None) jacobi_code += "\n" jacobi_code += format["generate circumradius"](tdim, gdim, integral_type, r=r if num_cells > 1 else None) jacobi_code += "\n" else: error("Unhandled integral type: " + str(integral_type)) # After we have generated the element code for all facets we can # remove the unused transformations. common = [remove_unused(jacobi_code, trans_set)] # FIXME: After introduction of custom integrals, the common code # here is not really common anymore. Think about how to # restructure this function. # Add common code except for custom integrals if integral_type not in custom_integral_types: common += _tabulate_weights([quadrature_rules[p] for p in sorted(used_weights)]) # Add common code for updating tables name_map = ir["name_map"] tables = ir["unique_tables"] tables.update(affine_tables) # TODO: This is not populated anywhere, remove? common += _tabulate_psis(tables, used_psi_tables, name_map, used_nzcs, opt_par, integral_type, gdim) # Add special tabulation code for custom integral else: common += _evaluate_basis_at_quadrature_points(used_psi_tables, gdim, element_data, prefix, num_vertices, num_cells) # Reset the element tensor (array 'A' given as argument to # tabulate_tensor() by assembler) # Handle functionals. 
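    # NOTE: worked example (values are illustrative): for an interior-facet
    # bilinear form with P1 test and trial spaces on triangles, prim_idims is
    # [6, 6] (3 dofs per space, doubled for the macro element), so the flat
    # size computed below is 36 and the generated reset is roughly
    #     for (unsigned int r = 0; r < 36; r++)
    #       A[r] = 0.0;
    # For a functional (no arguments) only the single entry A[0] is reset.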
common += [f_comment("Reset values in the element tensor.")] if prim_idims == []: common += [f_assign(f_A(f_int(0)), f_float(0))] else: dim = functools.reduce(lambda v, u: v * u, prim_idims) common += f_loop([f_assign(f_A(f_r), f_float(0))], [(f_r, 0, dim)]) # Create the constant geometry declarations (only generated if # simplify expressions are enabled). geo_ops, geo_code = generate_aux_constants(geo_consts, f_G, f_const_double) if geo_code: common += [f_comment("Number of operations to compute geometry constants: %d." % geo_ops)] common += [format["declaration"](format["float declaration"], f_G(len(geo_consts)))] common += geo_code # Add comments. common += ["", f_comment("Compute element tensor using UFL quadrature representation")] common += [f_comment("Optimisations: %s" % ", ".join([str((k, opt_par[k])) for k in sorted(opt_par.keys())]))] for ops in operations: # Add geo ops count to integral ops count for writing info. if isinstance(ops[-1], int): ops[-1] += geo_ops return "\n".join(common) + "\n" + tensor_code def _generate_element_tensor(integrals, sets, optimise_parameters, gdim, tdim, generate_custom_facet_normal=False): "Construct quadrature code for element tensors." # Prefetch formats to speed up code generation. f_comment = format["comment"] f_ip = format["integration points"] f_I = format["ip constant"] f_loop = format["generate loop"] f_ip_coords = format["generate ip coordinates"] f_coords = format["coordinate_dofs"] f_double = format["float declaration"] f_decl = format["declaration"] f_X = format["ip coordinates"] f_C = format["conditional"] # Initialise return values. element_code = [] tensor_ops_count = 0 # TODO: KBO: The members_code was used when I generated the # load_table.h file which could load tables of basisfunction. This # feature has not been reimplemented. However, with the new design # where we only tabulate unique tables (and only non-zero entries) # it doesn't seem to be necessary. Should it be deleted? members_code = "" # We receive a dictionary {num_points: form,}. # Loop points and forms. for points, terms, functions, ip_consts, coordinate, conditionals in integrals: element_code += ["", f_comment("Loop quadrature points for integral.")] ip_code = [] num_ops = 0 # Generate code to compute coordinates if used. if coordinate: name, gdim, ip, r = coordinate element_code += ["", f_comment("Declare array to hold physical coordinate of quadrature point.")] element_code += [f_decl(f_double, f_X(points, gdim))] ops, coord_code = f_ip_coords(gdim, tdim, points, name, ip, r) ip_code += ["", f_comment("Compute physical coordinate of quadrature point, operations: %d." % ops)] ip_code += [coord_code] num_ops += ops # Update used psi tables and transformation set. sets[1].add(name) sets[3].add(f_coords(r)) # Generate code to compute function values. if functions: func_code, ops = _generate_functions(functions, sets) ip_code += func_code num_ops += ops # Generate code to compute conditionals (might depend on coordinates # and function values so put here). # TODO: Some conditionals might only depend on geometry so they # should be moved outside if possible. if conditionals: ip_code += [f_decl(f_double, f_C(len(conditionals)))] # Sort conditionals (need to in case of nested conditionals). reversed_conds = dict([(n, (o, e)) for e, (t, o, n) in conditionals.items()]) for num in range(len(conditionals)): name = format["conditional"](num) ops, expr = reversed_conds[num] ip_code += [f_comment("Compute conditional, operations: %d." 
% ops)] ip_code += [format["assign"](name, expr)] num_ops += ops # Generate code for ip constant declarations. ip_const_ops, ip_const_code = generate_aux_constants(ip_consts, f_I, format["assign"], True) num_ops += ip_const_ops if ip_const_code: ip_code += ["", f_comment("Number of operations to compute ip constants: %d" % ip_const_ops)] ip_code += [format["declaration"](format["float declaration"], f_I(len(ip_consts)))] ip_code += ip_const_code # Generate code to evaluate the element tensor. integral_code, ops = _generate_integral_code(points, terms, sets, optimise_parameters) num_ops += ops if points is None: quadrature_ops = "unknown" tensor_ops_count = "unknown" else: quadrature_ops = num_ops * points tensor_ops_count += quadrature_ops ip_code += integral_code element_code.append(f_comment ("Number of operations to compute element tensor for following IP loop = %s" % str(quadrature_ops))) # Generate code for custom facet normal if necessary if generate_custom_facet_normal: for line in ip_code: if "n_00" in line: ip_code = [format["facet_normal_custom"](gdim)] + ip_code break # Loop code over all IPs. if points == 0: element_code.append(f_comment("Only 1 integration point, omitting IP loop.")) element_code += ip_code elif points is None: num_points = "num_quadrature_points" element_code += f_loop(ip_code, [(f_ip, 0, num_points)]) else: element_code += f_loop(ip_code, [(f_ip, 0, points)]) return (element_code, members_code, tensor_ops_count) def _generate_functions(functions, sets): "Generate declarations for functions and code to compute values." f_comment = format["comment"] f_double = format["float declaration"] f_F = format["function value"] f_float = format["floating point"] f_decl = format["declaration"] f_r = format["free indices"][0] f_iadd = format["iadd"] f_loop = format["generate loop"] # Create the function declarations. code = ["", f_comment("Coefficient declarations.")] code += [f_decl(f_double, f_F(n), f_float(0)) for n in range(len(functions))] # Get sets. used_psi_tables = sets[1] used_nzcs = sets[2] # Sort functions after loop ranges. function_list = {} for key, val in functions.items(): if val[1] in function_list: function_list[val[1]].append(key) else: function_list[val[1]] = [key] total_ops = 0 # Loop ranges and get list of functions. for loop_range, list_of_functions in sorted(function_list.items()): function_expr = {} # Loop functions. func_ops = 0 for function in list_of_functions: # Get name and number. number, range_i, ops, psi_name, u_nzcs, ufl_element = functions[function] # Add name to used psi names and non zeros name to used_nzcs. used_psi_tables.add(psi_name) used_nzcs.update(u_nzcs) # TODO: This check can be removed for speed later. ffc_assert(number not in function_expr, "This is definitely not supposed to happen!") # Convert function (that might be a symbol) to a simple # string and save. function = str(function) function_expr[number] = function # Get number of operations to compute entry and add to # function operations count. func_ops += (ops + 1) * range_i # Add function operations to total count total_ops += func_ops code += ["", f_comment("Total number of operations to compute function values = %d" % func_ops)] # Sort the functions according to name and create loop to # compute the function values. lines = [f_iadd(f_F(n), function_expr[n]) for n in sorted(function_expr.keys())] code += f_loop(lines, [(f_r, 0, loop_range)]) # TODO: If loop_range == 1, this loop may be unneccessary. Not sure if it's safe to just skip it. 
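    # NOTE: 'function_list' above groups the coefficient functions by their
    # loop range so that a single "for (r = 0; r < range; r++)" block is
    # emitted per range.  With the standard library the grouping step could
    # equivalently be written as
    #     function_list = collections.defaultdict(list)
    #     for key, val in functions.items():
    #         function_list[val[1]].append(key)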
return code, total_ops def _generate_integral_code(points, terms, sets, optimise_parameters): "Generate code to evaluate the element tensor." # Prefetch formats to speed up code generation. f_comment = format["comment"] f_iadd = format["iadd"] f_A = format["element tensor"] f_loop = format["generate loop"] f_B = format["basis constant"] # Initialise return values. code = [] num_ops = 0 loops = {} # Extract sets. used_weights, used_psi_tables, used_nzcs, trans_set = sets # Loop terms and create code. for loop, (data, entry_vals) in sorted(terms.items()): # If we don't have any entry values, there's no need to # generate the loop. if not entry_vals: continue # Get data. t_set, u_weights, u_psi_tables, u_nzcs, basis_consts = data # If we have a value, then we also need to update the sets of # used variables. trans_set.update(t_set) used_weights.update(u_weights) used_psi_tables.update(u_psi_tables) used_nzcs.update(u_nzcs) # Generate code for basis constant declarations. basis_const_ops, basis_const_code = generate_aux_constants(basis_consts, f_B, format["assign"], True) decl_code = [] if basis_consts: decl_code = [format["declaration"](format["float declaration"], f_B(len(basis_consts)))] loops[loop] = [basis_const_ops, decl_code + basis_const_code] for entry, value, ops in entry_vals: # Compute number of operations to compute entry (add 1 # because of += in assignment). entry_ops = ops + 1 # Create comment for number of operations entry_ops_comment = f_comment("Number of operations to compute entry: %d" % entry_ops) entry_code = f_iadd(f_A(entry), value) loops[loop][0] += entry_ops loops[loop][1] += [entry_ops_comment, entry_code] # Write all the loops of basis functions. for loop, ops_lines in sorted(loops.items()): ops, lines = ops_lines prim_ops = functools.reduce(lambda i, j: i * j, [ops] + [l[2] for l in loop]) # Add number of operations for current loop to total count. num_ops += prim_ops code += ["", f_comment("Number of operations for primary indices: %d" % prim_ops)] code += f_loop(lines, loop) return code, num_ops def _tabulate_weights(quadrature_rules): "Generate table of quadrature weights." # Prefetch formats to speed up code generation. f_float = format["floating point"] f_table = format["static const float declaration"] f_sep = format["list separator"] f_weight = format["weight"] f_component = format["component"] f_group = format["grouping"] f_decl = format["declaration"] f_tensor = format["tabulate tensor"] f_comment = format["comment"] f_int = format["int"] code = ["", f_comment("Array of quadrature weights.")] # Loop tables of weights and create code. for weights, points in quadrature_rules: # FIXME: For now, raise error if we don't have weights. # We might want to change this later. ffc_assert(weights.any(), "No weights.") # Create name and value for weight. num_points = len(points) name = f_weight(num_points) value = f_float(weights[0]) if len(weights) > 1: name += f_component("", f_int(num_points)) value = f_tensor(weights) code += [f_decl(f_table, name, value)] # Tabulate the quadrature points (uncomment for different # parameters). 1) Tabulate the points as: p0, p1, p2, with p0 # = (x0, y0, z0) etc. Use f_float to format the value (enable # variable precision). formatted_points = [f_group(f_sep.join([f_float(val) for val in point])) for point in points] # Create comment. comment = "Quadrature points on the UFC reference element: " \ + f_sep.join(formatted_points) code += [f_comment(comment)] # 2) Tabulate the coordinates of the points p0, p1, p2 etc. 
# X: x0, x1, x2 # Y: y0, y1, y2 # Z: z0, z1, z2 # comment = "Quadrature coordinates on the UFC reference element: " # code += [format["comment"](comment)] # All points have the same number of coordinates. # num_coord = len(points[0]) # All points have x-coordinates. # xs = [f_float(p[0]) for p in points] # comment = "X: " + f_sep.join(xs) # code += [format["comment"](comment)] # ys = [] # zs = [] # Tabulate y-coordinate if we have 2 or more coordinates. # if num_coord >= 2: # ys = [f_float(p[1]) for p in points] # comment = "Y: " + f_sep.join(ys) # code += [format["comment"](comment)] # Only tabulate z-coordinate if we have 3 coordinates. # if num_coord == 3: # zs = [f_float(p[2]) for p in points] # comment = "Z: " + f_sep.join(zs) # code += [format["comment"](comment)] code += [""] return code def _tabulate_psis(tables, used_psi_tables, inv_name_map, used_nzcs, optimise_parameters, integral_type, gdim): "Tabulate values of basis functions and their derivatives at quadrature points." # Prefetch formats to speed up code generation. f_comment = format["comment"] f_table = format["static const float declaration"] f_component = format["component"] f_const_uint = format["static const uint declaration"] f_nzcolumns = format["nonzero columns"] f_list = format["list"] f_decl = format["declaration"] f_tensor = format["tabulate tensor"] f_new_line = format["new line"] f_int = format["int"] # FIXME: Check if we can simplify the tabulation code = [] code += [f_comment("Values of basis functions at quadrature points.")] # Get list of non zero columns, if we ignore ones, ignore columns # with one component. if optimise_parameters["ignore ones"]: nzcs = [] for key, val in sorted(inv_name_map.items()): # Check if we have a table of ones or if number of # non-zero columns is larger than one. if val[1] and len(val[1][1]) > 1 or not val[3]: nzcs.append(val[1]) else: nzcs = [val[1] for key, val in sorted(inv_name_map.items()) if val[1]] # TODO: Do we get arrays that are not unique? new_nzcs = [] for nz in nzcs: # Only get unique arrays. if nz not in new_nzcs: new_nzcs.append(nz) # Construct name map. name_map = {} if inv_name_map: for name in sorted(inv_name_map): if inv_name_map[name][0] in name_map: name_map[inv_name_map[name][0]].append(name) else: name_map[inv_name_map[name][0]] = [name] # Loop items in table and tabulate. for name in sorted(used_psi_tables): # Only proceed if values are still used (if they're not # remapped). vals = tables[name] if vals is not None: # Add declaration to name. ip, dofs = numpy.shape(vals) decl_name = f_component(name, [ip, dofs]) # Generate array of values. value = f_tensor(vals) code += [f_decl(f_table, decl_name, f_new_line + value), ""] # Tabulate non-zero indices. if optimise_parameters["eliminate zeros"]: if name in sorted(name_map): for n in name_map[name]: if inv_name_map[n][1] and inv_name_map[n][1] in new_nzcs: i, cols = inv_name_map[n][1] if i not in used_nzcs: continue code += [f_comment("Array of non-zero columns")] value = f_list([f_int(c) for c in list(cols)]) name_col = f_component(f_nzcolumns(i), len(cols)) code += [f_decl(f_const_uint, name_col, value), ""] # Remove from list of columns. 
new_nzcs.remove(inv_name_map[n][1]) return code def _evaluate_basis_at_quadrature_points(psi_tables, gdim, element_data, form_prefix, num_vertices, num_cells): "Generate code for calling evaluate basis (derivatives) at quadrature points" # Prefetch formats to speed up code generation f_comment = format["comment"] f_declaration = format["declaration"] f_static_array = format["static array"] f_loop = format["generate loop"] f_eval_basis_decl = format["eval_basis_decl"] f_eval_basis_init = format["eval_basis_init"] f_eval_basis = format["eval_basis"] f_eval_basis_copy = format["eval_basis_copy"] f_eval_derivs_decl = format["eval_derivs_decl"] f_eval_derivs_init = format["eval_derivs_init"] f_eval_derivs = format["eval_derivs"] f_eval_derivs_copy = format["eval_derivs_copy"] code = [] # Extract prefixes for tables prefixes = sorted(set(table.split("_")[0] for table in psi_tables)) # Use lower case prefix for form name form_prefix = form_prefix.lower() # The psi_tables used by the quadrature code are for scalar # components of specific derivatives, while tabulate_basis_all and # tabulate_basis_derivatives_all return data including all # possible components and derivatives. We therefore need to # iterate over prefixes (= elements), call tabulate_basis_all or # tabulate_basis_derivatives all, and then extract the relevant # data and fill in the psi_tables. We therefore need to extract # for each prefix, which tables need to be filled in. # For each unique prefix, check which derivatives and components # are used used_derivatives_and_components = {} for prefix in prefixes: used_derivatives_and_components[prefix] = {} for table in psi_tables: if prefix not in table: continue # Check for derivative if "_D" in table: d = table.split("_D")[1].split("_")[0] n = sum([int(_d) for _d in d]) # FIXME: Assume at most 9 derivatives... 
else: n = 0 # Check for component if "_C" in table: c = table.split("_C")[1].split("_")[0] else: c = None # Note that derivative has been used if n not in used_derivatives_and_components[prefix]: used_derivatives_and_components[prefix][n] = set() used_derivatives_and_components[prefix][n].add(c) # Generate code for setting quadrature weights code += [f_comment("Set quadrature weights")] code += [f_declaration("const double*", "W", "quadrature_weights")] code += [""] # Generate code for calling evaluate_basis_[derivatives_]all for prefix in prefixes: # Get element data for current element counter = int(prefix.split("FE")[1]) space_dim = element_data[counter]["num_element_dofs"] value_size = element_data[counter]["physical_value_size"] element_classname = element_data[counter]["classname"] # Iterate over derivative orders for n, components in sorted_by_key(used_derivatives_and_components[prefix]): # components are a set and need to be sorted components = sorted(components) # Code for evaluate_basis_all (n = 0 which means it's not # a derivative) if n == 0: code += [f_comment("--- Evaluation of basis functions ---")] code += [""] # Compute variables for code generation eval_stride = value_size eval_size = space_dim * eval_stride table_size = num_cells * space_dim # Iterate over components and initialize tables for c in components: # Set name of table if c is None: table_name = prefix else: table_name = prefix + "_C%s" % c # Generate code for declaration of table code += [f_comment("Create table %s for basis function values on all cells" % table_name)] code += [f_eval_basis_decl % {"table_name": table_name}] code += [f_eval_basis_init % {"table_name": table_name, "table_size": table_size}] code += [""] # Iterate over cells in macro element and evaluate basis for cell_number in range(num_cells): # Compute variables for code generation eval_name = "%s_values_%d" % (prefix, cell_number) table_offset = cell_number * space_dim vertex_offset = cell_number * num_vertices * gdim # Generate block of code for loop block = [] # Generate code for calling evaluate_basis_all block += [f_eval_basis % {"classname": element_classname, "eval_name": eval_name, "gdim": gdim, "vertex_offset": vertex_offset}] # Iterate over components and extract values for c in components: # Set name of table and component offset if c is None: table_name = prefix eval_offset = 0 else: table_name = prefix + "_C%s" % c eval_offset = int(c) # Generate code for copying values block += [""] block += [f_eval_basis_copy % {"table_name": table_name, "eval_name": eval_name, "eval_stride": eval_stride, "eval_offset": eval_offset, "space_dim": space_dim, "table_offset": table_offset}] # Generate code code += [f_comment("Evaluate basis functions on cell %d" % cell_number)] code += [f_static_array("double", eval_name, eval_size)] code += f_loop(block, [("ip", 0, "num_quadrature_points")]) code += [""] # Code for evaluate_basis_derivatives_all (derivative of degree n > 0) else: code += [f_comment("--- Evaluation of basis function derivatives of order %d ---" % n)] code += [""] # FIXME: We extract values for all possible # derivatives, even # FIXME: if not all are used. (For components, we # extract only # FIXME: components that are actually used.) This may # be optimized # FIXME: but the extra cost is likely small. 
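                # Illustrative sketch (stand-alone helper, not UFL's
                # compute_derivative_tuples): the block below enumerates all
                # order-n derivative tuples in gdim dimensions and then skips
                # symmetric duplicates (d^2/dxdy == d^2/dydx).
                def _derivative_names_sketch(n, gdim):
                    import itertools
                    names = ["".join(str(i) for i in t)
                             for t in itertools.product(range(gdim), repeat=n)]
                    unique = sorted(set("".join(sorted(nm)) for nm in names))
                    return names, unique
                # e.g. n=2, gdim=2 gives names ['00', '01', '10', '11'] and
                # the symmetry-reduced set ['00', '01', '11'].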
# Get derivative tuples __, deriv_tuples = compute_derivative_tuples(n, gdim) # Generate names for derivatives derivs = ["".join(str(_d) for _d in d) for d in deriv_tuples] # Compute variables for code generation eval_stride = value_size * len(derivs) eval_size = space_dim * eval_stride table_size = num_cells * space_dim # Iterate over derivatives and initialize tables seen_derivs = set() for d in derivs: # Skip derivative if seen before (d^2/dxdy = d^2/dydx) if d in seen_derivs: continue seen_derivs.add(d) # Iterate over components for c in components: # Set name of table if c is None: table_name = prefix + "_D%s" % d else: table_name = prefix + "_C%s_D%s" % (c, d) # Generate code for declaration of table code += [f_comment("Create table %s for basis function derivatives on all cells" % table_name)] code += [(f_eval_derivs_decl % {"table_name": table_name})] code += [(f_eval_derivs_init % {"table_name": table_name, "table_size": table_size})] code += [""] # Iterate over cells (in macro element) for cell_number in range(num_cells): # Compute variables for code generation eval_name = "%s_dvalues_%d_%d" % (prefix, n, cell_number) table_offset = cell_number * space_dim vertex_offset = cell_number * num_vertices * gdim # Generate block of code for loop block = [] # Generate code for calling evaluate_basis_derivatives_all block += [f_eval_derivs % {"classname": element_classname, "eval_name": eval_name, "gdim": gdim, "vertex_offset": vertex_offset, "n": n}] # Iterate over derivatives and extract values seen_derivs = set() for i, d in enumerate(derivs): # Skip derivative if seen before (d^2/dxdy = d^2/dydx) if d in seen_derivs: continue seen_derivs.add(d) # Iterate over components for c in components: # Set name of table and component offset if c is None: table_name = prefix + "_D%s" % d eval_offset = i else: table_name = prefix + "_C%s_D%s" % (c, d) eval_offset = len(derivs) * int(c) + i # Generate code for copying values block += [""] block += [(f_eval_derivs_copy % {"table_name": table_name, "eval_name": eval_name, "eval_stride": eval_stride, "eval_offset": eval_offset, "space_dim": space_dim, "table_offset": table_offset})] # Generate code code += [f_comment("Evaluate basis function derivatives on cell %d" % cell_number)] code += [f_static_array("double", eval_name, eval_size)] code += f_loop(block, [("ip", 0, "num_quadrature_points")]) code += [""] # Add newline code += [""] return code ffc-2019.1.0.post0/ffc/quadrature/quadratureoptimization.py000066400000000000000000000251621346026536000236720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Marie E. 
Rognes, 2013 # Modified by Martin Sandve Alnæs, 2013-2014 from ufl.utils.sorting import sorted_by_key # FFC modules from ffc.log import info, error from ffc.quadrature.cpp import format from ffc.quadrature.symbolics import optimise_code, BASIS, IP, GEO from ffc.quadrature.symbolics import create_product, create_sum, create_symbol, create_fraction def optimize_integral_ir(ir, parameters): "Compute optimized intermediate representation of integral." # FIXME: input argument "parameters" has been added to optimize_integral_ir # FIXME: which shadows a local parameter # Get integral type and optimization parameters integral_type = ir["integral_type"] parameters = ir["optimise_parameters"] # Check whether we should optimize if parameters["optimisation"]: # Get parameters integrals = ir["trans_integrals"] integral_type = ir["integral_type"] num_facets = ir["num_facets"] num_vertices = ir["num_vertices"] geo_consts = ir["geo_consts"] psi_tables_map = ir["psi_tables_map"] # Optimize based on integral type if integral_type == "cell": info("Optimising expressions for cell integral") if parameters["optimisation"] in ("precompute_ip_const", "precompute_basis_const"): _precompute_expressions(integrals, geo_consts, parameters["optimisation"]) else: _simplify_expression(integrals, geo_consts, psi_tables_map) elif integral_type == "exterior_facet": for i in range(num_facets): info("Optimising expressions for facet integral %d" % i) if parameters["optimisation"] in ("precompute_ip_const", "precompute_basis_const"): _precompute_expressions(integrals[i], geo_consts, parameters["optimisation"]) else: _simplify_expression(integrals[i], geo_consts, psi_tables_map) elif integral_type == "interior_facet": for i in range(num_facets): for j in range(num_facets): info("Optimising expressions for facet integral (%d, %d)" % (i, j)) if parameters["optimisation"] in ("precompute_ip_const", "precompute_basis_const"): _precompute_expressions(integrals[i][j], geo_consts, parameters["optimisation"]) else: _simplify_expression(integrals[i][j], geo_consts, psi_tables_map) elif integral_type == "vertex": for i in range(num_vertices): info("Optimising expressions for poin integral %d" % i) if parameters["optimisation"] in ("precompute_ip_const", "precompute_basis_const"): _precompute_expressions(integrals[i], geo_consts, parameters["optimisation"]) else: _simplify_expression(integrals[i], geo_consts, psi_tables_map) else: error("Unhandled domain type: " + str(integral_type)) return ir def _simplify_expression(integral, geo_consts, psi_tables_map): for points, terms, functions, ip_consts, coordinate, conditionals in integral: # NOTE: sorted is needed to pass the regression tests on the buildbots # but it might be inefficient for speed. # A solution could be to only compare the output of evaluating the # integral, not the header files. for loop, (data, entry_vals) in sorted_by_key(terms): t_set, u_weights, u_psi_tables, u_nzcs, basis_consts = data new_entry_vals = [] psi_tables = set() # NOTE: sorted is needed to pass the regression tests on the buildbots # but it might be inefficient for speed. # A solution could be to only compare the output of evaluating the # integral, not the header files. 
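            # Illustrative sketch (not the FFC API; 'G[n]' is used purely for
            # illustration): optimise_code() below lifts invariant
            # subexpressions into the shared 'geo_consts'/'ip_consts' tables,
            # where each unique expression is assigned the next free number.
            # A minimal model of that interning step:
            def _intern_sketch(expr, const_dict, prefix="G"):
                if expr not in const_dict:
                    const_dict[expr] = len(const_dict)
                return "%s[%d]" % (prefix, const_dict[expr])
            # e.g. _intern_sketch("det_J*W0", {}) -> "G[0]"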
for entry, val, ops in sorted(entry_vals): value = optimise_code(val, ip_consts, geo_consts, t_set) # Check if value is zero if value.val: new_entry_vals.append((entry, value, value.ops())) psi_tables.update(set([psi_tables_map[b] for b in value.get_unique_vars(BASIS)])) terms[loop][0][2] = psi_tables terms[loop][1] = new_entry_vals def _precompute_expressions(integral, geo_consts, optimisation): for points, terms, functions, ip_consts, coordinate, conditionals in integral: for loop, (data, entry_vals) in sorted_by_key(terms): t_set, u_weights, u_psi_tables, u_nzcs, basis_consts = data new_entry_vals = [] for entry, val, ops in entry_vals: value = _extract_variables(val, basis_consts, ip_consts, geo_consts, t_set, optimisation) # Check if value is zero if value.val: new_entry_vals.append((entry, value, value.ops())) terms[loop][1] = new_entry_vals def _extract_variables(val, basis_consts, ip_consts, geo_consts, t_set, optimisation): f_G = format["geometry constant"] f_I = format["ip constant"] f_B = format["basis constant"] if val._prec == 0: return val elif val._prec == 1: if val.base_expr is None: return val new_base = _extract_variables(val.base_expr, basis_consts, ip_consts, geo_consts, t_set, optimisation) new_sym = create_symbol(val.v, val.t, new_base, val.base_op) if new_sym.t == BASIS: return _reduce_expression(new_sym, [], basis_consts, f_B, True) elif new_sym.t == IP: return _reduce_expression(new_sym, [], ip_consts, f_I, True) elif new_sym.t == GEO: return _reduce_expression(new_sym, [], geo_consts, f_G, True) # First handle child classes of product and sum. elif val._prec in (2, 3): new_vars = [] for v in val.vrs: new_vars.append(_extract_variables(v, basis_consts, ip_consts, geo_consts, t_set, optimisation)) if val._prec == 2: new_val = create_product(new_vars) if val._prec == 3: new_val = create_sum(new_vars) elif val._prec == 4: num = _extract_variables(val.num, basis_consts, ip_consts, geo_consts, t_set, optimisation) denom = _extract_variables(val.denom, basis_consts, ip_consts, geo_consts, t_set, optimisation) return create_fraction(num, denom) else: error("Unknown symbolic type: %s" % repr(val)) # Sort variables of product and sum. 
b_c, i_c, g_c = [], [], [] for v in new_val.vrs: if v.t == BASIS: if optimisation == "precompute_basis_const": b_c.append(v) elif v.t == IP: i_c.append(v) else: g_c.append(v) vrs = new_val.vrs[:] for v in g_c + i_c + b_c: vrs.remove(v) i_c.extend(_reduce_expression(new_val, g_c, geo_consts, f_G)) vrs.extend(_reduce_expression(new_val, i_c, ip_consts, f_I)) vrs.extend(_reduce_expression(new_val, b_c, basis_consts, f_B)) # print "b_c: " # for b in b_c: # print b # print "basis" # for k,v in basis_consts.items(): # print "k: ", k # print "v: ", v # print "geo" # for k,v in geo_consts.items(): # print "k: ", k # print "v: ", v # print "ret val: ", val if len(vrs) > 1: if new_val._prec == 2: new_object = create_product(vrs) elif new_val._prec == 3: new_object = create_sum(vrs) else: error("Must have product or sum here: %s" % repr(new_val)) if new_object.t == BASIS: if optimisation == "precompute_ip_const": return new_object elif optimisation == "precompute_basis_const": return _reduce_expression(new_object, [], basis_consts, f_B, True) elif new_object.t == IP: return _reduce_expression(new_object, [], ip_consts, f_I, True) elif new_object.t == GEO: return _reduce_expression(new_object, [], geo_consts, f_G, True) return vrs[0] # if new_val._prec == 2: # if len(vrs) > 1: # new_prod = create_product(vrs) # if new_prod.t == BASIS: # if optimisation == "precompute_ip_const": # return new_prod # elif optimisation == "precompute_basis_const": # return _reduce_expression(new_prod, [], basis_consts, f_B, True) # elif new_prod.t == IP: # return _reduce_expression(new_prod, [], ip_consts, f_I, True) # elif new_prod.t == GEO: # return _reduce_expression(new_prod, [], geo_consts, f_G, True) # return vrs[0] # elif new_val._prec == 3: # if len(vrs) > 1: # new_sum = create_sum(vrs) # if new_sum.t == BASIS: # return new_sum # return _reduce_expression(new_sum, [], basis_consts, f_B, True) # elif new_sum.t == IP: # return _reduce_expression(new_sum, [], ip_consts, f_I, True) # elif new_sum.t == GEO: # return _reduce_expression(new_sum, [], geo_consts, f_G, True) # return vrs[0] # else: # error("Must have product or sum here: %s" % repr(new_val)) def _reduce_expression(expr, symbols, const_dict, f_name, use_expr_type=False): if use_expr_type: if expr not in const_dict: const_dict[expr] = len(const_dict) return create_symbol(f_name(const_dict[expr]), expr.t) # Only something to be done if we have more than one symbol. if len(symbols) > 1: sym_type = symbols[0].t # Create new symbol. if expr._prec == 2: new_sym = create_product(symbols) elif expr._prec == 3: new_sym = create_sum(symbols) if new_sym not in const_dict: const_dict[new_sym] = len(const_dict) s = create_symbol(f_name(const_dict[new_sym]), sym_type) return [s] return symbols ffc-2019.1.0.post0/ffc/quadrature/quadraturerepresentation.py000066400000000000000000000241041346026536000242010ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Quadrature representation class for UFL" # Copyright (C) 2009-2017 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Anders Logg 2009, 2014 # Modified by Martin Sandve Alnæs 2013-2017 # Python modules import collections # UFL modules from ufl.classes import Integral from ufl.sorting import sorted_expr_sum from ufl import custom_integral_types # FFC modules from ffc.log import ffc_assert, info, error from ffc.utils import product from ffc.fiatinterface import create_element from ffc.quadrature.cpp import set_float_formatting from ffc.representationutils import initialize_integral_ir from ffc.quadrature.tabulate_basis import tabulate_basis from ffc.quadrature.parameters import parse_optimise_parameters from ffc.quadrature.deprecation import issue_deprecation_warning from ffc.quadrature.quadraturetransformer import QuadratureTransformer from ffc.quadrature.optimisedquadraturetransformer import QuadratureTransformerOpt def compute_integral_ir(itg_data, form_data, form_id, element_numbers, classnames, parameters): "Compute intermediate represention of integral." info("Computing quadrature representation") issue_deprecation_warning() set_float_formatting(parameters["precision"]) # Initialise representation ir = initialize_integral_ir("quadrature", itg_data, form_data, form_id) # Create and save the optisation parameters. ir["optimise_parameters"] = parse_optimise_parameters(parameters, itg_data) # Sort integrals into a dict with quadrature degree and rule as # key sorted_integrals = sort_integrals(itg_data.integrals, itg_data.metadata["quadrature_degree"], itg_data.metadata["quadrature_rule"]) # Tabulate quadrature points and basis function values in these # points integrals_dict, psi_tables, quadrature_rules = \ tabulate_basis(sorted_integrals, form_data, itg_data) # Save tables for quadrature weights and points ir["quadrature_rules"] = quadrature_rules # Create dimensions of primary indices, needed to reset the # argument 'A' given to tabulate_tensor() by the assembler. ir["prim_idims"] = [create_element(ufl_element).space_dimension() for ufl_element in form_data.argument_elements] # Select transformer if ir["optimise_parameters"]["optimisation"]: QuadratureTransformerClass = QuadratureTransformerOpt else: QuadratureTransformerClass = QuadratureTransformer # Create transformer transformer = QuadratureTransformerClass(psi_tables, quadrature_rules, itg_data.domain.geometric_dimension(), itg_data.domain.topological_dimension(), ir["entitytype"], form_data.function_replace_map, ir["optimise_parameters"]) # Transform integrals. cell = itg_data.domain.ufl_cell() ir["trans_integrals"] = _transform_integrals_by_type(ir, transformer, integrals_dict, itg_data.integral_type, cell) # Save tables populated by transformer ir["name_map"] = transformer.name_map ir["unique_tables"] = transformer.unique_tables # Basis values? # Save tables map, to extract table names for optimisation option # -O. ir["psi_tables_map"] = transformer.psi_tables_map ir["additional_includes_set"] = transformer.additional_includes_set # Insert empty data which will be populated if optimization is # turned on ir["geo_consts"] = {} # Extract element data for psi_tables, needed for runtime # quadrature. This is used by integral type custom_integral. ir["element_data"] = _extract_element_data(transformer.element_map, classnames) return ir def sort_integrals(integrals, default_scheme, default_degree): """Sort and accumulate integrals according to the number of quadrature points needed per axis. 
All integrals should be over the same (sub)domain. """ if not integrals: return {} # Get domain properties from first integral, assuming all are the # same integral_type = integrals[0].integral_type() subdomain_id = integrals[0].subdomain_id() domain = integrals[0].ufl_domain() ffc_assert(all(integral_type == itg.integral_type() for itg in integrals), "Expecting only integrals of the same type.") ffc_assert(all(domain == itg.ufl_domain() for itg in integrals), "Expecting only integrals on the same domain.") ffc_assert(all(subdomain_id == itg.subdomain_id() for itg in integrals), "Expecting only integrals on the same subdomain.") sorted_integrands = collections.defaultdict(list) for integral in integrals: # Override default degree and rule if specified in integral # metadata integral_metadata = integral.metadata() or {} degree = integral_metadata.get("quadrature_degree", default_degree) scheme = integral_metadata.get("quadrature_rule", default_scheme) assert isinstance(degree, int) # Add integrand to dictionary according to degree and scheme. rule = (scheme, degree) sorted_integrands[rule].append(integral.integrand()) # Create integrals from accumulated integrands. sorted_integrals = {} for rule, integrands in list(sorted_integrands.items()): # Summing integrands in a canonical ordering defined by UFL integrand = sorted_expr_sum(integrands) sorted_integrals[rule] = Integral(integrand, integral_type, domain, subdomain_id, {}, None) return sorted_integrals def _transform_integrals_by_type(ir, transformer, integrals_dict, integral_type, cell): num_facets = cell.num_facets() num_vertices = cell.num_vertices() if integral_type == "cell": # Compute transformed integrals. info("Transforming cell integral") transformer.update_cell() terms = _transform_integrals(transformer, integrals_dict, integral_type) elif integral_type == "exterior_facet": # Compute transformed integrals. terms = [None]*num_facets for i in range(num_facets): info("Transforming exterior facet integral %d" % i) transformer.update_facets(i, None) terms[i] = _transform_integrals(transformer, integrals_dict, integral_type) elif integral_type == "interior_facet": # Compute transformed integrals. terms = [[None]*num_facets for i in range(num_facets)] for i in range(num_facets): for j in range(num_facets): info("Transforming interior facet integral (%d, %d)" % (i, j)) transformer.update_facets(i, j) terms[i][j] = _transform_integrals(transformer, integrals_dict, integral_type) elif integral_type == "vertex": # Compute transformed integrals. terms = [None]*num_vertices for i in range(num_vertices): info("Transforming vertex integral (%d)" % i) transformer.update_vertex(i) terms[i] = _transform_integrals(transformer, integrals_dict, integral_type) elif integral_type in custom_integral_types: # Compute transformed integrals: same as for cell integrals info("Transforming custom integral") transformer.update_cell() terms = _transform_integrals(transformer, integrals_dict, integral_type) else: error("Unhandled domain type: " + str(integral_type)) return terms def _transform_integrals(transformer, integrals, integral_type): "Transform integrals from UFL expression to quadrature representation." 
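    # NOTE: each entry appended below is the 6-tuple
    #   (points, terms, function_data, ip_consts, coordinate, conditionals)
    # that _generate_element_tensor() later unpacks; 'ip_consts' starts out
    # empty and is only filled in by the optional optimisation pass.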
transformed_integrals = [] for point, integral in sorted(integrals.items()): transformer.update_points(point) terms = transformer.generate_terms(integral.integrand(), integral_type) transformed_integrals.append((point, terms, transformer.function_data, {}, transformer.coordinate, transformer.conditionals)) return transformed_integrals def _extract_element_data(element_map, classnames): "Extract element data for psi_tables" # Iterate over map element_data = {} for elements in element_map.values(): for ufl_element, counter in elements.items(): # Create corresponding FIAT element fiat_element = create_element(ufl_element) # Compute value size value_size = product(ufl_element.value_shape()) # Get element classname element_classname = classnames["finite_element"][ufl_element] # Store data element_data[counter] = {"physical_value_size": value_size, "num_element_dofs": fiat_element.space_dimension(), "classname": element_classname} return element_data ffc-2019.1.0.post0/ffc/quadrature/quadraturetransformer.py000066400000000000000000001104021346026536000234760ustar00rootroot00000000000000# -*- coding: utf-8 -*- "QuadratureTransformer for quadrature code generation to translate UFL expressions." # Copyright (C) 2009-2011 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Peter Brune 2009 # Modified by Anders Logg 2009, 2013 # Modified by Lizao Li, 2015, 2016 # UFL common. from ufl.utils.sorting import sorted_by_key from ufl.measure import custom_integral_types, point_integral_types # UFL Classes. from ufl.classes import IntValue from ufl.classes import FloatValue from ufl.classes import Coefficient from ufl.classes import Operator # FFC modules. from ffc.log import error, ffc_assert from ffc.quadrature.cpp import format # Utility and optimisation functions for quadraturegenerator. from ffc.quadrature.quadraturetransformerbase import QuadratureTransformerBase from ffc.quadrature.quadratureutils import create_permutations from ffc.quadrature.reduce_operations import operation_count from ffc.quadrature.symbolics import IP def firstkey(d): return next(iter(d)) class QuadratureTransformer(QuadratureTransformerBase): "Transform UFL representation to quadrature code." def __init__(self, *args): # Initialise base class. QuadratureTransformerBase.__init__(self, *args) # ------------------------------------------------------------------------- # Start handling UFL classes. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # AlgebraOperators (algebra.py). # ------------------------------------------------------------------------- def sum(self, o, *operands): # print("Visiting Sum: " + "\noperands: \n" + "\n".join(map(repr, operands))) # Prefetch formats to speed up code generation. 
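        # NOTE: the loop below first collects the code for every entry key and
        # then collapses duplicate summands into a single pre-multiplied term
        # to keep the generated code small, e.g. ["x*y", "x*y", "z"] becomes
        # roughly "2*x*y + z".  The hand-written counting dict is equivalent
        # to collections.Counter(value).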
f_group = format["grouping"] f_add = format["add"] f_mult = format["multiply"] f_float = format["floating point"] code = {} # Loop operands that has to be summed and sort according to map (j,k). for op in operands: # If entries does already exist we can add the code, otherwise just # dump them in the element tensor. for key, val in sorted_by_key(op): if key in code: code[key].append(val) else: code[key] = [val] # Add sums and group if necessary. for key, val in sorted_by_key(code): # Exclude all zero valued terms from sum value = [v for v in val if v is not None] if len(value) > 1: # NOTE: Since we no longer call expand_indices, the following # is needed to prevent the code from exploding for forms like # HyperElasticity duplications = {} for val in value: if val in duplications: duplications[val] += 1 continue duplications[val] = 1 # Add a product for each term that has duplicate code expressions = [] for expr, num_occur in sorted_by_key(duplications): if num_occur > 1: # Pre-multiply expression with number of occurrences expressions.append(f_mult([f_float(num_occur), expr])) continue # Just add expression if there is only one expressions.append(expr) ffc_assert(expressions, "Where did the expressions go?") if len(expressions) > 1: code[key] = f_group(f_add(expressions)) continue code[key] = expressions[0] else: # Check for zero valued sum and delete from code # This might result in returning an empty dict, but that should # be interpreted as zero by other handlers. if not value: del code[key] continue code[key] = value[0] return code def product(self, o, *operands): # print("Visiting Product with operands: \n" + "\n".join(map(repr,operands))) # Prefetch formats to speed up code generation. f_mult = format["multiply"] permute = [] not_permute = [] # Sort operands in objects that needs permutation and objects that does not. for op in operands: # If we get an empty dict, something was zero and so is the product. if not op: return {} if len(op) > 1 or (op and firstkey(op) != ()): permute.append(op) elif op and firstkey(op) == (): not_permute.append(op[()]) # Create permutations. # print("\npermute: " + repr(permute)) # print("\nnot_permute: " + repr(not_permute)) permutations = create_permutations(permute) # print("\npermutations: " + repr(permutations)) # Create code. code = {} if permutations: for key, val in sorted(permutations.items()): # Sort key in order to create a unique key. l = sorted(key) # noqa: E741 # Loop products, don't multiply by '1' and if we encounter a None the product is zero. # TODO: Need to find a way to remove and J_inv00 terms that might # disappear as a consequence of eliminating a zero valued term value = [] zero = False for v in val + not_permute: if v is None: ffc_assert(tuple(l) not in code, "This key should not be in the code.") code[tuple(l)] = None zero = True break elif not v: print("v: '%s'" % repr(v)) error("should not happen") elif v == "1": pass else: value.append(v) if not value: value = ["1"] if zero: code[tuple(l)] = None else: code[tuple(l)] = f_mult(value) else: # Loop products, don't multiply by '1' and if we encounter a None the product is zero. # TODO: Need to find a way to remove terms from 'used sets' that might # disappear as a consequence of eliminating a zero valued term value = [] for v in not_permute: if v is None: code[()] = None return code elif not v: print("v: '%s'" % repr(v)) error("should not happen") elif v == "1": pass else: value.append(v) # We did have values, but they might have been all ones. 
if value == [] and not_permute != []: code[()] = f_mult(["1"]) else: code[()] = f_mult(value) return code def division(self, o, *operands): # print("Visiting Division with operands: \n" + "\n".join(map(repr,operands))) # Prefetch formats to speed up code generation. f_div = format["div"] f_grouping = format["grouping"] ffc_assert(len(operands) == 2, "Expected exactly two operands (numerator and denominator): " + repr(operands)) # Get the code from the operands. numerator_code, denominator_code = operands # TODO: Are these safety checks needed? Need to check for None? ffc_assert(() in denominator_code and len(denominator_code) == 1, "Only support function type denominator: " + repr(denominator_code)) code = {} # Get denominator and create new values for the numerator. denominator = denominator_code[()] ffc_assert(denominator is not None, "Division by zero!") for key, val in numerator_code.items(): # If numerator is None the fraction is also None if val is None: code[key] = None # If denominator is '1', just return numerator elif denominator == "1": code[key] = val # Create fraction and add to code else: code[key] = f_div(val, f_grouping(denominator)) return code def power(self, o): # print("\n\nVisiting Power: " + repr(o)) # Get base and exponent. base, expo = o.ufl_operands # Visit base to get base code. base_code = self.visit(base) # TODO: Are these safety checks needed? Need to check for None? ffc_assert(() in base_code and len(base_code) == 1, "Only support function type base: " + repr(base_code)) # Get the base code. val = base_code[()] # Handle different exponents if isinstance(expo, IntValue): return {(): format["power"](val, expo.value())} elif isinstance(expo, FloatValue): return {(): format["std power"](val, format["floating point"](expo.value()))} elif isinstance(expo, (Coefficient, Operator)): exp = self.visit(expo) return {(): format["std power"](val, exp[()])} else: error("power does not support this exponent: " + repr(expo)) def abs(self, o, *operands): # print("\n\nVisiting Abs: " + repr(o) + "with operands: " + "\n".join(map(repr,operands))) # Prefetch formats to speed up code generation. f_abs = format["absolute value"] # TODO: Are these safety checks needed? Need to check for None? ffc_assert(len(operands) == 1 and () in operands[0] and len(operands[0]) == 1, "Abs expects one operand of function type: " + repr(operands)) # Take absolute value of operand. return {(): f_abs(operands[0][()])} def min_value(self, o, *operands): f_min = format["min value"] return {(): f_min(operands[0][()], operands[1][()])} def max_value(self, o, *operands): f_max = format["max value"] return {(): f_max(operands[0][()], operands[1][()])} # ------------------------------------------------------------------------- # Condition, Conditional (conditional.py). # ------------------------------------------------------------------------- def not_condition(self, o, *operands): # This is a Condition but not a BinaryCondition, and the operand will be another Condition # Get condition expression and do safety checks. # Might be a bit too strict? cond, = operands ffc_assert(len(cond) == 1 and firstkey(cond) == (), "Condition for NotCondition should only be one function: " + repr(cond)) return {(): format["not"](cond[()])} def binary_condition(self, o, *operands): # Get LHS and RHS expressions and do safety checks. # Might be a bit too strict? 
lhs, rhs = operands ffc_assert(len(lhs) == 1 and firstkey(lhs) == (), "LHS of Condition should only be one function: " + repr(lhs)) ffc_assert(len(rhs) == 1 and firstkey(rhs) == (), "RHS of Condition should only be one function: " + repr(rhs)) # Map names from UFL to cpp.py. name_map = {"==": "is equal", "!=": "not equal", "<": "less than", ">": "greater than", "<=": "less equal", ">=": "greater equal", "&&": "and", "||": "or"} # Get values and test for None l_val = lhs[()] r_val = rhs[()] if l_val is None: l_val = format["float"](0.0) if r_val is None: r_val = format["float"](0.0) return {(): format["grouping"](l_val + format[name_map[o._name]] + r_val)} def conditional(self, o, *operands): # Get condition and return values; and do safety check. cond, true, false = operands ffc_assert(len(cond) == 1 and firstkey(cond) == (), "Condtion should only be one function: " + repr(cond)) ffc_assert(len(true) == 1 and firstkey(true) == (), "True value of Condtional should only be one function: " + repr(true)) ffc_assert(len(false) == 1 and firstkey(false) == (), "False value of Condtional should only be one function: " + repr(false)) # Get values and test for None t_val = true[()] f_val = false[()] if t_val is None: t_val = format["float"](0.0) if f_val is None: f_val = format["float"](0.0) # Create expression for conditional expr = format["evaluate conditional"](cond[()], t_val, f_val) num = len(self.conditionals) name = format["conditional"](num) if expr not in self.conditionals: self.conditionals[expr] = (IP, operation_count(expr, format), num) else: num = self.conditionals[expr][2] name = format["conditional"](num) return {(): name} # ------------------------------------------------------------------------- # FacetNormal, CellVolume, Circumradius, FacetArea (geometry.py). # ------------------------------------------------------------------------- def cell_coordinate(self, o): # FIXME error("This object should be implemented by the child class.") def facet_coordinate(self, o): # FIXME error("This object should be implemented by the child class.") def cell_origin(self, o): # FIXME error("This object should be implemented by the child class.") def facet_origin(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_origin(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def facet_jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian_determinant(self, o): # FIXME error("This object should be implemented by the child class.") def cell_facet_jacobian_inverse(self, o): # FIXME error("This object should be implemented by the child class.") def facet_normal(self, o): # print("Visiting FacetNormal:") # Get the component components = self.component() # Safety check. 
ffc_assert(len(components) == 1, "FacetNormal expects 1 component index: " + repr(components)) # Handle 1D as a special case. # FIXME: KBO: This has to change for mD elements in R^n : m < n if self.gdim == 1: # FIXME: MSA: UFL uses shape (1,) now, can we remove the special case here then? normal_component = format["normal component"](self.restriction, "") else: normal_component = format["normal component"](self.restriction, components[0]) self.trans_set.add(normal_component) return {(): normal_component} def cell_normal(self, o): # FIXME error("This object should be implemented by the child class.") def cell_volume(self, o): # FIXME: KBO: This has to change for higher order elements volume = format["cell volume"](self.restriction) self.trans_set.add(volume) return {(): volume} def circumradius(self, o): # FIXME: KBO: This has to change for higher order elements circumradius = format["circumradius"](self.restriction) self.trans_set.add(circumradius) return {(): circumradius} def facet_area(self, o): # FIXME: KBO: This has to change for higher order elements # NOTE: Omitting restriction because the area of a facet is the same # on both sides. # FIXME: Since we use the scale factor, facet area has no meaning # for cell integrals. (Need check in FFC or UFL). area = format["facet area"] self.trans_set.add(area) return {(): area} def min_facet_edge_length(self, o): # FIXME: this has no meaning for cell integrals. (Need check in FFC or UFL). tdim = self.tdim if tdim < 3: return self.facet_area(o) edgelen = format["min facet edge length"](self.restriction) self.trans_set.add(edgelen) return {(): edgelen} def max_facet_edge_length(self, o): # FIXME: this has no meaning for cell integrals. (Need check in FFC or UFL). tdim = self.tdim if tdim < 3: return self.facet_area(o) edgelen = format["max facet edge length"](self.restriction) self.trans_set.add(edgelen) return {(): edgelen} def cell_orientation(self, o): # FIXME error("This object should be implemented by the child class.") def quadrature_weight(self, o): # FIXME error("This object should be implemented by the child class.") # ------------------------------------------------------------------------- def create_argument(self, ufl_argument, derivatives, component, local_comp, local_offset, ffc_element, transformation, multiindices, tdim, gdim, avg): "Create code for basis functions, and update relevant tables of used basis." # Prefetch formats to speed up code generation. f_group = format["grouping"] f_add = format["add"] f_mult = format["multiply"] f_transform = format["transform"] f_detJ = format["det(J)"] f_inv = format["inverse"] # Reset code code = {} # Handle affine mappings. if transformation == "affine": # Loop derivatives and get multi indices. for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] # Create mapping and basis name. # print "component = ", component mapping, basis = self._create_mapping_basis(component, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: # Add transformation code[mapping].append(self.__apply_transform(basis, derivatives, multi, tdim, gdim)) # Handle non-affine mappings. else: ffc_assert(avg is None, "Taking average is not supported for non-affine mappings.") # Loop derivatives and get multi indices. 
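# ---------------------------------------------------------------------------
# Standalone illustration of the `[multi.count(i) for i in range(tdim)]` idiom
# used in the derivative loops below: a derivative multi-index such as (0, 1)
# becomes the number of derivatives taken in each reference direction.
tdim_example = 2
for multi_example in [(0, 0), (0, 1), (1, 1)]:
    deriv_example = [multi_example.count(i) for i in range(tdim_example)]
    print(multi_example, "->", deriv_example)   # (0, 0) -> [2, 0], (0, 1) -> [1, 1], ...
# ---------------------------------------------------------------------------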
for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] if transformation in ["covariant piola", "contravariant piola"]: for c in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis(c + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: # Multiply basis by appropriate transform. if transformation == "covariant piola": dxdX = f_transform("JINV", c, local_comp, tdim, gdim, self.restriction) self.trans_set.add(dxdX) basis = f_mult([dxdX, basis]) elif transformation == "contravariant piola": self.trans_set.add(f_detJ(self.restriction)) detJ = f_inv(f_detJ(self.restriction)) dXdx = f_transform("J", local_comp, c, gdim, tdim, self.restriction) self.trans_set.add(dXdx) basis = f_mult([detJ, dXdx, basis]) # Add transformation if needed. code[mapping].append(self.__apply_transform(basis, derivatives, multi, tdim, gdim)) elif transformation == "double covariant piola": # g_ij = (Jinv)_ki G_kl (Jinv)lj i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis( k * tdim + l + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: J1 = f_transform("JINV", k, i, tdim, gdim, self.restriction) J2 = f_transform("JINV", l, j, tdim, gdim, self.restriction) self.trans_set.add(J1) self.trans_set.add(J2) basis = f_mult([J1, basis, J2]) # Add transformation if needed. code[mapping].append( self.__apply_transform( basis, derivatives, multi, tdim, gdim)) elif transformation == "double contravariant piola": # g_ij = (detJ)^(-2) J_ik G_kl J_jl i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. mapping, basis = self._create_mapping_basis( k * tdim + l + local_offset, deriv, avg, ufl_argument, ffc_element) if mapping not in code: code[mapping] = [] if basis is not None: J1 = f_transform("J", i, k, gdim, tdim, self.restriction) J2 = f_transform("J", j, l, gdim, tdim, self.restriction) self.trans_set.add(J1) self.trans_set.add(J2) self.trans_set.add(f_detJ(self.restriction)) invdetJ = f_inv(f_detJ(self.restriction)) basis = f_mult([invdetJ, invdetJ, J1, basis, J2]) # Add transformation if needed. code[mapping].append( self.__apply_transform( basis, derivatives, multi, tdim, gdim)) else: error("Transformation is not supported: " + repr(transformation)) # Add sums and group if necessary. for key, val in list(code.items()): if len(val) > 1: code[key] = f_group(f_add(val)) elif val: code[key] = val[0] else: # Return a None (zero) because val == [] code[key] = None return code def create_function(self, ufl_function, derivatives, component, local_comp, local_offset, ffc_element, is_quad_element, transformation, multiindices, tdim, gdim, avg): "Create code for basis functions, and update relevant tables of used basis." ffc_assert(ufl_function in self._function_replace_values, "Expecting ufl_function to have been mapped prior to this call.") # Prefetch formats to speed up code generation. f_mult = format["multiply"] f_transform = format["transform"] f_detJ = format["det(J)"] f_inv = format["inverse"] # Reset code code = [] # Handle affine mappings. if transformation == "affine": # Loop derivatives and get multi indices. 
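# ---------------------------------------------------------------------------
# Numerical sketch of the double covariant Piola mapping generated above: the
# flattened component index is split as (i, j) = (local_comp // tdim,
# local_comp % tdim) and the reference tensor G is pulled back via
# g_ij = (Jinv)_ki G_kl (Jinv)_lj, i.e. g = Jinv^T G Jinv.  Plain NumPy stands
# in for the generated code; the values below are arbitrary examples.
import numpy

tdim_piola = 2
Jinv_example = numpy.array([[0.5, 0.1],
                            [0.0, 0.25]])
G_example = numpy.array([[1.0, 2.0],
                         [2.0, 3.0]])
g_example = Jinv_example.T @ G_example @ Jinv_example
for local_comp_example in range(tdim_piola * tdim_piola):
    i, j = divmod(local_comp_example, tdim_piola)
    value = sum(Jinv_example[k, i] * G_example[k, l] * Jinv_example[l, j]
                for k in range(tdim_piola) for l in range(tdim_piola))
    assert abs(value - g_example[i, j]) < 1e-12
# ---------------------------------------------------------------------------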
for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] # Create function name. function_name = self._create_function_name(component, deriv, avg, is_quad_element, ufl_function, ffc_element) if function_name: # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) # Handle non-affine mappings. else: ffc_assert(avg is None, "Taking average is not supported for non-affine mappings.") # Loop derivatives and get multi indices. for multi in multiindices: deriv = [multi.count(i) for i in range(tdim)] if not any(deriv): deriv = [] if transformation in ["covariant piola", "contravariant piola"]: # Vectors for c in range(tdim): function_name = self._create_function_name(c + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) if function_name: # Multiply basis by appropriate transform. if transformation == "covariant piola": dxdX = f_transform("JINV", c, local_comp, tdim, gdim, self.restriction) self.trans_set.add(dxdX) function_name = f_mult([dxdX, function_name]) elif transformation == "contravariant piola": self.trans_set.add(f_detJ(self.restriction)) detJ = f_inv(f_detJ(self.restriction)) dXdx = f_transform("J", local_comp, c, gdim, tdim, self.restriction) self.trans_set.add(dXdx) function_name = f_mult([detJ, dXdx, function_name]) else: error("Transformation is not supported: ", repr(transformation)) # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) elif transformation == "double covariant piola": # g_ij = (Jinv)_ki G_kl (Jinv)lj i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. function_name = self._create_function_name(k * tdim + l + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) J1 = f_transform("JINV", k, i, tdim, gdim, self.restriction) J2 = f_transform("JINV", l, j, tdim, gdim, self.restriction) self.trans_set.add(J1) self.trans_set.add(J2) function_name = f_mult([J1, function_name, J2]) # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) elif transformation == "double contravariant piola": # g_ij = (detJ)^(-2) J_ik G_kl J_jl i = local_comp // tdim j = local_comp % tdim for k in range(tdim): for l in range(tdim): # Create mapping and basis name. function_name = self._create_function_name( k * tdim + l + local_offset, deriv, avg, is_quad_element, ufl_function, ffc_element) J1 = f_transform("J", i, k, tdim, gdim, self.restriction) J2 = f_transform("J", j, l, tdim, gdim, self.restriction) invdetJ = f_inv(f_detJ(self.restriction)) self.trans_set.add(J1) self.trans_set.add(J2) function_name = f_mult([invdetJ, invdetJ, J1, function_name, J2]) # Add transformation if needed. code.append(self.__apply_transform(function_name, derivatives, multi, tdim, gdim)) else: error("Transformation is not supported: " + repr(transformation)) if not code: return None elif len(code) > 1: code = format["grouping"](format["add"](code)) else: code = code[0] return code # ------------------------------------------------------------------------- # Helper functions for Argument and Coefficient # ------------------------------------------------------------------------- def __apply_transform(self, function, derivatives, multi, tdim, gdim): # XXX UFLACS REUSE "Apply transformation (from derivatives) to basis or function." 
f_transform = format["transform"] # Add transformation if needed. transforms = [] if self.integral_type in custom_integral_types: for i, direction in enumerate(derivatives): # Custom integrals to not need transforms, so in place # of the transform, we insert an identity matrix ref = multi[i] if ref != direction: transforms.append(0) else: for i, direction in enumerate(derivatives): ref = multi[i] t = f_transform("JINV", ref, direction, tdim, gdim, self.restriction) self.trans_set.add(t) transforms.append(t) # Only multiply by basis if it is present. if function: prods = transforms + [function] else: prods = transforms return format["multiply"](prods) # ------------------------------------------------------------------------- # Helper functions for transformation of UFL objects in base class # ------------------------------------------------------------------------- def _create_symbol(self, symbol, domain): return {(): symbol} def _create_product(self, symbols): return format["multiply"](symbols) def _format_scalar_value(self, value): # print("format_scalar_value: %d" % value) if value is None: return {(): None} # TODO: Handle value < 0 better such that we don't have + -2 in the code. return {(): format["floating point"](value)} def _math_function(self, operands, format_function): # TODO: Are these safety checks needed? ffc_assert(len(operands) == 1 and () in operands[0] and len(operands[0]) == 1, "MathFunctions expect one operand of function type: " + repr(operands)) # Use format function on value of operand. new_operand = {} operand = operands[0] for key, val in operand.items(): new_operand[key] = format_function(val) return new_operand def _atan_2_function(self, operands, format_function): x1, x2 = operands x1, x2 = sorted(x1.values())[0], sorted(x2.values())[0] if x1 is None: x1 = format["floating point"](0.0) if x2 is None: x2 = format["floating point"](0.0) return {(): format_function(x1, x2)} def _bessel_function(self, operands, format_function): # TODO: Are these safety checks needed? ffc_assert(len(operands) == 2, "BesselFunctions expect two operands of function type: " + repr(operands)) nu, x = operands ffc_assert(len(nu) == 1 and () in nu, "Expecting one operand of function type as first argument to BesselFunction : " + repr(nu)) ffc_assert(len(x) == 1 and () in x, "Expecting one operand of function type as second argument to BesselFunction : " + repr(x)) nu = nu[()] x = x[()] if nu is None: nu = format["floating point"](0.0) if x is None: x = format["floating point"](0.0) # Use format function on arguments. # NOTE: Order of nu and x is reversed compared to the UFL and C++ # function calls because of how Symbol treats exponents. # this will change once quadrature optimisations has been cleaned up. return {(): format_function(x, nu)} # ------------------------------------------------------------------------- # Helper functions for code_generation() # ------------------------------------------------------------------------- def _count_operations(self, expression): return operation_count(expression, format) def _create_entry_data(self, val, integral_type): # Multiply value by weight and determinant # Create weight and scale factor. weight = format["weight"](self.points) if self.points is None or self.points > 1: weight += format["component"]("", format["integration points"]) # Update sets of used variables. 
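# ---------------------------------------------------------------------------
# Numerical sketch of the chain rule that __apply_transform encodes
# symbolically above: each physical derivative direction picks up a factor of
# the inverse Jacobian, d(phi)/dx_j = sum_i d(phi)/dX_i * K[i, j], with
# K = Jinv of shape (tdim, gdim).  The numbers are arbitrary; for an affine
# map K is constant over the cell.
import numpy

K_example = numpy.array([[2.0, 0.0],
                         [0.5, 4.0]])           # Jinv, tdim x gdim
grad_ref_example = numpy.array([1.0, -3.0])     # d(phi)/dX_i at a point
grad_phys_example = grad_ref_example @ K_example
print(grad_phys_example)                        # -> [  0.5 -12. ]
# ---------------------------------------------------------------------------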
if integral_type in (point_integral_types + custom_integral_types): trans_set = set() value = format["mul"]([val, weight]) else: f_scale_factor = format["scale factor"] trans_set = set([f_scale_factor]) value = format["mul"]([val, weight, f_scale_factor]) trans_set.update(self.trans_set) used_points = set([self.points]) ops = self._count_operations(value) used_psi_tables = set([v for k, v in self.psi_tables_map.items()]) return (value, ops, [trans_set, used_points, used_psi_tables]) ffc-2019.1.0.post0/ffc/quadrature/quadraturetransformerbase.py000066400000000000000000001443761346026536000243520ustar00rootroot00000000000000# -*- coding: utf-8 -*- """QuadratureTransformerBase, a common class for quadrature transformers to translate UFL expressions.""" # Copyright (C) 2009-2013 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Martin Sandve Alnæs, 2013 # Modified by Garth N. Wells, 2013 # Modified by Lizao Li, 2015 # Modified by Anders Logg, 2015 # Python modules import functools from numpy import shape, array # UFL Classes from ufl.classes import FixedIndex, Index from ufl.utils.stacks import StackDict, Stack from ufl.permutation import build_component_numbering from ufl import custom_integral_types # UFL Algorithms from ufl.algorithms import Transformer # FFC modules. from ffc.log import ffc_assert, error from ffc.fiatinterface import MixedElement, create_element, map_facet_points from ffc.quadrature.cpp import format # FFC utils from ffc.utils import listcopy from ffc.representationutils import transform_component # Utility and optimisation functions for quadraturegenerator. from ffc.quadrature.quadratureutils import create_psi_tables from ffc.quadrature.symbolics import BASIS, IP, GEO class FFCMultiIndex: """A MultiIndex represents a list of indices and holds the following data: rank - rank of multiindex dims - a list of dimensions indices - a list of all possible multiindex values """ def __init__(self, dims): "Create multiindex from given list of ranges" def outer_join(a, b): """Let a be a list of lists and b a list. We append each element of b to each list in a and return the resulting list of lists. """ outer = [] for i in range(len(a)): for j in range(len(b)): outer += [a[i] + [b[j]]] return outer def build_indices(dims): "Create a list of all index combinations." if not dims: return [[]] ranges = listcopy(dims) return functools.reduce(outer_join, ranges, [[]]) self.rank = len(dims) self.dims = [len(dim) for dim in dims] self.indices = build_indices(dims) return def __str__(self): "Return informal string representation (pretty-print)." return "rank = %d dims = %s indices = %s" % (self.rank, str(self.dims), str(self.indices)) class QuadratureTransformerBase(Transformer): "Transform UFL representation to quadrature code." 
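# ---------------------------------------------------------------------------
# The FFCMultiIndex.indices list built above is simply the Cartesian product of
# the given index ranges; itertools.product reproduces the same combinations in
# the same order.  Standalone illustration, not used by the class itself.
import itertools

dims_example = [list(range(2)), list(range(3))]
indices_example = [list(c) for c in itertools.product(*dims_example)]
print(indices_example)   # [[0, 0], [0, 1], [0, 2], [1, 0], [1, 1], [1, 2]]
# ---------------------------------------------------------------------------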
def __init__(self, psi_tables, quad_weights, gdim, tdim, entity_type, function_replace_map, optimise_parameters): Transformer.__init__(self) # Save optimise_parameters, weights and fiat_elements_map. self.optimise_parameters = optimise_parameters # Map from original functions with possibly incomplete # elements to functions with properly completed elements self._function_replace_map = function_replace_map self._function_replace_values = set(function_replace_map.values()) # For assertions # Create containers and variables. self.used_psi_tables = set() self.psi_tables_map = {} self.used_weights = set() self.quad_weights = quad_weights self.used_nzcs = set() self.ip_consts = {} self.trans_set = set() self.function_data = {} self.tdim = tdim self.gdim = gdim self.entity_type = entity_type self.points = 0 self.facet0 = None self.facet1 = None self.vertex = None self.restriction = None self.avg = None self.coordinate = None self.conditionals = {} self.additional_includes_set = set() # Stacks. self._derivatives = [] self._index2value = StackDict() self._components = Stack() self.element_map, self.name_map, self.unique_tables =\ create_psi_tables(psi_tables, self.optimise_parameters["eliminate zeros"], self.entity_type) # Cache. self.argument_cache = {} self.function_cache = {} def update_cell(self): ffc_assert(self.entity_type == "cell", "Not expecting update_cell on a %s." % self.entity_type) self.facet0 = None self.facet1 = None self.vertex = None self.coordinate = None self.conditionals = {} def update_facets(self, facet0, facet1): ffc_assert(self.entity_type == "facet", "Not expecting update_facet on a %s." % self.entity_type) self.facet0 = facet0 self.facet1 = facet1 self.vertex = None self.coordinate = None self.conditionals = {} def update_vertex(self, vertex): ffc_assert(self.entity_type == "vertex", "Not expecting update_vertex on a %s." % self.entity_type) self.facet0 = None self.facet1 = None self.vertex = vertex self.coordinate = None self.conditionals = {} def update_points(self, points): self.points = points self.coordinate = None # Reset functions everytime we move to a new quadrature loop self.conditionals = {} self.function_data = {} # Reset cache self.argument_cache = {} self.function_cache = {} def disp(self): print("\n\n **** Displaying QuadratureTransformer ****") print("\nQuadratureTransformer, element_map:\n", self.element_map) print("\nQuadratureTransformer, name_map:\n", self.name_map) print("\nQuadratureTransformer, unique_tables:\n", self.unique_tables) print("\nQuadratureTransformer, used_psi_tables:\n", self.used_psi_tables) print("\nQuadratureTransformer, psi_tables_map:\n", self.psi_tables_map) print("\nQuadratureTransformer, used_weights:\n", self.used_weights) def component(self): "Return current component tuple." if len(self._components): return self._components.peek() return () def derivatives(self): "Return all derivatives tuple." if len(self._derivatives): return tuple(self._derivatives[:]) return () # ------------------------------------------------------------------------- # Start handling UFL classes. # ------------------------------------------------------------------------- # Nothing in expr.py is handled. Can only handle children of these clases. def expr(self, o): print("\n\nVisiting basic Expr:", repr(o), "with operands:") error("This expression is not handled: " + repr(o)) # Nothing in terminal.py is handled. Can only handle children of # these clases. 
def terminal(self, o): print("\n\nVisiting basic Terminal:", repr(o), "with operands:") error("This terminal is not handled: " + repr(o)) # ------------------------------------------------------------------------- # Things which should not be here (after expansion etc.) from: # algebra.py, differentiation.py, finiteelement.py, form.py, # geometry.py, indexing.py, integral.py, tensoralgebra.py, # variable.py. # ------------------------------------------------------------------------- def derivative(self, o, *operands): print("\n\nVisiting Derivative: ", repr(o)) error("All derivatives apart from Grad should have been expanded!!") def compound_tensor_operator(self, o): print("\n\nVisiting CompoundTensorOperator: ", repr(o)) error("CompoundTensorOperator should have been expanded.") def label(self, o): print("\n\nVisiting Label: ", repr(o)) error("What is a Lable doing in the integrand?") # ------------------------------------------------------------------------- # Things which are not supported yet, from: # condition.py, constantvalue.py, function.py, geometry.py, lifting.py, # mathfunctions.py, restriction.py # ------------------------------------------------------------------------- def condition(self, o): print("\n\nVisiting Condition:", repr(o)) error("This type of Condition is not supported (yet).") def constant_value(self, o): print("\n\nVisiting ConstantValue:", repr(o)) error("This type of ConstantValue is not supported (yet).") def geometric_quantity(self, o): print("\n\nVisiting GeometricQuantity:", repr(o)) error("This type of GeometricQuantity is not supported (yet).") def math_function(self, o): print("\n\nVisiting MathFunction:", repr(o)) error("This MathFunction is not supported (yet).") def atan_2_function(self, o): print("\n\nVisiting Atan2Function:", repr(o)) error("Atan2Function is not implemented (yet).") def bessel_function(self, o): print("\n\nVisiting BesselFunction:", repr(o)) error("BesselFunction is not implemented (yet).") def restricted(self, o): print("\n\nVisiting Restricted:", repr(o)) error("This type of Restricted is not supported (only positive and negative are currently supported).") # ------------------------------------------------------------------------- # Handlers that should be implemented by child classes. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # AlgebraOperators (algebra.py). # ------------------------------------------------------------------------- def sum(self, o, *operands): print("\n\nVisiting Sum: ", repr(o)) error("This object should be implemented by the child class.") def product(self, o, *operands): print("\n\nVisiting Product: ", repr(o)) error("This object should be implemented by the child class.") def division(self, o, *operands): print("\n\nVisiting Division: ", repr(o)) error("This object should be implemented by the child class.") def power(self, o): print("\n\nVisiting Power: ", repr(o)) error("This object should be implemented by the child class.") def abs(self, o, *operands): print("\n\nVisiting Abs: ", repr(o)) error("This object should be implemented by the child class.") # ------------------------------------------------------------------------- # FacetNormal, CellVolume, Circumradius (geometry.py). 
# ------------------------------------------------------------------------- def cell_coordinate(self, o): error("This object should be implemented by the child class.") def facet_coordinate(self, o): error("This object should be implemented by the child class.") def cell_origin(self, o): error("This object should be implemented by the child class.") def facet_origin(self, o): error("This object should be implemented by the child class.") def cell_facet_origin(self, o): error("This object should be implemented by the child class.") def jacobian(self, o): error("This object should be implemented by the child class.") def jacobian_determinant(self, o): error("This object should be implemented by the child class.") def jacobian_inverse(self, o): error("This object should be implemented by the child class.") def facet_jacobian(self, o): error("This object should be implemented by the child class.") def facet_jacobian_determinant(self, o): error("This object should be implemented by the child class.") def facet_jacobian_inverse(self, o): error("This object should be implemented by the child class.") def cell_facet_jacobian(self, o): error("This object should be implemented by the child class.") def cell_facet_jacobian_determinant(self, o): error("This object should be implemented by the child class.") def cell_facet_jacobian_inverse(self, o): error("This object should be implemented by the child class.") def facet_normal(self, o): error("This object should be implemented by the child class.") def cell_normal(self, o): error("This object should be implemented by the child class.") def cell_volume(self, o): error("This object should be implemented by the child class.") def circumradius(self, o): error("This object should be implemented by the child class.") def facet_area(self, o): error("This object should be implemented by the child class.") def min_facet_edge_length(self, o): error("This object should be implemented by the child class.") def max_facet_edge_length(self, o): error("This object should be implemented by the child class.") def cell_orientation(self, o): error("This object should be implemented by the child class.") def quadrature_weight(self, o): error("This object should be implemented by the child class.") # ------------------------------------------------------------------------- # Things that can be handled by the base class. # ------------------------------------------------------------------------- # ------------------------------------------------------------------------- # Argument (basisfunction.py). # ------------------------------------------------------------------------- def argument(self, o): # print("\nVisiting Argument:" + repr(o)) # Create aux. info. components = self.component() derivatives = self.derivatives() # Check if basis is already in cache key = (o, components, derivatives, self.restriction, self.avg) basis = self.argument_cache.get(key, None) tdim = self.tdim # FIXME: Why does using a code dict from cache make the # expression manipulations blow (MemoryError) up later? if basis is None or self.optimise_parameters["optimisation"]: # Get auxiliary variables to generate basis (component, local_elem, local_comp, local_offset, ffc_element, transformation, multiindices) = self._get_auxiliary_variables(o, components, derivatives) # Create mapping and code for basis function and add to # dict. 
basis = self.create_argument(o, derivatives, component, local_comp, local_offset, ffc_element, transformation, multiindices, tdim, self.gdim, self.avg) self.argument_cache[key] = basis return basis # ------------------------------------------------------------------------- # Constant values (constantvalue.py). # ------------------------------------------------------------------------- def identity(self, o): # Get components i, j = self.component() # Only return a value if i==j if i == j: return self._format_scalar_value(1.0) else: return self._format_scalar_value(None) def scalar_value(self, o): "ScalarValue covers IntValue and FloatValue" return self._format_scalar_value(o.value()) def zero(self, o): return self._format_scalar_value(None) # ------------------------------------------------------------------------- # Grad (differentiation.py). # ------------------------------------------------------------------------- def grad(self, o): # Get expression derivative_expr, = o.ufl_operands # Get components components = self.component() en = len(derivative_expr.ufl_shape) cn = len(components) ffc_assert(len(o.ufl_shape) == cn, "Expecting rank of grad expression to match components length.") # Get direction of derivative if cn == en + 1: der = components[en] self._components.push(components[:en]) elif cn == en: # This happens in 1D, sligtly messy result of defining # grad(f) == f.dx(0) der = 0 else: error("Unexpected rank %d and component length %d in grad expression." % (en, cn)) # Add direction to list of derivatives self._derivatives.append(der) # Visit children to generate the derivative code. code = self.visit(derivative_expr) # Remove the direction from list of derivatives self._derivatives.pop() if cn == en + 1: self._components.pop() return code # ------------------------------------------------------------------------- # Coefficient and Constants (function.py). # ------------------------------------------------------------------------- def coefficient(self, o): # print("\nVisiting Coefficient: " + repr(o)) # Map o to object with proper element and count o = self._function_replace_map[o] # Create aux. info. components = self.component() derivatives = self.derivatives() # Check if function is already in cache key = (o, components, derivatives, self.restriction, self.avg) function_code = self.function_cache.get(key) # FIXME: Why does using a code dict from cache make the # expression manipulations blow (MemoryError) up later? if function_code is None or self.optimise_parameters["optimisation"]: # Get auxiliary variables to generate function (component, local_elem, local_comp, local_offset, ffc_element, transformation, multiindices) = self._get_auxiliary_variables(o, components, derivatives) # Check that we don't take derivatives of # QuadratureElements. is_quad_element = local_elem.family() == "Quadrature" ffc_assert(not (derivatives and is_quad_element), "Derivatives of Quadrature elements are not supported: " + repr(o)) tdim = self.tdim # Create code for function and add empty tuple to cache # dict. function_code = {(): self.create_function(o, derivatives, component, local_comp, local_offset, ffc_element, is_quad_element, transformation, multiindices, tdim, self.gdim, self.avg)} self.function_cache[key] = function_code return function_code # ------------------------------------------------------------------------- # SpatialCoordinate (geometry.py). # ------------------------------------------------------------------------- def spatial_coordinate(self, o): # Get the component. 
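# ---------------------------------------------------------------------------
# Sketch of the component bookkeeping in grad() above: the last entry of the
# current component tuple is the spatial direction of the derivative, the rest
# are the components of the differentiated expression (with a 1D special case
# where grad(f) is defined as f.dx(0)).  Names are illustrative only.
def sketch_split_grad_component(components, expr_rank):
    if len(components) == expr_rank + 1:
        return components[:expr_rank], components[expr_rank]
    elif len(components) == expr_rank:          # 1D special case
        return components, 0
    raise ValueError("Unexpected component length in grad expression.")

print(sketch_split_grad_component((1, 0), 1))   # component (1,), derivative d/dx0
# ---------------------------------------------------------------------------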
components = self.component() c, = components if self.vertex is not None: error("Spatial coordinates (x) not implemented for point measure (dP)") # TODO: Implement this, should be just the point. elif self.points is None: gdim, = o.ufl_shape coordinate = "quadrature_points[ip*%d + %d]" % (gdim, c) return self._create_symbol(coordinate, IP) else: # Generate the appropriate coordinate and update tables. coordinate = format["ip coordinates"](self.points, c) self._generate_affine_map() return self._create_symbol(coordinate, IP) # ------------------------------------------------------------------------- # Indexed (indexed.py). # ------------------------------------------------------------------------- def indexed(self, o): # Get indexed expression and index, map index to current value # and update components indexed_expr, index = o.ufl_operands self._components.push(self.visit(index)) # Visit expression subtrees and generate code. code = self.visit(indexed_expr) # Remove component again self._components.pop() return code # ------------------------------------------------------------------------- # MultiIndex (indexing.py). # ------------------------------------------------------------------------- def multi_index(self, o): # Loop all indices in MultiIndex and get current values subcomp = [] for i in o: if isinstance(i, FixedIndex): subcomp.append(i._value) elif isinstance(i, Index): subcomp.append(self._index2value[i]) return tuple(subcomp) # ------------------------------------------------------------------------- # IndexSum (indexsum.py). # ------------------------------------------------------------------------- def index_sum(self, o): # Get expression and index that we're summing over summand, multiindex = o.ufl_operands index, = multiindex # Loop index range, update index/value dict and generate code ops = [] for i in range(o.dimension()): self._index2value.push(index, i) ops.append(self.visit(summand)) self._index2value.pop() # Call sum to generate summation code = self.sum(o, *ops) return code # ------------------------------------------------------------------------- # MathFunctions (mathfunctions.py). 
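# ---------------------------------------------------------------------------
# Sketch of how index_sum() above expands an implicit summation: the free index
# is bound to each value in its range, the summand is generated once per value,
# and the generated terms are added.  Expression strings stand in for the
# visitor; names are illustrative only.
def sketch_expand_index_sum(dimension, generate_summand):
    terms = [generate_summand(i) for i in range(dimension)]
    return " + ".join(terms)

# e.g. sum_i v[i]*w[i] with i running over 3 values:
print(sketch_expand_index_sum(3, lambda i: "v%d*w%d" % (i, i)))   # v0*w0 + v1*w1 + v2*w2
# ---------------------------------------------------------------------------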
# ------------------------------------------------------------------------- def sqrt(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["sqrt"]) def exp(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["exp"]) def ln(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["ln"]) def cos(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["cos"]) def sin(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["sin"]) def tan(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["tan"]) def cosh(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["cosh"]) def sinh(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["sinh"]) def tanh(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["tanh"]) def acos(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["acos"]) def asin(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["asin"]) def atan(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["atan"]) def atan_2(self, o, *operands): self.additional_includes_set.add("#include ") return self._atan_2_function(operands, format["atan_2"]) def erf(self, o, *operands): self.additional_includes_set.add("#include ") return self._math_function(operands, format["erf"]) def bessel_i(self, o, *operands): self.additional_includes_set.add("#include ") return self._bessel_function(operands, format["bessel_i"]) def bessel_j(self, o, *operands): self.additional_includes_set.add("#include ") return self._bessel_function(operands, format["bessel_j"]) def bessel_k(self, o, *operands): self.additional_includes_set.add("#include ") return self._bessel_function(operands, format["bessel_k"]) def bessel_y(self, o, *operands): self.additional_includes_set.add("#include ") return self._bessel_function(operands, format["bessel_y"]) # ------------------------------------------------------------------------- # PositiveRestricted and NegativeRestricted (restriction.py). # ------------------------------------------------------------------------- def positive_restricted(self, o): # Just get the first operand, there should only be one. restricted_expr = o.ufl_operands ffc_assert(len(restricted_expr) == 1, "Only expected one operand for restriction: " + repr(restricted_expr)) ffc_assert(self.restriction is None, "Expression is restricted twice: " + repr(restricted_expr)) # Set restriction, visit operand and reset restriction self.restriction = "+" code = self.visit(restricted_expr[0]) self.restriction = None return code def negative_restricted(self, o): # Just get the first operand, there should only be one. 
restricted_expr = o.ufl_operands ffc_assert(len(restricted_expr) == 1, "Only expected one operand for restriction: " + repr(restricted_expr)) ffc_assert(self.restriction is None, "Expression is restricted twice: " + repr(restricted_expr)) # Set restriction, visit operand and reset restriction self.restriction = "-" code = self.visit(restricted_expr[0]) self.restriction = None return code def cell_avg(self, o): ffc_assert(self.avg is None, "Not expecting nested averages.") # Just get the first operand, there should only be one. expr, = o.ufl_operands # Set average marker, visit operand and reset marker self.avg = "cell" code = self.visit(expr) self.avg = None return code def facet_avg(self, o): ffc_assert(self.avg is None, "Not expecting nested averages.") ffc_assert(self.entity_type != "cell", "Cannot take facet_avg in a cell integral.") # Just get the first operand, there should only be one. expr, = o.ufl_operands # Set average marker, visit operand and reset marker self.avg = "facet" code = self.visit(expr) self.avg = None return code # ------------------------------------------------------------------------- # ComponentTensor (tensors.py). # ------------------------------------------------------------------------- def component_tensor(self, o): # Get expression and indices component_expr, indices = o.ufl_operands # Get current component(s) components = self.component() ffc_assert(len(components) == len(indices), "The number of known components must be equal to the number of components of the ComponentTensor for this to work.") # Update the index dict (map index values of current known # indices to those of the component tensor) for i, v in zip(indices._indices, components): self._index2value.push(i, v) # Push an empty component tuple self._components.push(()) # Visit expression subtrees and generate code. code = self.visit(component_expr) # Remove the index map from the StackDict for i in range(len(components)): self._index2value.pop() # Remove the empty component tuple self._components.pop() return code def list_tensor(self, o): # Get the component component = self.component() # Extract first and the rest of the components c0, c1 = component[0], component[1:] # Get first operand op = o.ufl_operands[c0] # Evaluate subtensor with this subcomponent self._components.push(c1) code = self.visit(op) self._components.pop() return code # ------------------------------------------------------------------------- # Variable (variable.py). # ------------------------------------------------------------------------- def variable(self, o): return self.visit(o.expression()) # ------------------------------------------------------------------------- # Generate terms for representation. # ------------------------------------------------------------------------- def generate_terms(self, integrand, integral_type): "Generate terms for code generation." # Set domain type self.integral_type = integral_type # Get terms terms = self.visit(integrand) # Get formatting f_nzc = format["nonzero columns"](0).split("0")[0] # Loop code and add weight and scale factor to value and sort # after loop ranges. new_terms = {} for key, val in sorted(terms.items()): # If value was zero continue. if val is None: continue # Create data. value, ops, sets = self._create_entry_data(val, integral_type) # Extract nzc columns if any and add to sets. used_nzcs = set([int(k[1].split(f_nzc)[1].split("[")[0]) for k in key if f_nzc in k[1]]) sets.append(used_nzcs) # Create loop information and entry from key info and # insert into dict. 
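# ---------------------------------------------------------------------------
# Sketch of the rank-2 entry construction performed by _create_loop_entry
# below: the flat element-tensor index is "entry_j*space_dim_k + entry_k", and
# purely numeric entries are folded with eval ("3*6 + 2" -> "20").  Names are
# illustrative only.
def sketch_entry_expression(entry_j, space_dim_k, entry_k):
    entry = "%s*%d + %s" % (entry_j, space_dim_k, entry_k)
    try:
        entry = str(eval(entry))                # fold if both indices are constants
    except Exception:
        pass                                    # keep symbolic loop indices as-is
    return entry

print(sketch_entry_expression("j", 6, "k"))     # 'j*6 + k'
print(sketch_entry_expression("3", 6, "2"))     # '20'
# ---------------------------------------------------------------------------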
loop, entry = self._create_loop_entry(key, f_nzc) if loop not in new_terms: sets.append({}) new_terms[loop] = [sets, [(entry, value, ops)]] else: for i, s in enumerate(sets): new_terms[loop][0][i].update(s) new_terms[loop][1].append((entry, value, ops)) return new_terms def _create_loop_entry(self, key, f_nzc): indices = {0: format["first free index"], 1: format["second free index"]} # Create appropriate entries. # FIXME: We only support rank 0, 1 and 2. entry = "" loop = () if len(key) == 0: entry = "0" elif len(key) == 1: key = key[0] # Checking if the basis was a test function. # TODO: Make sure test function indices are always # rearranged to 0. ffc_assert(key[0] == -2 or key[0] == 0, "Linear forms must be defined using test functions only: " + repr(key)) index_j, entry, range_j, space_dim_j = key loop = ((indices[index_j], 0, range_j),) if range_j == 1 and self.optimise_parameters["ignore ones"] and not (f_nzc in entry): loop = () elif len(key) == 2: # Extract test and trial loops in correct order and check # if for is legal. key0, key1 = (0, 0) for k in key: ffc_assert(k[0] in indices, "Bilinear forms must be defined using test and trial functions (index -2, -1, 0, 1): " + repr(k)) if k[0] == -2 or k[0] == 0: key0 = k else: key1 = k index_j, entry_j, range_j, space_dim_j = key0 index_k, entry_k, range_k, space_dim_k = key1 loop = [] if not (range_j == 1 and self.optimise_parameters["ignore ones"]) or f_nzc in entry_j: loop.append((indices[index_j], 0, range_j)) if not (range_k == 1 and self.optimise_parameters["ignore ones"]) or f_nzc in entry_k: loop.append((indices[index_k], 0, range_k)) entry = format["add"]([format["mul"]([entry_j, str(space_dim_k)]), entry_k]) loop = tuple(loop) else: error("Only rank 0, 1 and 2 tensors are currently supported: " + repr(key)) # Generate the code line for the entry. # Try to evaluate entry ("3*6 + 2" --> "20"). try: entry = str(eval(entry)) except Exception: pass return loop, entry # ------------------------------------------------------------------------- # Helper functions for transformation of UFL objects in base class # ------------------------------------------------------------------------- def _create_symbol(self, symbol, domain): error("This function should be implemented by the child class.") def _create_product(self, symbols): error("This function should be implemented by the child class.") def _format_scalar_value(self, value): error("This function should be implemented by the child class.") def _math_function(self, operands, format_function): error("This function should be implemented by the child class.") def _atan_2_function(self, operands, format_function): error("This function should be implemented by the child class.") def _get_auxiliary_variables(self, ufl_function, component, derivatives): "Helper function for both Coefficient and Argument." # Get UFL element. ufl_element = ufl_function.ufl_element() # Get subelement and the relative (flattened) component (in # case we have mixed elements). local_comp, local_elem = ufl_element.extract_component(component) # For basic tensor elements, local_comp should be flattened if len(local_comp) and len(local_elem.value_shape()) > 0: # Map component using component map from UFL. 
(TODO: # inefficient use of this function) comp_map, _ = build_component_numbering(local_elem.value_shape(), local_elem.symmetry()) local_comp = comp_map[local_comp] # Set local_comp to 0 if it is () if not local_comp: local_comp = 0 # Check that component != not () since the UFL component map # will turn it into 0, and () does not mean zeroth component # in this context. if len(component): # Map component using component map from UFL. (TODO: # inefficient use of this function) comp_map, comp_num = build_component_numbering(ufl_element.value_shape(), ufl_element.symmetry()) component = comp_map[component] # Map physical components into reference components component, dummy = transform_component(component, 0, ufl_element) # Compute the local offset (needed for non-affine mappings). local_offset = component - local_comp else: # Compute the local offset (needed for non-affine mappings). local_offset = 0 # Create FFC element. ffc_element = create_element(ufl_element) # Assuming that mappings for all basisfunctions are equal ffc_sub_element = create_element(local_elem) transformation = ffc_sub_element.mapping()[0] ffc_assert(all(transformation == mapping for mapping in ffc_sub_element.mapping()), "Assuming subelement mappings are equal but they differ.") # Generate FFC multi index for derivatives. tdim = self.tdim multiindices = FFCMultiIndex([list(range(tdim))] * len(derivatives)).indices return (component, local_elem, local_comp, local_offset, ffc_element, transformation, multiindices) def _get_current_entity(self): if self.entity_type == "cell": # If we add macro cell integration, I guess the 'current # cell number' would go here? return 0 elif self.entity_type == "facet": # Handle restriction through facet. return {"+": self.facet0, "-": self.facet1, None: self.facet0}[self.restriction] elif self.entity_type == "vertex": return self.vertex else: error("Unknown entity type %s." % self.entity_type) def _create_mapping_basis(self, component, deriv, avg, ufl_argument, ffc_element): "Create basis name and mapping from given basis_info." # Get string for integration points. f_ip = "0" if (avg or self.points == 1) else format["integration points"] generate_psi_name = format["psi name"] # Only support test and trial functions. indices = {0: format["first free index"], 1: format["second free index"]} # Check that we have a basis function. ffc_assert(ufl_argument.number() in indices, "Currently, Argument number must be either 0 or 1: " + repr(ufl_argument)) ffc_assert(ufl_argument.part() is None, "Currently, Argument part is not supporte: " + repr(ufl_argument)) # Get element counter and loop index. element_counter = self.element_map[1 if avg else self.points][ufl_argument.ufl_element()] loop_index = indices[ufl_argument.number()] # Offset element space dimension in case of negative # restriction, need to use the complete element for offset in # case of mixed element. space_dim = ffc_element.space_dimension() offset = {"+": "", "-": str(space_dim), None: ""}[self.restriction] # If we have a restricted function multiply space_dim by two. if self.restriction in ("+", "-"): space_dim *= 2 # Create basis access, we never need to map the entry in the # basis table since we will either loop the entire space # dimension or the non-zeros. 
if self.restriction in ("+", "-") and self.integral_type in custom_integral_types and offset != "": # Special case access for custom integrals (all basis # functions stored in flattened array) basis_access = format["component"]("", [f_ip, format["add"]([loop_index, offset])]) else: # Normal basis function access basis_access = format["component"]("", [f_ip, loop_index]) # Get current cell entity, with current restriction considered entity = self._get_current_entity() name = generate_psi_name(element_counter, self.entity_type, entity, component, deriv, avg) name, non_zeros, zeros, ones = self.name_map[name] loop_index_range = shape(self.unique_tables[name])[1] # If domain type is custom, then special-case set loop index # range since table is empty if self.integral_type in custom_integral_types: loop_index_range = ffc_element.space_dimension() # different from `space_dimension`... basis = "" # Ignore zeros if applicable if zeros and (self.optimise_parameters["ignore zero tables"] or self.optimise_parameters["remove zero terms"]): basis = self._format_scalar_value(None)[()] # If the loop index range is one we can look up the first # component in the psi array. If we only have ones we don't # need the basis. elif self.optimise_parameters["ignore ones"] and loop_index_range == 1 and ones: loop_index = "0" basis = self._format_scalar_value(1.0)[()] else: # Add basis name to the psi tables map for later use. basis = self._create_symbol(name + basis_access, BASIS)[()] self.psi_tables_map[basis] = name # Create the correct mapping of the basis function into the # local element tensor. basis_map = loop_index if non_zeros and basis_map == "0": basis_map = str(non_zeros[1][0]) elif non_zeros: basis_map = format["component"](format["nonzero columns"](non_zeros[0]), basis_map) if offset: basis_map = format["grouping"](format["add"]([basis_map, offset])) # Try to evaluate basis map ("3 + 2" --> "5"). try: basis_map = str(eval(basis_map)) except Exception: pass # Create mapping (index, map, loop_range, space_dim). # Example dx and ds: (0, j, 3, 3) # Example dS: (0, (j + 3), 3, 6), 6=2*space_dim # Example dS optimised: (0, (nz2[j] + 3), 2, 6), 6=2*space_dim mapping = ((ufl_argument.number(), basis_map, loop_index_range, space_dim),) return (mapping, basis) def _create_function_name(self, component, deriv, avg, is_quad_element, ufl_function, ffc_element): ffc_assert(ufl_function in self._function_replace_values, "Expecting ufl_function to have been mapped prior to this call.") # Get string for integration points. f_ip = "0" if (avg or self.points == 1) else format["integration points"] # Get the element counter. element_counter = self.element_map[1 if avg else self.points][ufl_function.ufl_element()] # Get current cell entity, with current restriction considered entity = self._get_current_entity() # Set to hold used nonzero columns used_nzcs = set() # Create basis name and map to correct basis and get info. generate_psi_name = format["psi name"] psi_name = generate_psi_name(element_counter, self.entity_type, entity, component, deriv, avg) psi_name, non_zeros, zeros, ones = self.name_map[psi_name] # If all basis are zero we just return None. if zeros and self.optimise_parameters["ignore zero tables"]: return self._format_scalar_value(None)[()] # Get the index range of the loop index. 
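# ---------------------------------------------------------------------------
# Sketch of the basis index mapping built above: the loop index may be routed
# through a non-zero-column table and offset by the element space dimension on
# the '-' side of an interior facet, and constant expressions are folded.  The
# "nzc" naming here is an illustrative assumption.
def sketch_basis_map(loop_index, nzc_name=None, offset=""):
    m = loop_index if nzc_name is None else "%s[%s]" % (nzc_name, loop_index)
    if offset:
        m = "(%s + %s)" % (m, offset)
    try:
        m = str(eval(m))                        # e.g. "(0 + 3)" -> "3"
    except Exception:
        pass
    return m

print(sketch_basis_map("j"))                                # 'j'           (dx, ds)
print(sketch_basis_map("j", offset="3"))                    # '(j + 3)'     (dS, '-' side)
print(sketch_basis_map("j", nzc_name="nzc2", offset="3"))   # '(nzc2[j] + 3)'
# ---------------------------------------------------------------------------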
loop_index_range = shape(self.unique_tables[psi_name])[1] # If domain type is custom, then special-case set loop index # range since table is empty if self.integral_type in custom_integral_types: loop_index_range = ffc_element.space_dimension() # Create loop index if loop_index_range > 1: # Pick first free index of secondary type (could use # primary indices, but it's better to avoid confusion). loop_index = format["free indices"][0] # If we have a quadrature element we can use the ip number to look # up the value directly. Need to add offset in case of components. if is_quad_element: quad_offset = 0 if component: # FIXME: Should we add a member function elements() to # FiniteElement? if isinstance(ffc_element, MixedElement): for i in range(component): quad_offset += ffc_element.elements()[i].space_dimension() elif component != 1: error("Can't handle components different from 1 if we don't have a MixedElement.") else: quad_offset += ffc_element.space_dimension() if quad_offset: coefficient_access = format["add"]([f_ip, str(quad_offset)]) else: if non_zeros and f_ip == "0": # If we have non zero column mapping but only one # value just pick it. # MSA: This should be an exact refactoring of the # previous logic, but I'm not sure if these # lines were originally intended here in the # quad_element section, or what this even # does: coefficient_access = str(non_zeros[1][0]) else: coefficient_access = f_ip elif non_zeros: if loop_index_range == 1: # If we have non zero column mapping but only one # value just pick it. coefficient_access = str(non_zeros[1][0]) else: used_nzcs.add(non_zeros[0]) coefficient_access = format["component"](format["nonzero columns"](non_zeros[0]), loop_index) elif loop_index_range == 1: # If the loop index range is one we can look up the first # component in the coefficient array. coefficient_access = "0" else: # Or just set default coefficient access. coefficient_access = loop_index # Offset by element space dimension in case of negative # restriction. offset = {"+": "", "-": str(ffc_element.space_dimension()), None: ""}[self.restriction] if offset: coefficient_access = format["add"]([coefficient_access, offset]) # Try to evaluate coefficient access ("3 + 2" --> "5"). try: coefficient_access = str(eval(coefficient_access)) C_ACCESS = GEO except Exception: C_ACCESS = IP # Format coefficient access coefficient = format["coefficient"](str(ufl_function.count()), coefficient_access) # Build and cache some function data only if we need the basis # MSA: I don't understand the mix of loop index range check # and ones check here, but that's how it was. if is_quad_element or (loop_index_range == 1 and ones and self.optimise_parameters["ignore ones"]): # If we only have ones or if we have a quadrature element # we don't need the basis. function_symbol_name = coefficient F_ACCESS = C_ACCESS else: # Add basis name to set of used tables and add matrix # access. # TODO: We should first add this table if the function is # used later in the expressions. If some term is # multiplied by zero and it falls away there is no need to # compute the function value self.used_psi_tables.add(psi_name) # Create basis access, we never need to map the entry in # the basis table since we will either loop the entire # space dimension or the non-zeros. 
basis_index = "0" if loop_index_range == 1 else loop_index basis_access = format["component"]("", [f_ip, basis_index]) basis_name = psi_name + basis_access # Try to set access to the outermost possible loop if f_ip == "0" and basis_access == "0": B_ACCESS = GEO F_ACCESS = C_ACCESS else: B_ACCESS = IP F_ACCESS = IP # Format expression for function function_expr = self._create_product([self._create_symbol(basis_name, B_ACCESS)[()], self._create_symbol(coefficient, C_ACCESS)[()]]) # Check if the expression to compute the function value is # already in the dictionary of used function. If not, # generate a new name and add. data = self.function_data.get(function_expr) if data is None: function_count = len(self.function_data) data = (function_count, loop_index_range, self._count_operations(function_expr), psi_name, used_nzcs, ufl_function.ufl_element()) self.function_data[function_expr] = data function_symbol_name = format["function value"](data[0]) # TODO: This access stuff was changed subtly during my # refactoring, the # X_ACCESS vars is an attempt at making it right, make sure it # is correct now! return self._create_symbol(function_symbol_name, F_ACCESS)[()] def _generate_affine_map(self): """Generate psi table for affine map, used by spatial coordinate to map integration point to physical element. """ # TODO: KBO: Perhaps it is better to create a fiat element and # tabulate the values at the integration points? f_FEA = format["affine map table"] f_ip = format["integration points"] affine_map = {1: lambda x: [1.0 - x[0], x[0]], 2: lambda x: [1.0 - x[0] - x[1], x[0], x[1]], 3: lambda x: [1.0 - x[0] - x[1] - x[2], x[0], x[1], x[2]]} num_ip = self.points w, points = self.quad_weights[num_ip] if self.facet0 is not None: # Extract the geometric dimension of the points we want to map dim = len(points[0]) + 1 dim2cellname = ["interval", "triangle", "tetrahedron"] cellname = dim2cellname[dim-1] points = map_facet_points(points, self.facet0, cellname) name = f_FEA(num_ip, self.facet0) elif self.vertex is not None: error("Spatial coordinates (x) not implemented for point measure (dP)") # TODO: Implement this, should be just the point. else: name = f_FEA(num_ip, 0) if name not in self.unique_tables: self.unique_tables[name] = array([affine_map[len(p)](p) for p in points]) if self.coordinate is None: ip = f_ip if num_ip > 1 else 0 r = None if self.facet1 is None else "+" self.coordinate = [name, self.gdim, ip, r] # ------------------------------------------------------------------------- # Helper functions for code_generation() # ------------------------------------------------------------------------- def _count_operations(self, expression): error("This function should be implemented by the child class.") def _create_entry_data(self, val): error("This function should be implemented by the child class.") ffc-2019.1.0.post0/ffc/quadrature/quadratureutils.py000066400000000000000000000402751346026536000223060ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Utility functions for quadrature representation." # Copyright (C) 2007-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
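# ---------------------------------------------------------------------------
# Numerical sketch of the table produced by _generate_affine_map above: for
# each quadrature point the barycentric coordinates of the reference simplex
# are tabulated (here a triangle), so each row sums to one and gives the vertex
# weights used to map the point to physical coordinates.  The points below are
# arbitrary examples.
import numpy

sketch_affine_map = {1: lambda x: [1.0 - x[0], x[0]],
                     2: lambda x: [1.0 - x[0] - x[1], x[0], x[1]],
                     3: lambda x: [1.0 - x[0] - x[1] - x[2], x[0], x[1], x[2]]}
sketch_points = [(0.25, 0.25), (0.5, 0.25)]
sketch_table = numpy.array([sketch_affine_map[len(p)](p) for p in sketch_points])
print(sketch_table)     # [[0.5 0.25 0.25] [0.25 0.5 0.25]]
# ---------------------------------------------------------------------------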
See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2007-03-16 # Last changed: 2015-03-27 # # Hacked by Marie E. Rognes 2013 # Modified by Anders Logg 2014 # Modified by Lizao Li 2015 # Python modules. import numpy # FFC modules. from ffc.log import debug, error from ffc.quadrature.cpp import format def create_psi_tables(tables, eliminate_zeros, entity_type): "Create names and maps for tables and non-zero entries if appropriate." debug("\nQG-utils, psi_tables:\n" + str(tables)) # Create element map {points:{element:number,},} and a plain # dictionary {name:values,}. element_map, flat_tables = flatten_psi_tables(tables, entity_type) debug("\nQG-utils, psi_tables, flat_tables:\n" + str(flat_tables)) # Reduce tables such that we only have those tables left with # unique values. Create a name map for those tables that are # redundant. name_map, unique_tables = unique_psi_tables(flat_tables, eliminate_zeros) debug("\nQG-utils, psi_tables, unique_tables:\n" + str(unique_tables)) debug("\nQG-utils, psi_tables, name_map:\n" + str(name_map)) return (element_map, name_map, unique_tables) def flatten_psi_tables(tables, entity_type): """Create a 'flat' dictionary of tables with unique names and a name map that maps number of quadrature points and element name to a unique element number. Input tables on the format for scalar and non-scalar elements respectively: tables[num_points][element][entity][derivs][ip][dof] tables[num_points][element][entity][derivs][ip][component][dof] Planning to change this into: tables[num_points][element][avg][entity][derivs][ip][dof] tables[num_points][element][avg][entity][derivs][ip][component][dof] Returns: element_map - { num_quad_points: {ufl_element: element_number} }. flat_tables - { unique_table_name: values[ip,dof] }. """ generate_psi_name = format["psi name"] def sorted_items(mapping, **sorted_args): return [(k, mapping[k]) for k in sorted(mapping.keys(), **sorted_args)] # Initialise return values and element counter. flat_tables = {} element_map = {} counter = 0 # There's a separate set of tables for each number of quadrature points for num_points, element_tables in sorted_items(tables): element_map[num_points] = {} # There's a set of tables for each element for element, avg_tables in sorted_items(element_tables, key=lambda x: str(x)): element_map[num_points][element] = counter # There's a set of tables for non-averaged and averaged # (averaged only occurs with num_points == 1) for avg, entity_tables in sorted_items(avg_tables): # There's a set of tables for each entity number (only # 1 for the cell, >1 for facets and vertices) for entity, derivs_tables in sorted_items(entity_tables): # There's a set of tables for each derivative combination for derivs, fiat_tables in sorted_items(derivs_tables): # Flatten fiat_table for tensor-valued basis # This is necessary for basic # (non-tensor-product) tensor elements if len(numpy.shape(fiat_tables)) > 3: shape = fiat_tables.shape value_shape = shape[1:-1] fiat_tables = fiat_tables.reshape((shape[0], numpy.product(value_shape), shape[-1])) # Transform fiat_tables to a list of tables on # the form psi_table[dof][ip] for each scalar # component if element.value_shape(): # Table is on the form # fiat_tables[ip][component][dof]. 
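# ---------------------------------------------------------------------------
# Sketch of the tensor-value flattening in flatten_psi_tables above: a table of
# shape (num_ip, *value_shape, num_dofs) has all value components collapsed
# into one axis, giving (num_ip, num_components, num_dofs).  The sizes are
# arbitrary examples.
import numpy

num_ip_ex, value_shape_ex, num_dofs_ex = 4, (2, 2), 3
fiat_tables_ex = numpy.zeros((num_ip_ex,) + value_shape_ex + (num_dofs_ex,))
flat_ex = fiat_tables_ex.reshape((num_ip_ex, int(numpy.prod(value_shape_ex)), num_dofs_ex))
print(flat_ex.shape)    # (4, 4, 3): one row of components per 2x2 tensor value
# ---------------------------------------------------------------------------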
transposed_table = numpy.transpose(fiat_tables, (1, 2, 0)) component_tables = list(enumerate(transposed_table)) else: # Scalar element, table is on the form # fiat_tables[ip][dof]. Using () for the # component because generate_psi_name # expects that component_tables = [((), numpy.transpose(fiat_tables))] # Iterate over the innermost tables for each # scalar component for component, psi_table in component_tables: # Generate the table name. name = generate_psi_name(counter, entity_type, entity, component, derivs, avg) # Verify uniqueness of names if name in flat_tables: error("Table name is not unique, something is wrong:\n name = %s\n table = %s\n" % (name, flat_tables)) # Store table with unique name flat_tables[name] = psi_table # Increase unique numpoints*element counter counter += 1 return (element_map, flat_tables) def unique_psi_tables(tables, eliminate_zeros): """Returns a name map and a dictionary of unique tables. The function checks if values in the tables are equal, if this is the case it creates a name mapping. It also create additional information (depending on which parameters are set) such as if the table contains all ones, or only zeros, and a list on non-zero columns. unique_tables - {name:values,}. name_map - {original_name:[new_name, non-zero-columns (list), is zero (bool), is ones (bool)],}. """ # Get unique tables (from old table utility). name_map, inverse_name_map = unique_tables(tables) debug("\ntables: " + str(tables)) debug("\nname_map: " + str(name_map)) debug("\ninv_name_map: " + str(inverse_name_map)) # Set values to zero if they are lower than threshold. format_epsilon = format["epsilon"] for name in tables: # Get values. vals = tables[name] for r in range(numpy.shape(vals)[0]): for c in range(numpy.shape(vals)[1]): if abs(vals[r][c]) < format_epsilon: vals[r][c] = 0 tables[name] = vals # Extract the column numbers that are non-zero. If optimisation # option is set counter for non-zero column arrays. i = 0 non_zero_columns = {} if eliminate_zeros: for name in sorted(tables.keys()): # Get values. vals = tables[name] # Skip if values are missing if len(vals) == 0: continue # Use the first row as reference. non_zeros = list(vals[0].nonzero()[0]) # If all columns in the first row are non zero, there's no # point in continuing. if len(non_zeros) == numpy.shape(vals)[1]: continue # If we only have one row (IP) we just need the nonzero # columns. if numpy.shape(vals)[0] == 1: if list(non_zeros): non_zeros.sort() non_zero_columns[name] = (i, non_zeros) # Compress values. tables[name] = vals[:, non_zeros] i += 1 # Check if the remaining rows are nonzero in the same # positions, else expand. else: for j in range(1, numpy.shape(vals)[0]): # All rows must have the same non-zero columns for # the optimization to work (at this stage). new_non_zeros = list(vals[j].nonzero()[0]) if non_zeros != new_non_zeros: non_zeros = non_zeros + [c for c in new_non_zeros if c not in non_zeros] # If this results in all columns being # non-zero, continue. if len(non_zeros) == numpy.shape(vals)[1]: continue # Only add nonzeros if it results in a reduction of # columns. if len(non_zeros) != numpy.shape(vals)[1]: if list(non_zeros): non_zeros.sort() non_zero_columns[name] = (i, non_zeros) # Compress values. tables[name] = vals[:, non_zeros] i += 1 # Check if we have some zeros in the tables. names_zeros = contains_zeros(tables) # Get names of tables with all ones. 
names_ones = get_ones(tables) # Add non-zero column, zero and ones info to inverse_name_map (so # we only need to pass around one name_map to code generating # functions). for name in inverse_name_map: if inverse_name_map[name] in non_zero_columns: nzc = non_zero_columns[inverse_name_map[name]] zero = inverse_name_map[name] in names_zeros ones = inverse_name_map[name] in names_ones inverse_name_map[name] = [inverse_name_map[name], nzc, zero, ones] else: zero = inverse_name_map[name] in names_zeros ones = inverse_name_map[name] in names_ones inverse_name_map[name] = [inverse_name_map[name], (), zero, ones] # If we found non zero columns we might be able to reduce number # of tables further. if non_zero_columns: # Try reducing the tables. This is possible if some tables # have become identical as a consequence of compressing the # tables. This happens with e.g., gradients of linear basis # FE0 = {-1,0,1}, nzc0 = [0,2] # FE1 = {-1,1,0}, nzc1 = [0,1] -> FE0 = {-1,1}, nzc0 = [0,2], nzc1 = [0,1]. # Call old utility function again. nm, inv_nm = unique_tables(tables) # Update name maps. for name in inverse_name_map: if inverse_name_map[name][0] in inv_nm: inverse_name_map[name][0] = inv_nm[inverse_name_map[name][0]] for name in nm: maps = nm[name] for m in maps: if name not in name_map: name_map[name] = [] if m in name_map: name_map[name] += name_map[m] + [m] del name_map[m] else: name_map[name].append(m) # Get new names of tables with all ones (for vector # constants). names = get_ones(tables) # Because these tables now contain ones as a consequence of # compression we still need to consider the non-zero columns # when looking up values in coefficient arrays. The psi # entries can however we neglected and we don't need to # tabulate the values (if option is set). for name in names: if name in name_map: maps = name_map[name] for m in maps: inverse_name_map[m][3] = True if name in inverse_name_map: inverse_name_map[name][3] = True # Write protect info and return values for name in inverse_name_map: inverse_name_map[name] = tuple(inverse_name_map[name]) # Note: inverse_name_map here is called name_map in # create_psi_tables and the quadraturetransformerbase class return (inverse_name_map, tables) def unique_tables(tables): """Removes tables with redundant values and returns a name_map and a inverse_name_map. E.g., tables = {a:[0,1,2], b:[0,2,3], c:[0,1,2], d:[0,1,2]} results in: tables = {a:[0,1,2], b:[0,2,3]} name_map = {a:[c,d]} inverse_name_map = {a:a, b:b, c:a, d:a}. """ format_epsilon = format["epsilon"] name_map = {} inverse_name_map = {} names = sorted(tables.keys()) mapped = [] # Loop all tables to see if some are redundant. for i in range(len(names)): name0 = names[i] if name0 in mapped: continue val0 = numpy.array(tables[name0]) for j in range(i + 1, len(names)): name1 = names[j] if name1 in mapped: continue val1 = numpy.array(tables[name1]) # Check if dimensions match. if numpy.shape(val0) == numpy.shape(val1): # Check if values are the same. if len(val0) > 0 and abs(val0 - val1).max() < format_epsilon: mapped.append(name1) del tables[name1] if name0 in name_map: name_map[name0].append(name1) else: name_map[name0] = [name1] # Create inverse name map. inverse_name_map[name1] = name0 # Add self. for name in tables: if name not in inverse_name_map: inverse_name_map[name] = name return (name_map, inverse_name_map) def get_ones(tables): "Return names of tables for which all values are 1.0." 
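    # A small illustration (table values made up): given
    #   tables = {"FE0": numpy.ones((2, 3)), "FE1": numpy.array([[0.5, 0.5]])}
    # this returns ["FE0"], since only FE0 consists entirely of ones to
    # within format["epsilon"].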
f_epsilon = format["epsilon"] names = [] for name in tables: vals = tables[name] if len(vals) > 0 and abs(vals - numpy.ones(numpy.shape(vals))).max() < f_epsilon: names.append(name) return names def contains_zeros(tables): "Checks if any tables contains only zeros." f_epsilon = format["epsilon"] names = [] for name in tables: vals = tables[name] if len(vals) > 0 and abs(vals).max() < f_epsilon: names.append(name) return names def create_permutations(expr): # This is probably not used. if len(expr) == 0: return expr # Format keys and values to lists and tuples. if len(expr) == 1: new = {} for key, val in expr[0].items(): if key == (): pass elif not isinstance(key[0], tuple): key = (key,) if not isinstance(val, list): val = [val] new[key] = val return new # Create permutations of two lists. # TODO: there could be a cleverer way of changing types of keys and vals. if len(expr) == 2: new = {} for key0, val0 in expr[0].items(): if isinstance(key0[0], tuple): key0 = list(key0) if not isinstance(key0, list): key0 = [key0] if not isinstance(val0, list): val0 = [val0] for key1, val1 in expr[1].items(): if key1 == (): key1 = [] elif isinstance(key1[0], tuple): key1 = list(key1) if not isinstance(key1, list): key1 = [key1] if not isinstance(val1, list): val1 = [val1] if tuple(key0 + key1) in new: error("This is not supposed to happen.") new[tuple(key0 + key1)] = val0 + val1 return new # Create permutations by calling this function recursively. This # is only used for rank > 2 tensors I think. # if len(expr) > 2: # new = permutations(expr[0:2]) # return permutations(new + expr[2:]) ffc-2019.1.0.post0/ffc/quadrature/reduce_operations.py000066400000000000000000000642601346026536000225620ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Some simple functions for manipulating expressions symbolically" # Copyright (C) 2008-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from ufl.utils.sorting import sorted_by_key # FFC modules from ffc.log import error from collections import deque def split_expression(expression, format, operator, allow_split=False): """Split the expression at the given operator, return list. Do not split () or [] unless told to split (). 
This is to enable easy count of double operations which can be in (), but in [] we only have integer operations.""" # Get formats access = format["component"]("", [""]) group = format["grouping"]("") la = access[0] ra = access[1] lg = group[0] rg = group[1] # Split with given operator prods = deque(expression.split(operator)) new_prods = [prods.popleft()] while prods: # Continue while we still have list of potential products p is # the first string in the product p = prods.popleft() # If the number of "[" and "]" doesn't add up in the last # entry of the new_prods list, add p and see if it helps for # next iteration if new_prods[-1].count(la) != new_prods[-1].count(ra): new_prods[-1] = operator.join([new_prods[-1], p]) # If the number of "(" and ")" doesn't add up (and we didn't # allow a split) in the last entry of the new_prods list, add # p and see if it helps for next iteration elif new_prods[-1].count(lg) != new_prods[-1].count(rg) and not allow_split: new_prods[-1] = operator.join([new_prods[-1], p]) # If everything was fine, we can start a new entry in the # new_prods list else: new_prods.append(p) return new_prods def operation_count(expression, format): """This function returns the number of double operations in an expression. We do split () but not [] as we only have unsigned integer operations in []. """ # Note we do not subtract 1 for the additions, because there is # also an assignment involved adds = len(split_expression(expression, format, format["add"](["", ""]), True)) - 1 mults = len(split_expression(expression, format, format["multiply"](["", ""]), True)) - 1 return mults + adds def get_simple_variables(expression, format): """This function takes as argument an expression (preferably expanded): expression = "x*x + y*x + x*y*z" returns a list of products and a dictionary: prods = ["x*x", "y*x", "x*y*z"] variables = {variable: [num_occurences, [pos_in_prods]]} variables = {"x":[3, [0,1,2]], "y":[2, [1,2]], "z":[1, [2]]} """ # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) format_float = format["floating point"] prods = split_expression(expression, format, add) prods = [p for p in prods if p] variables = {} for i, p in enumerate(prods): # Only extract unique variables vrs = list(set(split_expression(p, format, mult))) for v in vrs: # Try to convert variable to floats and back (so '2' == '2.0' etc.) 
try: v = format_float(float(v)) except Exception: pass if v in variables: variables[v][0] += 1 variables[v][1].append(i) else: variables[v] = [1, [i]] return (prods, variables) def group_vars(expr, format): """Group variables in an expression, such that: "x + y + z + 2*y + 6*z" = "x + 3*y + 7*z" "x*x + x*x + 2*x + 3*x + 5" = "2.0*x*x + 5.0*x + 5" "x*y + y*x + 2*x*y + 3*x + 0*x + 5" = "5.0*x*y + 3.0*x + 5" "(y + z)*x + 5*(y + z)*x" = "6.0*(y + z)*x" "1/(x*x) + 2*1/(x*x) + std::sqrt(x) + 6*std::sqrt(x)" = "3*1/(x*x) + 7*std::sqrt(x)" """ # Get formats format_float = format["floating point"] add = format["add"](["", ""]) mult = format["multiply"](["", ""]) new_prods = {} # Get list of products prods = split_expression(expr, format, add) # Loop products and collect factors for p in prods: # Get list of variables, and do a basic sort vrs = split_expression(p, format, mult) factor = 1 new_var = [] # Try to multiply factor with variable, else variable must be # multiplied by factor later # If we don't have a variable, set factor to zero and break for v in vrs: if v: try: f = float(v) factor *= f except Exception: new_var.append(v) else: factor = 0 break # Create new variable that must be multiplied with factor. Add # this variable to dictionary, if it already exists add factor # to other factors new_var.sort() new_var = mult.join(new_var) if new_var in new_prods: new_prods[new_var] += factor else: new_prods[new_var] = factor # Reset products prods = [] for prod, f in sorted_by_key(new_prods): # If we have a product append mult of both if prod: # If factor is 1.0 we don't need it if f == 1.0: prods.append(prod) else: prods.append(mult.join([format_float(f), prod])) # If we just have a factor elif f: prods.append(format_float(f)) prods.sort() return add.join(prods) def reduction_possible(variables): """Find the variable that occurs in the most products, if more variables occur the same number of times and in the same products add them to list. """ # Find the variable that appears in the most products max_val = 1 max_var = "" max_vars = [] for key, val in sorted_by_key(variables): if max_val < val[0]: max_val = val[0] max_var = key # If we found a variable that appears in products multiple times, # check if other variables appear in the exact same products if max_var: for key, val in sorted_by_key(variables): # Check if we have more variables in the same products if max_val == val[0] and variables[max_var][1] == val[1]: max_vars.append(key) return max_vars def is_constant(variable, format, constants=[], from_is_constant=False): """Determine if a variable is constant or not. The function accepts an optional list of variables (loop indices) that will be regarded as constants for the given variable. If none are supplied it is assumed that all array accesses will result in a non-constant variable. 
v = 2.0, is constant v = Jinv_00*det, is constant v = w[0][1], is constant v = 2*w[0][1], is constant v = W0[ip], is constant if constants = ['ip'] else not v = P_t0[ip][j], is constant if constants = ['j','ip'] else not """ # Get formats access = format["array access"]("") add = format["add"](["", ""]) mult = format["multiply"](["", ""]) l = access[0] # noqa: E741 r = access[1] if not variable.count(l) == variable.count(r): print("variable: ", variable) error("Something wrong with variable") # Be sure that we don't have a compound variable = expand_operations(variable, format) prods = split_expression(variable, format, add) # Loop all products and variables and check if they're constant for p in prods: vrs = split_expression(p, format, mult) for v in vrs: # Check if each variable is constant, if just one fails # the entire variable is considered not to be constant const_var = False # If variable is in constants, well.... if v in constants: const_var = True continue # If we don't have any '[' or ']' we have a constant # (unless we're dealing with a call from this funtions) elif not v.count(l) and not from_is_constant: const_var = True continue # If we have an array access variable, see if the index is # regarded a constant elif v.count(l): # Check if access is OK ('[' is before ']') if not v.index(l) < v.index(r): print("variable: ", v) error("Something is wrong with the array access") # Auxiliary variables index = "" left = 0 inside = False indices = [] # Loop all characters in variable and find indices for c in v: # If character is ']' reduce left count if c == r: left -= 1 # If the '[' count has returned to zero, we have a # complete index if left == 0 and inside: const_index = False # Aux. var if index in constants: const_index = True try: int(index) const_index = True except Exception: # Last resort, call recursively if is_constant(index, format, constants, True): const_index = True pass # Append index and reset values if const_index: indices.append(const_index) else: indices = [False] break index = "" inside = False # If we're inside an access, add character to index if inside: index += c # If character is '[' increase the count, and # we're inside an access if c == l: inside = True left += 1 # If all indices were constant, the variable is constant if all(indices): const_var = True continue else: # If it is a float, it is also constant try: float(v) const_var = True continue except Exception: pass # I no tests resulted in a constant variable, there is no # need to continue if not const_var: return False # If all variables were constant return True return True def expand_operations(expression, format): """This function expands an expression and returns the value. 
E.g., ((x + y)) --> x + y 2*(x + y) --> 2*x + 2*y (x + y)*(x + y) --> x*x + y*y + 2*x*y z*(x*(y + 3) + 2) + 1 --> 1 + 2*z + x*y*z + x*z*3 z*((y + 3)*x + 2) + 1 --> 1 + 2*z + x*y*z + x*z*3""" # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) group = format["grouping"]("") l = group[0] # noqa: E741 r = group[1] # Check that we have the same number of left/right parenthesis in # expression if not expression.count(l) == expression.count(r): error("Number of left/right parenthesis do not match") # If we don't have any parenthesis, group variables and return if expression.count(l) == 0: return group_vars(expression, format) # Get list of additions adds = split_expression(expression, format, add) new_adds = [] # Loop additions and get products for a in adds: prods = sorted(split_expression(a, format, mult)) new_prods = [] # FIXME: Should we use deque here? expanded = [] for i, p in enumerate(prods): # If we have a group, expand inner expression if p[0] == l and p[-1] == r: # Add remaining products to new products and multiply # with all terms from expanded variable expanded_var = expand_operations(p[1:-1], format) expanded.append(split_expression(expanded_var, format, add)) # Else, just add variable to list of new products else: new_prods.append(p) if expanded: # Combine all expanded variables and multiply by factor while len(expanded) > 1: first = expanded.pop(0) second = expanded.pop(0) expanded = [[mult.join([i] + [j]) for i in first for j in second]] + expanded new_adds += [mult.join(new_prods + [e]) for e in expanded[0]] else: # Else, just multiply products and add to list of products new_adds.append(mult.join(new_prods)) # Group variables and return return group_vars(add.join(new_adds), format) def reduce_operations(expression, format): """This function reduces the number of opertions needed to compute a given expression. It looks for the variable that appears the most and groups terms containing this variable inside parenthesis. The function is called recursively until no further reductions are possible. "x + y + x" = 2*x + y "x*x + 2.0*x*y + y*y" = y*y + (2.0*y + x)*x, not (x + y)*(x + y) as it should be!! 
z*x*y + z*x*3 + 2*z + 1" = z*(x*(y + 3) + 2) + 1 """ # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) # Be sure that we have an expanded expression expression = expand_operations(expression, format) # Group variables to possibly reduce complexity expression = group_vars(expression, format) # Get variables and products prods, variables = get_simple_variables(expression, format) # Get the variables for which we can reduce the expression max_vars = reduction_possible(variables) new_prods = [] no_mult = [] max_vars.sort() # If we have variables that can be moved outside if max_vars: for p in prods: # Get the list of variables in current product li = sorted(split_expression(p, format, mult)) # If the list of products is the same as what we intend of # moving outside the parenthesis, leave it (because x + # x*x + x*y should be x + (x + y)*x NOT (1.0 + x + y)*x) if li == max_vars: no_mult.append(p) continue else: # Get list of all variables from max_vars that are in # li indices = [i for i in max_vars if i in li] # If not all were present add to list of terms that # shouldn't be multiplied with variables and continue if indices != max_vars: no_mult.append(p) continue # Remove variables that we are moving outside for v in max_vars: li.remove(v) # Add to list of products p = mult.join(li) new_prods.append(p) # Sort lists no_mult.sort() new_prods.sort() else: # No reduction possible return expression # Recursively reduce sums with and without reduced variable new_prods = add.join(new_prods) if new_prods: new_prods = reduce_operations(new_prods, format) if no_mult: no_mult = [reduce_operations(add.join(no_mult), format)] # Group new products if we have a sum g = new_prods len_new_prods = len(split_expression(new_prods, format, add)) if len_new_prods > 1: g = format["grouping"](new_prods) # The new expression is the sum of terms that couldn't be reduced # and terms that could be reduced multiplied by the reduction # e.g., expr = z + (x + y)*x new_expression = add.join(no_mult + [mult.join([g, mult.join(max_vars)])]) return new_expression def get_geo_terms(expression, geo_terms, offset, format): """This function returns a new expression where all geometry terms have been substituted with geometry declarations, these declarations are added to the geo_terms dictionary. 
""" # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) grouping = format["grouping"] group = grouping("") format_G = format["geometry tensor"] gl = group[0] gr = group[1] # Get the number of geometry declaration, possibly offset value num_geo = offset + len(geo_terms) new_prods = [] # Split the expression into products prods = split_expression(expression, format, add) consts = [] # Loop products and check if the variables are constant for p in prods: vrs = split_expression(p, format, mult) geos = [] # Generate geo code for constant coefficients e.g., w[0][5] new_vrs = [] for v in vrs: # If variable is a group, get the geometry terms and # update geo number if v[0] == gl and v[-1] == gr: v = get_geo_terms(v[1:-1], geo_terms, offset, format) num_geo = offset + len(geo_terms) # If we still have a sum, regroup if len(v.split(add)) > 1: v = grouping(v) # Append to new variables new_vrs.append(v) # If variable is constants, add to geo terms constant = is_constant(v, format) if constant: geos.append(v) # Update variable list vrs = new_vrs vrs.sort() # Sort geo and create geometry term geos.sort() geo = mult.join(geos) # Handle geometry term appropriately if geo: if geos != vrs: if len(geos) > 1: for g in geos: vrs.remove(g) if geo not in geo_terms: geo_terms[geo] = format_G + str(num_geo) num_geo += 1 vrs.append(geo_terms[geo]) new_prods.append(mult.join(vrs)) else: consts.append(mult.join(vrs)) else: new_prods.append(mult.join(vrs)) if consts: if len(consts) > 1: c = grouping(add.join(consts)) else: c = add.join(consts) if c not in geo_terms: geo_terms[c] = format_G + str(num_geo) num_geo += 1 consts = [geo_terms[c]] return add.join(new_prods + consts) def get_constants(expression, const_terms, format, constants=[]): """This function returns a new expression where all geometry terms have been substituted with geometry declarations, these declarations are added to the const_terms dictionary. """ # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) grouping = format["grouping"] format_G = format["geometry tensor"] + "".join(constants) # format["geometry tensor"] # Get the number of geometry declaration, possibly offset value num_geo = len(const_terms) new_prods = [] # Split the expression into products prods = split_expression(expression, format, add) consts = [] # Loop products and check if the variables are constant for p in prods: vrs = split_expression(p, format, mult) geos = [] # Generate geo code for constant coefficients e.g., w[0][5] new_vrs = [] for v in vrs: # If variable is constants, add to geo terms constant = is_constant(v, format, constants) if constant: geos.append(v) # Append to new variables new_vrs.append(v) # Update variable list vrs = new_vrs vrs.sort() # Sort geo and create geometry term geos.sort() geo = mult.join(geos) if geo: if geos != vrs: for g in geos: vrs.remove(g) if geo not in const_terms: const_terms[geo] = format_G + str(num_geo) num_geo += 1 vrs.append(const_terms[geo]) new_prods.append(mult.join(vrs)) else: consts.append(mult.join(vrs)) else: new_prods.append(mult.join(vrs)) if consts: if len(consts) > 1: c = grouping(add.join(consts)) else: c = add.join(consts) if c not in const_terms: const_terms[c] = format_G + str(num_geo) num_geo += 1 consts = [const_terms[c]] return add.join(new_prods + consts) def get_indices(variable, format, from_get_indices=False): """This function returns the indices of a given variable. 
E.g., P[0][j], returns ['j'] P[ip][k], returns ['ip','k'] P[ip][nzc0[j] + 3], returns ['ip','j'] w[0][j + 2] , returns [j]""" add = format["add"](["", ""]) mult = format["multiply"](["", ""]) format_access = format["array access"] access = format_access("") l = access[0] # noqa: E741 r = access[1] indices = [] # If there are no '[' in variable and self is the caller if not variable.count(l) and from_get_indices: adds = split_expression(variable, format, add) for a in adds: mults = split_expression(a, format, mult) for m in mults: try: float(m) except Exception: if m not in indices: indices.append(m) else: index = "" left = 0 inside = False # Loop all characters in variable and find indices for c in variable: # If character is ']' reduce left count if c == r: left -= 1 # If the '[' count has returned to zero, we have a # complete index if left == 0 and inside: try: eval(index) except Exception: indices += get_indices(index, format, True) index = "" inside = False # If we're inside an access, add character to index if inside: index += c # If character is '[' increase the count, and we're inside # an access if c == l: inside = True left += 1 return indices def get_variables(expression, variables, format, constants=[]): """This function returns a new expression where all geometry terms have been substituted with geometry declarations, these declarations are added to the const_terms dictionary. """ # Get formats add = format["add"](["", ""]) mult = format["multiply"](["", ""]) format_access = format["array access"] access = format_access("") format_F = format["function value"] l = access[0] # noqa: E741 # If we don't have any access operators in expression, # we don't have any variables if expression.count(l) == 0: return expression # Get the number of geometry declaration, possibly offset value num_var = len(variables) new_prods = [] used_vars = [] # Split the expression into products prods = split_expression(expression, format, add) # Loop products and check if the variables are constant for p in prods: vrs = split_expression(p, format, mult) # Variables with respect to the constants in list variables_of_interest = [] # Generate geo code for constant coefficients e.g., w[0][5] new_vrs = [] for v in vrs: # If we don't have any access operators, we don't have a # variable if v.count(l) == 0: new_vrs.append(v) continue # Check if we have a variable that depends on one of the # constants First check the easy way is_var = False for c in constants: if format_access(c) in v: is_var = True break if is_var: variables_of_interest.append(v) continue # Then check the hard way # Get list of indices indices = get_indices(v, format) depends = [True for c in constants if c in indices] if any(depends): variables_of_interest.append(v) else: new_vrs.append(v) variables_of_interest.sort() variables_of_interest = mult.join(variables_of_interest) # If we have some variables, declare new variable if needed # and add to list of variables if variables_of_interest: # If we didn't already declare this variable do so if variables_of_interest not in variables: variables[variables_of_interest] = format_F + str(num_var) num_var += 1 # Get mapped variable mv = variables[variables_of_interest] new_vrs.append(mv) if mv not in used_vars: used_vars.append(mv) # Sort variables and add to list of products new_vrs.sort() new_prods.append(mult.join(new_vrs)) # Sort list of products and return the sum new_prods.sort() return (add.join(new_prods), used_vars) 
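

# A minimal, hedged usage sketch (not part of the original module) showing how
# expand_operations, reduce_operations and operation_count fit together.  It
# assumes FFC's 'format' dictionary from ffc.quadrature.cpp has already been
# fully initialised (the float-formatting entries such as "floating point" and
# "epsilon" are normally set up before code generation); the expression string
# below is made up for illustration.
if __name__ == "__main__":
    from ffc.quadrature.cpp import format  # assumed to be initialised by FFC

    example = "z*x*y + z*x*3 + 2*z + 1"
    expanded = expand_operations(example, format)
    reduced = reduce_operations(example, format)
    # Per the docstrings above: expanded is roughly "1 + 2*z + 3*x*z + x*y*z"
    # and reduced roughly "z*(x*(y + 3) + 2) + 1", up to float formatting.
    # Reducing should never increase the floating point operation count.
    print(operation_count(expanded, format), operation_count(reduced, format))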
ffc-2019.1.0.post0/ffc/quadrature/sumobj.py000066400000000000000000000515001346026536000203400ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a class to represent a sum." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from ufl.utils.sorting import sorted_by_key # FFC modules. from ffc.log import error from ffc.quadrature.cpp import format # FFC quadrature modules. from .symbolics import create_float from .symbolics import create_product from .symbolics import create_sum from .symbolics import create_fraction from .expr import Expr from .floatvalue import FloatValue class Sum(Expr): __slots__ = ("vrs", "_expanded", "_reduced") def __init__(self, variables): """Initialise a Sum object, it derives from Expr and contains the additional variables: vrs - list, a list of variables. _expanded - object, an expanded object of self, e.g., self = 'x + x'-> self._expanded = 2*x (a product). _reduced - object, a reduced object of self, e.g., self = '2*x + x*y'-> self._reduced = x*(2 + y) (a product). NOTE: self._prec = 3.""" # Initialise value, list of variables, class, expanded and reduced. self.val = 1.0 self.vrs = [] self._prec = 3 self._expanded = False self._reduced = False # Get epsilon EPS = format["epsilon"] # Process variables if we have any. if variables: # Loop variables and remove nested Sums and collect all floats in # 1 variable. We don't collect [x, x, x] into 3*x to avoid creating # objects, instead we do this when expanding the object. float_val = 0.0 for var in variables: # Skip zero terms. if abs(var.val) < EPS: continue elif var._prec == 0: # float float_val += var.val continue elif var._prec == 3: # sum # Loop and handle variables of nested sum. for v in var.vrs: if abs(v.val) < EPS: continue elif v._prec == 0: # float float_val += v.val continue self.vrs.append(v) continue self.vrs.append(var) # Only create new float if value is different from 0. if abs(float_val) > EPS: self.vrs.append(create_float(float_val)) # If we don't have any variables the sum is zero. else: self.val = 0.0 self.vrs = [create_float(0)] # Handle zero value. if not self.vrs: self.val = 0.0 self.vrs = [create_float(0)] # Type is equal to the smallest type in both lists. self.t = min([v.t for v in self.vrs]) # Sort variables, (for representation). self.vrs.sort() # Compute the representation now, such that we can use it directly # in the __eq__ and __ne__ methods (improves performance a bit, but # only when objects are cached). self._repr = "Sum([%s])" % ", ".join([v._repr for v in self.vrs]) # Use repr as hash value. self._hash = hash(self._repr) # Print functions. def __str__(self): "Simple string representation which will appear in the generated code." # First add all the positive variables using plus, then add all # negative variables. 
s = format["add"]([str(v) for v in self.vrs if not v.val < 0]) +\ "".join([str(v) for v in self.vrs if v.val < 0]) # Group only if we have more that one variable. if len(self.vrs) > 1: return format["grouping"](s) return s # Binary operators. def __add__(self, other): "Addition by other objects." # Return a new sum return create_sum([self, other]) def __sub__(self, other): "Subtract other objects." # Return a new sum return create_sum([self, create_product([FloatValue(-1), other])]) def __mul__(self, other): "Multiplication by other objects." # If product will be zero. if self.val == 0.0 or other.val == 0.0: return create_float(0) # NOTE: We expect expanded sub-expressions with no nested operators. # Create list of new products using the '*' operator # TODO: Is this efficient? new_prods = [v * other for v in self.vrs] # Remove zero valued terms. # TODO: Can this still happen? new_prods = [v for v in new_prods if v.val != 0.0] # Create new sum. if not new_prods: return create_float(0) elif len(new_prods) > 1: # Expand sum to collect terms. return create_sum(new_prods).expand() # TODO: Is it necessary to call expand? return new_prods[0].expand() def __truediv__(self, other): "Division by other objects." # If division is illegal (this should definitely not happen). if other.val == 0.0: error("Division by zero.") # If fraction will be zero. if self.val == 0.0: return create_float(0) # NOTE: assuming that we get expanded variables. # If other is a Sum we can only return a fraction. # TODO: We could check for equal sums if Sum.__eq__ could be trusted. # As it is now (2*x + y) == (3*x + y), which works for the other things I do. # NOTE: Expect that other is expanded i.e., x + x -> 2*x which can be handled. # TODO: Fix (1 + y) / (x + x*y) -> 1 / x # Will this be handled when reducing operations on a fraction? if other._prec == 3: # sum return create_fraction(self, other) # NOTE: We expect expanded sub-expressions with no nested operators. # Create list of new products using the '*' operator. # TODO: Is this efficient? new_fracs = [v / other for v in self.vrs] # Remove zero valued terms. # TODO: Can this still happen? new_fracs = [v for v in new_fracs if v.val != 0.0] # Create new sum. # TODO: No need to call expand here, using the '/' operator should have # taken care of this. if not new_fracs: return create_float(0) elif len(new_fracs) > 1: return create_sum(new_fracs) return new_fracs[0] __div__ = __truediv__ # Public functions. def expand(self): "Expand all members of the sum." # If sum is already expanded, simply return the expansion. if self._expanded: return self._expanded # TODO: This function might need some optimisation. # Sort variables into symbols, products and fractions (add # floats directly to new list, will be handled later). Add # fractions if possible else add to list. new_variables = [] syms = [] prods = [] # TODO: Rather than using '+', would it be more efficient to # collect the terms first? for var in self.vrs: exp = var.expand() # TODO: Should we also group fractions, or put this in a # separate function? if exp._prec in (0, 4): # float or frac new_variables.append(exp) elif exp._prec == 1: # sym syms.append(exp) elif exp._prec == 2: # prod prods.append(exp) elif exp._prec == 3: # sum for v in exp.vrs: if v._prec in (0, 4): # float or frac new_variables.append(v) elif v._prec == 1: # sym syms.append(v) elif v._prec == 2: # prod prods.append(v) # Sort all variables in groups: [2*x, -7*x], [(x + y), (2*x + # 4*y)] etc. 
First handle product in order to add symbols if # possible. prod_groups = {} for v in prods: if v.get_vrs() in prod_groups: prod_groups[v.get_vrs()] += v else: prod_groups[v.get_vrs()] = v sym_groups = {} # Loop symbols and add to appropriate groups. for v in syms: # First try to add to a product group. if (v,) in prod_groups: prod_groups[(v,)] += v # Then to other symbols. elif v in sym_groups: sym_groups[v] += v # Create a new entry in the symbols group. else: sym_groups[v] = v # Loop groups and add to new variable list. for k, v in sorted_by_key(sym_groups): new_variables.append(v) for k, v in sorted_by_key(prod_groups): new_variables.append(v) if len(new_variables) > 1: # Return new sum (will remove multiple instances of floats # during construction). self._expanded = create_sum(sorted(new_variables)) return self._expanded elif new_variables: # If we just have one variable left, return it since it is # already expanded. self._expanded = new_variables[0] return self._expanded error("Where did the variables go?") def get_unique_vars(self, var_type): "Get unique variables (Symbols) as a set." # Loop all variables of self update the set. var = set() for v in self.vrs: var.update(v.get_unique_vars(var_type)) return var def get_var_occurrences(self): """Determine the number of minimum number of times all variables occurs in the expression. Returns a dictionary of variables and the number of times they occur. x*x + x returns {x:1}, x + y returns {}. """ # NOTE: This function is only used if the numerator of a # Fraction is a Sum. # Get occurrences in first expression. d0 = self.vrs[0].get_var_occurrences() for var in self.vrs[1:]: # Get the occurrences. d = var.get_var_occurrences() # Delete those variables in d0 that are not in d. for k, v in list(d0.items()): if k not in d: del d0[k] # Set the number of occurrences equal to the smallest # number. for k, v in sorted_by_key(d): if k in d0: d0[k] = min(d0[k], v) return d0 def ops(self): "Return number of operations to compute value of sum." # Subtract one operation as it only takes n-1 ops to sum n # members. op = -1 # Add the number of operations from sub-expressions. for v in self.vrs: # +1 for the +/- symbol. op += v.ops() + 1 return op def reduce_ops(self): "Reduce the number of operations needed to evaluate the sum." if self._reduced: return self._reduced # NOTE: Assuming that sum has already been expanded. # TODO: Add test for this and handle case if it is not. # TODO: The entire function looks expensive, can it be optimised? # TODO: It is not necessary to create a new Sum if we do not # have more than one Fraction. # First group all fractions in the sum. new_sum = _group_fractions(self) if new_sum._prec != 3: # sum self._reduced = new_sum.reduce_ops() return self._reduced # Loop all variables of the sum and collect the number of # common variables that can be factored out. common_vars = {} for var in new_sum.vrs: # Get dictonary of occurrences and add the variable and # the number of occurrences to common dictionary. for k, v in sorted_by_key(var.get_var_occurrences()): if k in common_vars: common_vars[k].append((v, var)) else: common_vars[k] = [(v, var)] # Determine the maximum reduction for each variable sorted as: # {(x*x*y, x*y*z, 2*y):[2, [y]]}. terms_reductions = {} for k, v in sorted_by_key(common_vars): # If the number of expressions that can be reduced is only # one there is nothing to be done. if len(v) > 1: # TODO: Is there a better way to compute the reduction # gain and the number of occurrences we should remove? 
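                # For instance, if 'k' occurs [2, 1, 1] times in three
                # candidate terms, factoring out a single occurrence applies
                # to all three terms and saves an estimated (3 - 1)*1 = 2
                # operations, whereas factoring out two occurrences applies
                # to only one term and saves (1 - 1)*2 = 0, so one occurrence
                # is the favourable choice.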
# Get the list of number of occurences of 'k' in # expressions in 'v'. occurrences = [t[0] for t in v] # Determine the favorable number of occurences and an # estimate of the maximum reduction for current # variable. fav_occur = 0 reduc = 0 for i in set(occurrences): # Get number of terms that has a number of # occcurences equal to or higher than the current # number. num_terms = len([o for o in occurrences if o >= i]) # An estimate of the reduction in operations is: # (number_of_terms - 1) * number_occurrences. new_reduc = (num_terms - 1) * i if new_reduc > reduc: reduc = new_reduc fav_occur = i # Extract the terms of v where the number of # occurrences is equal to or higher than the most # favorable number of occurrences. terms = sorted([t[1] for t in v if t[0] >= fav_occur]) # We need to reduce the expression with the favorable # number of occurrences of the current variable. red_vars = [k] * fav_occur # If the list of terms is already present in the # dictionary, add the reduction count and the # variables. if tuple(terms) in terms_reductions: terms_reductions[tuple(terms)][0] += reduc terms_reductions[tuple(terms)][1] += red_vars else: terms_reductions[tuple(terms)] = [reduc, red_vars] if terms_reductions: # Invert dictionary of terms. reductions_terms = dict([((v[0], tuple(v[1])), k) for k, v in terms_reductions.items()]) # Create a sorted list of those variables that give the # highest reduction. sorted_reduc_var = sorted(reductions_terms.keys(), reverse=True) # Create a new dictionary of terms that should be reduced, # if some terms overlap, only pick the one which give the # highest reduction to ensure that a*x*x + b*x*x + x*x*y + # 2*y -> x*x*(a + b + y) + 2*y NOT x*x*(a + b) + y*(2 + # x*x). reduction_vars = {} rejections = {} for var in sorted_reduc_var: terms = reductions_terms[var] if _overlap(terms, reduction_vars) or _overlap(terms, rejections): rejections[var[1]] = terms else: reduction_vars[var[1]] = terms # Reduce each set of terms with appropriate variables. all_reduced_terms = [] reduced_expressions = [] for reduc_var, terms in sorted(reduction_vars.items()): # Add current terms to list of all variables that have # been reduced. all_reduced_terms += list(terms) # Create variable that we will use to reduce the terms. reduction_var = None if len(reduc_var) > 1: reduction_var = create_product(list(reduc_var)) else: reduction_var = reduc_var[0] # Reduce all terms that need to be reduced. reduced_terms = [t.reduce_var(reduction_var) for t in terms] # Create reduced expression. reduced_expr = None if len(reduced_terms) > 1: # Try to reduce the reduced terms further. reduced_expr = create_product([reduction_var, create_sum(reduced_terms).reduce_ops()]) else: reduced_expr = create_product(reduction_var, reduced_terms[0]) # Add reduced expression to list of reduced # expressions. reduced_expressions.append(reduced_expr) # Create list of terms that should not be reduced. dont_reduce_terms = [] for v in new_sum.vrs: if v not in all_reduced_terms: dont_reduce_terms.append(v) # Create expression from terms that was not reduced. not_reduced_expr = None if dont_reduce_terms and len(dont_reduce_terms) > 1: # Try to reduce the remaining terms that were not # reduced at first. not_reduced_expr = create_sum(dont_reduce_terms).reduce_ops() elif dont_reduce_terms: not_reduced_expr = dont_reduce_terms[0] # Create return expression. 
if not_reduced_expr: self._reduced = create_sum(reduced_expressions + [not_reduced_expr]) elif len(reduced_expressions) > 1: self._reduced = create_sum(reduced_expressions) else: self._reduced = reduced_expressions[0] return self._reduced # Return self if we don't have any variables for which we can # reduce the sum. self._reduced = self return self._reduced def reduce_vartype(self, var_type): """Reduce expression with given var_type. It returns a list of tuples [(found, remain)], where 'found' is an expression that only has variables of type == var_type. If no variables are found, found=(). The 'remain' part contains the leftover after division by 'found' such that: self = Sum([f*r for f,r in self.reduce_vartype(Type)]). """ found = {} # Loop members and reduce them by vartype. for v in self.vrs: for f, r in v.reduce_vartype(var_type): if f in found: found[f].append(r) else: found[f] = [r] # Create the return value. returns = [] for f, r in sorted_by_key(found): if len(r) > 1: # Use expand to group expressions. r = create_sum(r) elif r: r = r.pop() returns.append((f, r)) return sorted(returns) def _overlap(l, d): "Check if a member in list l is in the value (list) of dictionary d." for m in l: for k, v in sorted_by_key(d): if m in v: return True return False def _group_fractions(expr): "Group Fractions in a Sum: 2/x + y/x -> (2 + y)/x." if expr._prec != 3: # sum return expr # Loop variables and group those with common denominator. not_frac = [] fracs = {} for v in expr.vrs: if v._prec == 4: # frac if v.denom in fracs: fracs[v.denom][1].append(v.num) fracs[v.denom][0] += 1 else: fracs[v.denom] = [1, [v.num], v] continue not_frac.append(v) if not fracs: return expr # Loop all fractions and create new ones using an appropriate # numerator. for k, v in sorted(fracs.items()): if v[0] > 1: # TODO: Is it possible to avoid expanding the Sum? I # think we have to because x/a + 2*x/a -> 3*x/a. not_frac.append(create_fraction(create_sum(v[1]).expand(), k)) else: not_frac.append(v[2]) # Create return value. if len(not_frac) > 1: return create_sum(not_frac) return not_frac[0] ffc-2019.1.0.post0/ffc/quadrature/symbol.py000066400000000000000000000204571346026536000203550ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file implements a class to represent a symbol." # Copyright (C) 2009-2011 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2009-07-12 # Last changed: 2011-06-28 # FFC modules. from ffc.log import error # FFC quadrature modules. 
from .symbolics import type_to_string from .symbolics import create_float from .symbolics import create_product from .symbolics import create_sum from .symbolics import create_fraction from .expr import Expr class Symbol(Expr): __slots__ = ("v", "base_expr", "base_op", "exp", "cond") def __init__(self, variable, symbol_type, base_expr=None, base_op=0): """Initialise a Symbols object, it derives from Expr and contains the additional variables: v - string, variable name base_expr - Other expression type like 'x*y + z' base_op - number of operations for the symbol itself if it's a math operation like std::cos(.) -> base_op = 1. NOTE: self._prec = 1.""" # Dummy value, a symbol is always one. self.val = 1.0 # Initialise variable, type and class. self.v = variable self.t = symbol_type self._prec = 1 # Needed for symbols like std::cos(x*y + z), # where base_expr = x*y + z. # ops = base_expr.ops() + base_ops = 2 + 1 = 3 self.base_expr = base_expr self.base_op = base_op # If type of the base_expr is lower than the given symbol_type change type. # TODO: Should we raise an error here? Or simply require that one # initalise the symbol by Symbol('std::cos(x*y)', (x*y).t, x*y, 1). if base_expr and base_expr.t < self.t: self.t = base_expr.t # Compute the representation now, such that we can use it directly # in the __eq__ and __ne__ methods (improves performance a bit, but # only when objects are cached). if self.base_expr: # and self.exp is None: self._repr = "Symbol('%s', %s, %s, %d)" % (self.v, type_to_string[self.t], self.base_expr._repr, self.base_op) else: self._repr = "Symbol('%s', %s)" % (self.v, type_to_string[self.t]) # Use repr as hash value. self._hash = hash(self._repr) # Print functions. def __str__(self): "Simple string representation which will appear in the generated code." # print "sym str: ", self.v return self.v # Binary operators. def __add__(self, other): "Addition by other objects." # NOTE: We expect expanded objects # symbols, if other is a product, try to let product handle the addition. # Returns x + x -> 2*x, x + 2*x -> 3*x. if self._repr == other._repr: return create_product([create_float(2), self]) elif other._prec == 2: # prod return other.__add__(self) return create_sum([self, other]) def __sub__(self, other): "Subtract other objects." # NOTE: We expect expanded objects # symbols, if other is a product, try to let product handle the addition. if self._repr == other._repr: return create_float(0) elif other._prec == 2: # prod if other.get_vrs() == (self,): return create_product([create_float(1.0 - other.val), self]).expand() return create_sum([self, create_product([create_float(-1), other])]) def __mul__(self, other): "Multiplication by other objects." # NOTE: We assume expanded objects. # If product will be zero. if self.val == 0.0 or other.val == 0.0: return create_float(0) # If other is Sum or Fraction let them handle the multiply. if other._prec in (3, 4): # sum or frac return other.__mul__(self) # If other is a float or symbol, create simple product. if other._prec in (0, 1): # float or sym return create_product([self, other]) # Else add variables from product. return create_product([self] + other.vrs) def __truediv__(self, other): "Division by other objects." # NOTE: We assume expanded objects. # If division is illegal (this should definitely not happen). if other.val == 0.0: error("Division by zero.") # Return 1 if the two symbols are equal. if self._repr == other._repr: return create_float(1) # If other is a Sum we can only return a fraction. 
# TODO: Refine this later such that x / (x + x*y) -> 1 / (1 + y)? if other._prec == 3: # sum return create_fraction(self, other) # Handle division by FloatValue, Symbol, Product and Fraction. # Create numerator and list for denominator. num = [self] denom = [] # Add floatvalue, symbol and products to the list of denominators. if other._prec in (0, 1): # float or sym denom = [other] elif other._prec == 2: # prod # Need copies, so can't just do denom = other.vrs. denom += other.vrs # fraction. else: # TODO: Should we also support division by fraction for generality? # It should not be needed by this module. error("Did not expected to divide by fraction.") # Remove one instance of self in numerator and denominator if # present in denominator i.e., x/(x*y) --> 1/y. if self in denom: denom.remove(self) num.remove(self) # Loop entries in denominator and move float value to numerator. for d in denom: # Add the inverse of a float to the numerator, remove it from # the denominator and continue. if d._prec == 0: # float num.append(create_float(1.0 / other.val)) denom.remove(d) continue # Create appropriate return value depending on remaining data. # Can only be for x / (2*y*z) -> 0.5*x / (y*z). if len(num) > 1: num = create_product(num) # x / (y*z) -> x/(y*z), elif num: num = num[0] # else x / (x*y) -> 1/y. else: num = create_float(1) # If we have a long denominator, create product and fraction. if len(denom) > 1: return create_fraction(num, create_product(denom)) # If we do have a denominator, but only one variable don't create a # product, just return a fraction using the variable as denominator. elif denom: return create_fraction(num, denom[0]) # If we don't have any donominator left, return the numerator. # x / 2.0 -> 0.5*x. return num.expand() __div__ = __truediv__ # Public functions. def get_unique_vars(self, var_type): "Get unique variables (Symbols) as a set." # Return self if type matches, also return base expression variables. s = set() if self.t == var_type: s.add(self) if self.base_expr: s.update(self.base_expr.get_unique_vars(var_type)) return s def get_var_occurrences(self): """Determine the number of times all variables occurs in the expression. Returns a dictionary of variables and the number of times they occur.""" # There is only one symbol. return {self: 1} def ops(self): "Returning the number of floating point operation for symbol." # Get base ops, typically 1 for sin() and then add the operations # for the base (sin(2*x + 1)) --> 2 + 1. if self.base_expr: return self.base_op + self.base_expr.ops() return self.base_op ffc-2019.1.0.post0/ffc/quadrature/symbolics.py000066400000000000000000000200411346026536000210410ustar00rootroot00000000000000# -*- coding: utf-8 -*- "This file contains functions to optimise the code generated for quadrature representation." # Copyright (C) 2009-2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
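
# A hedged usage sketch (not part of the original sources): the factory
# functions defined below are typically combined along the lines of
#
#     x = create_symbol("x", IP)
#     y = create_symbol("y", GEO)
#     e = create_sum([create_product([x, x, y]), create_product([x, y])])
#     e.expand().reduce_ops()      # roughly x*y*(1 + x), up to ordering
#
# This assumes FFC's 'format' dictionary has been initialised first, since the
# symbolic classes rely on its "epsilon" and float-formatting entries.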
from ufl.utils.sorting import sorted_by_key # FFC modules from ffc.log import error from ffc.quadrature.cpp import format # TODO: Use proper errors, not just RuntimeError. # TODO: Change all if value == 0.0 to something more safe. # Some basic variables. BASIS = 0 IP = 1 GEO = 2 CONST = 3 type_to_string = {BASIS: "BASIS", IP: "IP", GEO: "GEO", CONST: "CONST"} # Functions and dictionaries for cache implementation. # Increases speed and should also reduce memory consumption. _float_cache = {} def create_float(val): if val in _float_cache: return _float_cache[val] float_val = FloatValue(val) _float_cache[val] = float_val return float_val _symbol_cache = {} def create_symbol(variable, symbol_type, base_expr=None, base_op=0): key = (variable, symbol_type, base_expr, base_op) if key in _symbol_cache: return _symbol_cache[key] symbol = Symbol(variable, symbol_type, base_expr, base_op) _symbol_cache[key] = symbol return symbol _product_cache = {} def create_product(variables): # NOTE: If I switch on the sorted line, it might be possible to find more # variables in the cache, but it adds some overhead so I don't think it # pays off. The member variables are also sorted in the classes # (Product and Sum) so the list 'variables' is probably already sorted. key = tuple(variables) if key in _product_cache: return _product_cache[key] product = Product(key) _product_cache[key] = product return product _sum_cache = {} def create_sum(variables): # NOTE: If I switch on the sorted line, it might be possible to # find more variables in the cache, but it adds some overhead so I # don't think it pays off. The member variables are also sorted in # the classes (Product and Sum) so the list 'variables' is # probably already sorted. key = tuple(variables) if key in _sum_cache: return _sum_cache[key] s = Sum(key) _sum_cache[key] = s return s _fraction_cache = {} def create_fraction(num, denom): key = (num, denom) if key in _fraction_cache: return _fraction_cache[key] fraction = Fraction(num, denom) _fraction_cache[key] = fraction return fraction def generate_aux_constants(constant_decl, name, var_type, print_ops=False): "A helper tool to generate code for constant declarations." format_comment = format["comment"] code = [] append = code.append ops = 0 for num, expr in sorted((v, k) for k, v in sorted_by_key(constant_decl)): # Expand and reduce expression (If we don't already get # reduced expressions.) expr = expr.expand().reduce_ops() if print_ops: op = expr.ops() ops += op append(format_comment("Number of operations: %d" % op)) append(var_type(name(num), str(expr))) append("") else: ops += expr.ops() append(var_type(name(num), str(expr))) return (ops, code) def optimise_code(expr, ip_consts, geo_consts, trans_set): """Optimise a given expression with respect to, basis functions, integration points variables and geometric constants. The function will update the dictionaries ip_const and geo_consts with new declarations and update the trans_set (used transformations). """ format_G = format["geometry constant"] format_I = format["ip constant"] trans_set_update = trans_set.update # Return constant symbol if expanded value is zero. exp_expr = expr.expand() if exp_expr.val == 0.0: return create_float(0) # Reduce expression with respect to basis function variable. basis_expressions = exp_expr.reduce_vartype(BASIS) # If we had a product instance we'll get a tuple back so embed in # list. 
if not isinstance(basis_expressions, list): basis_expressions = [basis_expressions] basis_vals = [] # Process each instance of basis functions. for basis, ip_expr in basis_expressions: # Get the basis and the ip expression. # If we have no basis (like functionals) create a const. if not basis: basis = create_float(1) # If the ip expression doesn't contain any operations skip # remainder if not ip_expr or ip_expr.val == 0.0: basis_vals.append(basis) continue if not ip_expr.ops() > 0: basis_vals.append(create_product([basis, ip_expr])) continue # Reduce the ip expressions with respect to IP variables. ip_expressions = ip_expr.expand().reduce_vartype(IP) # If we had a product instance we'll get a tuple back so embed in list. if not isinstance(ip_expressions, list): ip_expressions = [ip_expressions] ip_vals = [] # Loop ip expressions. for ip in sorted(ip_expressions): ip_dec, geo = ip # Update transformation set with those values that might # be embedded in IP terms. if ip_dec and ip_dec.val != 0.0: trans_set_update([str(x) for x in ip_dec.get_unique_vars(GEO)]) # Append and continue if we did not have any geo values. if not geo or geo.val == 0.0: if ip_dec and ip_dec.val != 0.0: ip_vals.append(ip_dec) continue # Update the transformation set with the variables in the # geo term. trans_set_update([str(x) for x in geo.get_unique_vars(GEO)]) # Only declare auxiliary geo terms if we can save # operations. if geo.ops() > 0: # If the geo term is not in the dictionary append it. if geo not in geo_consts: geo_consts[geo] = len(geo_consts) # Substitute geometry expression. geo = create_symbol(format_G(geo_consts[geo]), GEO) # If we did not have any ip_declarations use geo, else # create a product and append to the list of ip_values. if not ip_dec or ip_dec.val == 0.0: ip_dec = geo else: ip_dec = create_product([ip_dec, geo]) ip_vals.append(ip_dec) # Create sum of ip expressions to multiply by basis. if len(ip_vals) > 1: ip_expr = create_sum(ip_vals) elif ip_vals: ip_expr = ip_vals.pop() # If we can save operations by declaring it as a constant do # so, if it is not in IP dictionary, add it and use new name. if ip_expr.ops() > 0 and ip_expr.val != 0.0: if ip_expr not in ip_consts: ip_consts[ip_expr] = len(ip_consts) # Substitute ip expression. ip_expr = create_symbol(format_I(ip_consts[ip_expr]), IP) # Multiply by basis and append to basis vals. basis_vals.append(create_product([basis, ip_expr])) # Return (possible) sum of basis values. if len(basis_vals) > 1: return create_sum(basis_vals) elif basis_vals: return basis_vals[0] # Where did the values go? error("Values disappeared.") from .floatvalue import FloatValue from .symbol import Symbol from .product import Product from .sumobj import Sum from .fraction import Fraction ffc-2019.1.0.post0/ffc/quadrature/tabulate_basis.py000066400000000000000000000304341346026536000220260ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2014 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Anders Logg, 2009, 2015 # Modified by Martin Sandve Alnæs, 2013-2014 "Quadrature representation class." import numpy import itertools # UFL modules from ufl.cell import num_cell_entities from ufl.classes import ReferenceGrad, Grad, CellAvg, FacetAvg from ufl.algorithms import extract_unique_elements, extract_type, extract_elements from ufl import custom_integral_types # FFC modules from ffc.log import error from ffc.utils import product, insert_nested_dict from ffc.fiatinterface import create_element from ffc.fiatinterface import map_facet_points, reference_cell_vertices from ffc.representationutils import create_quadrature_points_and_weights from ffc.representationutils import integral_type_to_entity_dim def _find_element_derivatives(expr, elements, element_replace_map): "Find the highest derivatives of given elements in expression." # TODO: This is most likely not the best way to get the highest # derivative of an element, but it works! # Initialise dictionary of elements and the number of derivatives. # (Note that elements are already mapped through the # element_replace_map) num_derivatives = dict((e, 0) for e in elements) # Extract the derivatives from the integral. derivatives = set(extract_type(expr, Grad)) | set(extract_type(expr, ReferenceGrad)) # Loop derivatives and extract multiple derivatives. for d in list(derivatives): # After UFL has evaluated derivatives, only one element can be # found inside any single Grad expression elem, = extract_elements(d.ufl_operands[0]) elem = element_replace_map[elem] # Set the number of derivatives to the highest value # encountered so far. num_derivatives[elem] = max(num_derivatives[elem], len(extract_type(d, Grad)), len(extract_type(d, ReferenceGrad))) return num_derivatives def _tabulate_empty_psi_table(tdim, deriv_order, element): "Tabulate psi table when there are no points (custom integrals)." # All combinations of partial derivatives up to given order gdim = tdim # hack, consider passing gdim variable here derivs = [d for d in itertools.product(*(gdim*[list(range(0, deriv_order + 1))]))] derivs = [d for d in derivs if sum(d) <= deriv_order] # Return empty table table = {} for d in derivs: value_shape = element.value_shape() if value_shape == (): table[d] = [[]] else: value_size = product(value_shape) table[d] = [[[] for c in range(value_size)]] # Let entity be 0 even for non-cells, this is for # custom integrals where we don't need tables to # contain multiple entitites entity = 0 return {entity: table} def _map_entity_points(cellname, tdim, points, entity_dim, entity): # Not sure if this is useful anywhere else than in _tabulate_psi_table! if entity_dim == tdim: assert entity == 0 return points elif entity_dim == tdim-1: return map_facet_points(points, entity, cellname) elif entity_dim == 0: return (reference_cell_vertices(cellname)[entity],) def _tabulate_psi_table(integral_type, cellname, tdim, element, deriv_order, points): "Tabulate psi table for different integral types." 
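    # Note on the return value (illustrative sketch, not executed): the table
    # is a dict mapping each local entity index to the FIAT tabulation for
    # that entity, roughly
    #
    #     psi_table[entity][derivative_tuple] -> basis values at the mapped
    #                                            quadrature points
    #
    # For custom integrals, where 'points' is None, an empty table with a
    # single entity 0 is returned instead (see _tabulate_empty_psi_table).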
# Handle case when list of points is empty if points is None: return _tabulate_empty_psi_table(tdim, deriv_order, element) # Otherwise, call FIAT to tabulate entity_dim = integral_type_to_entity_dim(integral_type, tdim) num_entities = num_cell_entities[cellname][entity_dim] psi_table = {} for entity in range(num_entities): entity_points = _map_entity_points(cellname, tdim, points, entity_dim, entity) psi_table[entity] = element.tabulate(deriv_order, entity_points) return psi_table # MSA: This function is in serious need for some refactoring and # splitting up. Or perhaps I should just add a new # implementation for uflacs, but I'd rather not have two versions # to maintain. def tabulate_basis(sorted_integrals, form_data, itg_data): "Tabulate the basisfunctions and derivatives." # MER: Note to newbies: this code assumes that each integral in # the dictionary of sorted_integrals that enters here, has a # unique number of quadrature points ... # Initialise return values. quadrature_rules = {} psi_tables = {} integrals = {} avg_elements = {"cell": [], "facet": []} # Get some useful variables in short form integral_type = itg_data.integral_type cell = itg_data.domain.ufl_cell() cellname = cell.cellname() tdim = itg_data.domain.topological_dimension() entity_dim = integral_type_to_entity_dim(integral_type, tdim) num_entities = num_cell_entities[cellname][entity_dim] # Create canonical ordering of quadrature rules rules = sorted(sorted_integrals.keys()) # Loop the quadrature points and tabulate the basis values. for rule in rules: scheme, degree = rule # --------- Creating quadrature rule # Make quadrature rule and get points and weights. (points, weights) = create_quadrature_points_and_weights(integral_type, cell, degree, scheme) # The TOTAL number of weights/points num_points = None if weights is None else len(weights) # Add points and rules to dictionary if num_points in quadrature_rules: error("This number of points is already present in the weight table: " + repr(quadrature_rules)) quadrature_rules[num_points] = (weights, points) # --------- Store integral # Add the integral with the number of points as a key to the # return integrals. integral = sorted_integrals[rule] if num_points in integrals: error("This number of points is already present in the integrals: " + repr(integrals)) integrals[num_points] = integral # --------- Analyse UFL elements in integral # Get all unique elements in integral. ufl_elements = [form_data.element_replace_map[e] for e in extract_unique_elements(integral)] # Insert elements for x and J domain = integral.ufl_domain() # FIXME: For all domains to be sure? Better to rewrite though. 
x_element = domain.ufl_coordinate_element() if x_element not in ufl_elements: if integral_type in custom_integral_types: # FIXME: Not yet implemented, in progress # warning("Vector elements not yet supported in custom integrals so element for coordinate function x will not be generated.") pass else: ufl_elements.append(x_element) # Find all CellAvg and FacetAvg in integrals and extract # elements for avg, AvgType in (("cell", CellAvg), ("facet", FacetAvg)): expressions = extract_type(integral, AvgType) avg_elements[avg] = [form_data.element_replace_map[e] for expr in expressions for e in extract_unique_elements(expr)] # Find the highest number of derivatives needed for each element num_derivatives = _find_element_derivatives(integral.integrand(), ufl_elements, form_data.element_replace_map) # Need at least 1 for the Jacobian num_derivatives[x_element] = max(num_derivatives.get(x_element, 0), 1) # --------- Evaluate FIAT elements in quadrature points and # --------- store in tables # Add the number of points to the psi tables dictionary if num_points in psi_tables: error("This number of points is already present in the psi table: " + repr(psi_tables)) psi_tables[num_points] = {} # Loop FIAT elements and tabulate basis as usual. for ufl_element in ufl_elements: fiat_element = create_element(ufl_element) # Tabulate table of basis functions and derivatives in # points psi_table = _tabulate_psi_table(integral_type, cellname, tdim, fiat_element, num_derivatives[ufl_element], points) # Insert table into dictionary based on UFL elements # (None=not averaged) avg = None psi_tables[num_points][ufl_element] = { avg: psi_table } # Loop over elements found in CellAvg and tabulate basis averages num_points = 1 for avg in ("cell", "facet"): # Doesn't matter if it's exterior or interior if avg == "cell": avg_integral_type = "cell" elif avg == "facet": avg_integral_type = "exterior_facet" for element in avg_elements[avg]: fiat_element = create_element(element) # Make quadrature rule and get points and weights. 
(points, weights) = create_quadrature_points_and_weights(avg_integral_type, cell, element.degree(), "default") wsum = sum(weights) # Tabulate table of basis functions and derivatives in # points entity_psi_tables = _tabulate_psi_table(avg_integral_type, cellname, tdim, fiat_element, 0, points) rank = len(element.value_shape()) # Hack, duplicating table with per-cell values for each # facet in the case of cell_avg(f) in a facet integral if num_entities > len(entity_psi_tables): assert len(entity_psi_tables) == 1 assert avg_integral_type == "cell" assert "facet" in integral_type v, = sorted(entity_psi_tables.values()) entity_psi_tables = dict((e, v) for e in range(num_entities)) for entity, deriv_table in sorted(entity_psi_tables.items()): deriv, = sorted(deriv_table.keys()) # Not expecting derivatives of averages psi_table = deriv_table[deriv] if rank: # Compute numeric integral num_dofs, num_components, num_points = psi_table.shape if num_points != len(weights): error("Weights and table shape does not match.") avg_psi_table = numpy.asarray([[[numpy.dot(psi_table[j, k, :], weights) / wsum] for k in range(num_components)] for j in range(num_dofs)]) else: # Compute numeric integral num_dofs, num_points = psi_table.shape if num_points != len(weights): error("Weights and table shape does not match.") avg_psi_table = numpy.asarray([[numpy.dot(psi_table[j, :], weights) / wsum] for j in range(num_dofs)]) # Insert table into dictionary based on UFL elements insert_nested_dict(psi_tables, (num_points, element, avg, entity, deriv), avg_psi_table) return (integrals, psi_tables, quadrature_rules) ffc-2019.1.0.post0/ffc/representation.py000066400000000000000000001127251346026536000177350ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Compiler stage 2: Code representation ------------------------------------- This module computes intermediate representations of forms, elements and dofmaps. For each UFC function, we extract the data needed for code generation at a later stage. The representation should conform strictly to the naming and order of functions in UFC. Thus, for code generation of the function "foo", one should only need to use the data stored in the intermediate representation under the key "foo". """ # Copyright (C) 2009-2017 Anders Logg, Martin Sandve Alnæs, Marie E. Rognes, # Kristian B. Oelgaard, and others # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
# Python modules from itertools import chain import numpy # Import UFL import ufl # FFC modules from ffc.utils import compute_permutations, product from ffc.log import info, error, begin, end from ffc.fiatinterface import create_element, reference_cell from ffc.fiatinterface import EnrichedElement, HDivTrace, MixedElement, SpaceOfReals, QuadratureElement from ffc.classname import make_classname, make_integral_classname # List of supported integral types ufc_integral_types = ("cell", "exterior_facet", "interior_facet", "vertex", "custom", "cutcell", "interface", "overlap") def pick_representation(representation): "Return one of the specialized code generation modules from a representation string." if representation == "quadrature": from ffc import quadrature as r elif representation == "uflacs": from ffc import uflacs as r elif representation == "tsfc": from ffc import tsfc as r else: error("Unknown representation: %s" % str(representation)) return r def make_finite_element_jit_classname(ufl_element, parameters): from ffc.jitcompiler import compute_jit_prefix # FIXME circular file dependency kind, prefix = compute_jit_prefix(ufl_element, parameters) return make_classname(prefix, "finite_element", "main") def make_dofmap_jit_classname(ufl_element, parameters): from ffc.jitcompiler import compute_jit_prefix # FIXME circular file dependency kind, prefix = compute_jit_prefix(ufl_element, parameters) return make_classname(prefix, "dofmap", "main") def make_coordinate_mapping_jit_classname(ufl_mesh, parameters): from ffc.jitcompiler import compute_jit_prefix # FIXME circular file dependency kind, prefix = compute_jit_prefix(ufl_mesh, parameters, kind="coordinate_mapping") return make_classname(prefix, "coordinate_mapping", "main") def make_all_element_classnames(prefix, elements, coordinate_elements, element_numbers, parameters, jit): if jit: # Make unique classnames to match separately jit-compiled # module classnames = { "finite_element": { e: make_finite_element_jit_classname(e, parameters) for e in elements }, "dofmap": { e: make_dofmap_jit_classname(e, parameters) for e in elements }, "coordinate_mapping": { e: make_coordinate_mapping_jit_classname(e, parameters) for e in coordinate_elements }, } else: # Make unique classnames only within this module (using a # shared prefix and element numbers that are only unique # within this module) classnames = { "finite_element": { e: make_classname(prefix, "finite_element", element_numbers[e]) for e in elements }, "dofmap": { e: make_classname(prefix, "dofmap", element_numbers[e]) for e in elements }, "coordinate_mapping": { e: make_classname(prefix, "coordinate_mapping", element_numbers[e]) for e in coordinate_elements }, } return classnames def compute_ir(analysis, prefix, parameters, jit=False): "Compute intermediate representation." begin("Compiler stage 2: Computing intermediate representation") # Set code generation parameters (this is not actually a 'formatting' # parameter, used for table value clamping as well) # FIXME: Global state?! # set_float_formatting(parameters["precision"]) # Extract data from analysis form_datas, elements, element_numbers, coordinate_elements = analysis # Construct classnames for all element objects and coordinate mappings classnames = make_all_element_classnames(prefix, elements, coordinate_elements, element_numbers, parameters, jit) # Skip processing elements if jitting forms # NB! it's important that this happens _after_ the element numbers and classnames # above have been created. 
if jit and form_datas: # While we may get multiple forms during command line action, # not so during jit assert len(form_datas) == 1, "Expecting only one form data instance during jit." # Drop some processing elements = [] coordinate_elements = [] elif jit and coordinate_elements: # While we may get multiple coordinate elements during command # line action, or during form jit, not so during coordinate # mapping jit assert len(coordinate_elements) == 1, "Expecting only one form data instance during jit." # Drop some processing elements = [] elif jit and elements: # Assuming a topological sorting of the elements, # only process the last (main) element from here on elements = [elements[-1]] # Compute representation of elements info("Computing representation of %d elements" % len(elements)) ir_elements = [_compute_element_ir(e, element_numbers, classnames, parameters, jit) for e in elements] # Compute representation of dofmaps info("Computing representation of %d dofmaps" % len(elements)) ir_dofmaps = [_compute_dofmap_ir(e, element_numbers, classnames, parameters, jit) for e in elements] # Compute representation of coordinate mappings info("Computing representation of %d coordinate mappings" % len(coordinate_elements)) ir_coordinate_mappings = [_compute_coordinate_mapping_ir(e, element_numbers, classnames, parameters, jit) for e in coordinate_elements] # Compute and flatten representation of integrals info("Computing representation of integrals") irs = [_compute_integral_ir(fd, form_id, prefix, element_numbers, classnames, parameters, jit) for (form_id, fd) in enumerate(form_datas)] ir_integrals = list(chain(*irs)) # Compute representation of forms info("Computing representation of forms") ir_forms = [_compute_form_ir(fd, form_id, prefix, element_numbers, classnames, parameters, jit) for (form_id, fd) in enumerate(form_datas)] end() return ir_elements, ir_dofmaps, ir_coordinate_mappings, ir_integrals, ir_forms def _compute_element_ir(ufl_element, element_numbers, classnames, parameters, jit): "Compute intermediate representation of element." # Create FIAT element fiat_element = create_element(ufl_element) cell = ufl_element.cell() cellname = cell.cellname() # Store id ir = {"id": element_numbers[ufl_element]} ir["classname"] = classnames["finite_element"][ufl_element] # Remember jit status ir["jit"] = jit # Compute data for each function ir["signature"] = repr(ufl_element) ir["cell_shape"] = cellname ir["topological_dimension"] = cell.topological_dimension() ir["geometric_dimension"] = cell.geometric_dimension() ir["space_dimension"] = fiat_element.space_dimension() ir["value_shape"] = ufl_element.value_shape() ir["reference_value_shape"] = ufl_element.reference_value_shape() ir["degree"] = ufl_element.degree() ir["family"] = ufl_element.family() ir["evaluate_basis"] = _evaluate_basis(ufl_element, fiat_element, parameters["epsilon"]) ir["evaluate_dof"] = _evaluate_dof(ufl_element, fiat_element) ir["interpolate_vertex_values"] = _interpolate_vertex_values(ufl_element, fiat_element) ir["tabulate_dof_coordinates"] = _tabulate_dof_coordinates(ufl_element, fiat_element) ir["num_sub_elements"] = ufl_element.num_sub_elements() ir["create_sub_element"] = [classnames["finite_element"][e] for e in ufl_element.sub_elements()] # debug_ir(ir, "finite_element") return ir def _compute_dofmap_ir(ufl_element, element_numbers, classnames, parameters, jit=False): "Compute intermediate representation of dofmap." 
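    # The keys of the returned dict mirror the UFC dofmap interface, e.g.
    # (selection only, see the assignments below for the complete set):
    #
    #     ir["num_element_dofs"]     -> fiat_element.space_dimension()
    #     ir["tabulate_facet_dofs"]  -> list of facet dof lists, one per facet
    #     ir["tabulate_entity_dofs"] -> (entity_dofs, num_dofs_per_entity)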
# Create FIAT element fiat_element = create_element(ufl_element) cell = ufl_element.cell() # Precompute repeatedly used items num_dofs_per_entity = _num_dofs_per_entity(fiat_element) entity_dofs = fiat_element.entity_dofs() facet_dofs = _tabulate_facet_dofs(fiat_element, cell) entity_closure_dofs, num_dofs_per_entity_closure = \ _tabulate_entity_closure_dofs(fiat_element, cell) # Store id ir = {"id": element_numbers[ufl_element]} ir["classname"] = classnames["dofmap"][ufl_element] # Remember jit status ir["jit"] = jit # Compute data for each function ir["signature"] = "FFC dofmap for " + repr(ufl_element) ir["needs_mesh_entities"] = _needs_mesh_entities(fiat_element) ir["topological_dimension"] = cell.topological_dimension() ir["geometric_dimension"] = cell.geometric_dimension() ir["global_dimension"] = _global_dimension(fiat_element) ir["num_global_support_dofs"] = _num_global_support_dofs(fiat_element) ir["num_element_support_dofs"] = fiat_element.space_dimension() - ir["num_global_support_dofs"] ir["num_element_dofs"] = fiat_element.space_dimension() ir["num_facet_dofs"] = len(facet_dofs[0]) ir["num_entity_dofs"] = num_dofs_per_entity ir["num_entity_closure_dofs"] = num_dofs_per_entity_closure ir["tabulate_dofs"] = _tabulate_dofs(fiat_element, cell) ir["tabulate_facet_dofs"] = facet_dofs ir["tabulate_entity_dofs"] = (entity_dofs, num_dofs_per_entity) ir["tabulate_entity_closure_dofs"] = (entity_closure_dofs, entity_dofs, num_dofs_per_entity) ir["num_sub_dofmaps"] = ufl_element.num_sub_elements() ir["create_sub_dofmap"] = [classnames["dofmap"][e] for e in ufl_element.sub_elements()] return ir _midpoints = { "interval": (0.5,), "triangle": (1.0 / 3.0, 1.0 / 3.0), "tetrahedron": (0.25, 0.25, 0.25), "quadrilateral": (0.5, 0.5), "hexahedron": (0.5, 0.5, 0.5), } def cell_midpoint(cell): # TODO: Is this defined somewhere more central where we can get it from? return _midpoints[cell.cellname()] def _tabulate_coordinate_mapping_basis(ufl_element): # TODO: Move this function to a table generation module? # Get scalar element, assuming coordinates are represented # with a VectorElement of scalar subelements selement = ufl_element.sub_elements()[0] fiat_element = create_element(selement) cell = selement.cell() tdim = cell.topological_dimension() tables = {} # Get points origo = (0.0,) * tdim midpoint = cell_midpoint(cell) # Tabulate basis t0 = fiat_element.tabulate(1, [origo]) tm = fiat_element.tabulate(1, [midpoint]) # Get basis values at cell origo tables["x0"] = t0[(0,) * tdim][:, 0] # Get basis values at cell midpoint tables["xm"] = tm[(0,) * tdim][:, 0] # Single direction derivatives, e.g. [(1,0), (0,1)] in 2d derivatives = [(0,) * i + (1,) + (0,) * (tdim - 1 - i) for i in range(tdim)] # Get basis derivative values at cell origo tables["J0"] = numpy.asarray([t0[d][:, 0] for d in derivatives]) # Get basis derivative values at cell midpoint tables["Jm"] = numpy.asarray([tm[d][:, 0] for d in derivatives]) return tables def _compute_coordinate_mapping_ir(ufl_coordinate_element, element_numbers, classnames, parameters, jit=False): "Compute intermediate representation of coordinate mapping." 
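    # The "tables" entry computed below holds the scalar coordinate basis
    # tabulated at two reference points (see _tabulate_coordinate_mapping_basis):
    #
    #     tables["x0"], tables["xm"]  -> basis values at cell origin / midpoint
    #     tables["J0"], tables["Jm"]  -> first derivatives at the same points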
cell = ufl_coordinate_element.cell() cellname = cell.cellname() assert ufl_coordinate_element.value_shape() == (cell.geometric_dimension(),) # Compute element values via fiat element tables = _tabulate_coordinate_mapping_basis(ufl_coordinate_element) # Store id ir = {"id": element_numbers[ufl_coordinate_element]} ir["classname"] = classnames["coordinate_mapping"][ufl_coordinate_element] # Remember jit status ir["jit"] = jit # Compute data for each function ir["signature"] = "FFC coordinate_mapping from " + repr(ufl_coordinate_element) ir["cell_shape"] = cellname ir["topological_dimension"] = cell.topological_dimension() ir["geometric_dimension"] = ufl_coordinate_element.value_size() ir["create_coordinate_finite_element"] = \ classnames["finite_element"][ufl_coordinate_element] ir["create_coordinate_dofmap"] = \ classnames["dofmap"][ufl_coordinate_element] ir["compute_physical_coordinates"] = None # currently unused, corresponds to function name ir["compute_reference_coordinates"] = None # currently unused, corresponds to function name ir["compute_jacobians"] = None # currently unused, corresponds to function name ir["compute_jacobian_determinants"] = None # currently unused, corresponds to function name ir["compute_jacobian_inverses"] = None # currently unused, corresponds to function name ir["compute_geometry"] = None # currently unused, corresponds to function name # NB! The entries below breaks the pattern of using ir keywords == code keywords, # which I personally don't find very useful anyway (martinal). # Store tables and other coordinate element data ir["tables"] = tables ir["coordinate_element_degree"] = ufl_coordinate_element.degree() ir["num_scalar_coordinate_element_dofs"] = tables["x0"].shape[0] # Get classnames for coordinate element and its scalar subelement: ir["coordinate_finite_element_classname"] = \ classnames["finite_element"][ufl_coordinate_element] ir["scalar_coordinate_finite_element_classname"] = \ classnames["finite_element"][ufl_coordinate_element.sub_elements()[0]] return ir def _num_global_support_dofs(fiat_element): "Compute number of global support dofs." if not isinstance(fiat_element, MixedElement): if isinstance(fiat_element, SpaceOfReals): return 1 return 0 num_reals = 0 for e in fiat_element.elements(): if isinstance(e, SpaceOfReals): num_reals += 1 return num_reals def _global_dimension(fiat_element): "Compute intermediate representation for global_dimension." if not isinstance(fiat_element, MixedElement): if isinstance(fiat_element, SpaceOfReals): return ([], 1) return (_num_dofs_per_entity(fiat_element), 0) elements = [] num_reals = 0 for e in fiat_element.elements(): if not isinstance(e, SpaceOfReals): elements += [e] else: num_reals += 1 fiat_cell, = set(e.get_reference_element() for e in fiat_element.elements()) fiat_element = MixedElement(elements, ref_el=fiat_cell) return (_num_dofs_per_entity(fiat_element), num_reals) def _needs_mesh_entities(fiat_element): "Compute intermediate representation for needs_mesh_entities." # Note: The dof map for Real elements does not depend on the mesh num_dofs_per_entity = _num_dofs_per_entity(fiat_element) if isinstance(fiat_element, SpaceOfReals): return [False for d in num_dofs_per_entity] else: return [d > 0 for d in num_dofs_per_entity] def _compute_integral_ir(form_data, form_id, prefix, element_numbers, classnames, parameters, jit): "Compute intermediate represention for form integrals." 
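    # One representation dict is built per IntegralData object. The actual
    # work is delegated to the representation module ("quadrature", "uflacs"
    # or "tsfc") selected via itg_data.metadata["representation"], see
    # pick_representation() above.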
# For consistency, all jit objects now have classnames with postfix "main" if jit: assert form_id == 0 form_id = "main" irs = [] # Iterate over integrals for itg_data in form_data.integral_data: # Select representation # TODO: Is it possible to detach this metadata from # IntegralData? It's a bit strange from the ufl side. r = pick_representation(itg_data.metadata["representation"]) # Compute representation ir = r.compute_integral_ir(itg_data, form_data, form_id, # FIXME: Can we remove this? element_numbers, classnames, parameters) # Remember jit status ir["jit"] = jit # Build classname ir["classname"] = make_integral_classname(prefix, itg_data.integral_type, form_id, itg_data.subdomain_id) ir["classnames"] = classnames # FIXME XXX: Use this everywhere needed? # Storing prefix here for reconstruction of classnames on code # generation side ir["prefix"] = prefix # FIXME: Drop this? # Store metadata for later reference (eg. printing as comment) # NOTE: We make a commitment not to modify it! ir["integrals_metadata"] = itg_data.metadata ir["integral_metadata"] = [integral.metadata() for integral in itg_data.integrals] # Append representation irs.append(ir) return irs def _compute_form_ir(form_data, form_id, prefix, element_numbers, classnames, parameters, jit=False): "Compute intermediate representation of form." # For consistency, all jit objects now have classnames with postfix "main" if jit: assert form_id == 0 form_id = "main" # Store id ir = {"id": form_id} # Storing prefix here for reconstruction of classnames on code # generation side ir["prefix"] = prefix # Remember jit status ir["jit"] = jit # Compute common data ir["classname"] = make_classname(prefix, "form", form_id) # ir["members"] = None # unused # ir["constructor"] = None # unused # ir["destructor"] = None # unused ir["signature"] = form_data.original_form.signature() ir["rank"] = len(form_data.original_form.arguments()) ir["num_coefficients"] = len(form_data.reduced_coefficients) ir["original_coefficient_position"] = form_data.original_coefficient_positions # TODO: Remove create_coordinate_{finite_element,dofmap} and # access through coordinate_mapping instead in dolfin, when that's # in place ir["create_coordinate_finite_element"] = [ classnames["finite_element"][e] for e in form_data.coordinate_elements ] ir["create_coordinate_dofmap"] = [ classnames["dofmap"][e] for e in form_data.coordinate_elements ] ir["create_coordinate_mapping"] = [ classnames["coordinate_mapping"][e] for e in form_data.coordinate_elements ] ir["create_finite_element"] = [ classnames["finite_element"][e] for e in form_data.argument_elements + form_data.coefficient_elements ] ir["create_dofmap"] = [ classnames["dofmap"][e] for e in form_data.argument_elements + form_data.coefficient_elements ] # Create integral ids and names using form prefix # (integrals are always generated as part of form so don't get # their own prefix) for integral_type in ufc_integral_types: ir["max_%s_subdomain_id" % integral_type] = \ form_data.max_subdomain_ids.get(integral_type, 0) ir["has_%s_integrals" % integral_type] = \ _has_foo_integrals(prefix, form_id, integral_type, form_data) ir["create_%s_integral" % integral_type] = \ _create_foo_integral(prefix, form_id, integral_type, form_data) ir["create_default_%s_integral" % integral_type] = \ _create_default_foo_integral(prefix, form_id, integral_type, form_data) return ir # --- Computation of intermediate representation for non-trivial functions --- def _generate_reference_offsets(fiat_element, offset=0): """Generate offsets: 
i.e value offset for each basis function relative to a reference element representation.""" if isinstance(fiat_element, MixedElement): offsets = [] for e in fiat_element.elements(): offsets += _generate_reference_offsets(e, offset) # NB! This is the fiat element and therefore value_shape # means reference_value_shape offset += product(e.value_shape()) return offsets elif isinstance(fiat_element, EnrichedElement): offsets = [] for e in fiat_element.elements(): offsets += _generate_reference_offsets(e, offset) return offsets else: return [offset] * fiat_element.space_dimension() def _generate_physical_offsets(ufl_element, offset=0): """Generate offsets: i.e value offset for each basis function relative to a physical element representation.""" cell = ufl_element.cell() gdim = cell.geometric_dimension() tdim = cell.topological_dimension() # Refer to reference if gdim == tdim. This is a hack to support # more stuff (in particular restricted elements) if gdim == tdim: return _generate_reference_offsets(create_element(ufl_element)) if isinstance(ufl_element, ufl.MixedElement): offsets = [] for e in ufl_element.sub_elements(): offsets += _generate_physical_offsets(e, offset) # e is a ufl element, so value_size means the physical value size offset += e.value_size() return offsets elif isinstance(ufl_element, ufl.EnrichedElement): offsets = [] for e in ufl_element._elements: # TODO: Avoid private member access offsets += _generate_physical_offsets(e, offset) return offsets elif isinstance(ufl_element, ufl.FiniteElement): fiat_element = create_element(ufl_element) return [offset] * fiat_element.space_dimension() else: raise NotImplementedError("This element combination is not implemented") def _generate_offsets(ufl_element, reference_offset=0, physical_offset=0): """Generate offsets: i.e value offset for each basis function relative to a physical element representation.""" if isinstance(ufl_element, ufl.MixedElement): offsets = [] for e in ufl_element.sub_elements(): offsets += _generate_offsets(e, reference_offset, physical_offset) # e is a ufl element, so value_size means the physical value size reference_offset += e.reference_value_size() physical_offset += e.value_size() return offsets elif isinstance(ufl_element, ufl.EnrichedElement): offsets = [] for e in ufl_element._elements: # TODO: Avoid private member access offsets += _generate_offsets(e, reference_offset, physical_offset) return offsets elif isinstance(ufl_element, ufl.FiniteElement): fiat_element = create_element(ufl_element) return [(reference_offset, physical_offset)] * fiat_element.space_dimension() else: # TODO: Support RestrictedElement, QuadratureElement, # TensorProductElement, etc.! and replace # _generate_{physical|reference}_offsets with this # function. raise NotImplementedError("This element combination is not implemented") def _evaluate_dof(ufl_element, fiat_element): "Compute intermediate representation of evaluate_dof." 
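    # Returned data (sketch): "dofs" holds one FIAT point dict (L.pt_dict)
    # per dual basis functional when the element is nodal, and None
    # placeholders otherwise; the remaining keys record mappings, value
    # sizes, dimensions and physical offsets needed at code generation time.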
cell = ufl_element.cell() if fiat_element.is_nodal(): dofs = [L.pt_dict for L in fiat_element.dual_basis()] else: dofs = [None] * fiat_element.space_dimension() return {"mappings": fiat_element.mapping(), "reference_value_size": ufl_element.reference_value_size(), "physical_value_size": ufl_element.value_size(), "geometric_dimension": cell.geometric_dimension(), "topological_dimension": cell.topological_dimension(), "dofs": dofs, "physical_offsets": _generate_physical_offsets(ufl_element), "cell_shape": cell.cellname()} def _extract_elements(fiat_element): new_elements = [] if isinstance(fiat_element, (MixedElement, EnrichedElement)): for e in fiat_element.elements(): new_elements += _extract_elements(e) else: new_elements.append(fiat_element) return new_elements def _evaluate_basis(ufl_element, fiat_element, epsilon): "Compute intermediate representation for evaluate_basis." cell = ufl_element.cell() cellname = cell.cellname() # Handle Mixed and EnrichedElements by extracting 'sub' elements. elements = _extract_elements(fiat_element) physical_offsets = _generate_physical_offsets(ufl_element) reference_offsets = _generate_reference_offsets(fiat_element) mappings = fiat_element.mapping() # This function is evidently not implemented for TensorElements for e in elements: if (len(e.value_shape()) > 1) and (e.num_sub_elements() != 1): return "Function not supported/implemented for TensorElements." # Handle QuadratureElement, not supported because the basis is # only defined at the dof coordinates where the value is 1, so not # very interesting. for e in elements: if isinstance(e, QuadratureElement): return "Function not supported/implemented for QuadratureElement." if isinstance(e, HDivTrace): return "Function not supported for Trace elements" # Skip this function for TensorProductElement if get_coeffs is not implemented for e in elements: try: e.get_coeffs() except NotImplementedError: return "Function is not supported/implemented." # Initialise data with 'global' values. data = {"reference_value_size": ufl_element.reference_value_size(), "physical_value_size": ufl_element.value_size(), "cellname": cellname, "topological_dimension": cell.topological_dimension(), "geometric_dimension": cell.geometric_dimension(), "space_dimension": fiat_element.space_dimension(), "needs_oriented": needs_oriented_jacobian(fiat_element), "max_degree": max([e.degree() for e in elements]) } # Loop element and space dimensions to generate dof data. dof = 0 dofs_data = [] for e in elements: num_components = product(e.value_shape()) coeffs = e.get_coeffs() num_expansion_members = e.get_num_members(e.degree()) dmats = e.dmats() # Clamp dmats zeros dmats = numpy.asarray(dmats) dmats[numpy.where(numpy.isclose(dmats, 0.0, rtol=epsilon, atol=epsilon))] = 0.0 # Extracted parts of dd below that are common for the element # here. These dict entries are added to each dof_data dict # for each dof, because that's what the code generation # implementation expects. If the code generation needs this # structure to be optimized in the future, we can store this # data for each subelement instead of for each dof. subelement_data = { "embedded_degree": e.degree(), "num_components": num_components, "dmats": dmats, "num_expansion_members": num_expansion_members, } value_rank = len(e.value_shape()) for i in range(e.space_dimension()): if num_components == 1: coefficients = [coeffs[i]] elif value_rank == 1: # Handle coefficients for vector valued basis elements # [Raviart-Thomas, Brezzi-Douglas-Marini (BDM)]. 
coefficients = [coeffs[i][c] for c in range(num_components)] elif value_rank == 2: # Handle coefficients for tensor valued basis elements. # [Regge] coefficients = [coeffs[i][p][q] for p in range(e.value_shape()[0]) for q in range(e.value_shape()[1])] else: error("Unknown situation with num_components > 1") # Clamp coefficient zeros coefficients = numpy.asarray(coefficients) coefficients[numpy.where(numpy.isclose(coefficients, 0.0, rtol=epsilon, atol=epsilon))] = 0.0 dof_data = { "coeffs": coefficients, "mapping": mappings[dof], "physical_offset": physical_offsets[dof], "reference_offset": reference_offsets[dof], } # Still storing element data in dd to avoid rewriting dependent code dof_data.update(subelement_data) # This list will hold one dd dict for each dof dofs_data.append(dof_data) dof += 1 data["dofs_data"] = dofs_data return data def _tabulate_dof_coordinates(ufl_element, element): "Compute intermediate representation of tabulate_dof_coordinates." if uses_integral_moments(element): return {} # Bail out if any dual basis member is missing (element is not nodal), # this is strictly not necessary but simpler if any(L is None for L in element.dual_basis()): return {} cell = ufl_element.cell() data = {} data["tdim"] = cell.topological_dimension() data["gdim"] = cell.geometric_dimension() data["points"] = [sorted(L.pt_dict.keys())[0] for L in element.dual_basis()] data["cell_shape"] = cell.cellname() return data def _tabulate_dofs(element, cell): "Compute intermediate representation of tabulate_dofs." if isinstance(element, SpaceOfReals): return None # Extract number of dofs per entity for each element elements = all_elements(element) num_dofs_per_element = [_num_dofs_per_entity(e) for e in elements] # Extract local dof numbers per entity for each element all_entity_dofs = [e.entity_dofs() for e in elements] dofs_per_element = [[[list(dofs[dim][entity]) for entity in sorted(dofs[dim].keys())] for dim in sorted(dofs.keys())] for dofs in all_entity_dofs] # Check whether we need offset multiple_entities = any([sum(n > 0 for n in num_dofs) - 1 for num_dofs in num_dofs_per_element]) need_offset = len(elements) > 1 or multiple_entities num_dofs_per_element = [e.space_dimension() for e in elements] # Handle global "elements" fakes = [isinstance(e, SpaceOfReals) for e in elements] return (dofs_per_element, num_dofs_per_element, need_offset, fakes) def _tabulate_facet_dofs(element, cell): "Compute intermediate representation of tabulate_facet_dofs." # Get topological dimension D = cell.topological_dimension() # Get the number of facets num_facets = cell.num_facets() # Make list of dofs facet_dofs = list(element.entity_closure_dofs()[D-1].values()) assert num_facets == len(facet_dofs) facet_dofs = [facet_dofs[facet] for facet in range(num_facets)] return facet_dofs def _tabulate_entity_closure_dofs(element, cell): "Compute intermediate representation of tabulate_entity_closure_dofs." 
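    # Returns a pair (sketch):
    #
    #     entity_closure_dofs[(dim, entity)] -> dofs in the closure of that entity
    #     num_entity_closure_dofs[dim]       -> closure dof count per dimension
    #                                           (taken from entity 0 of each dim)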
# Get topological dimension D = cell.topological_dimension() # Get entity closure dofs from FIAT element fiat_entity_closure_dofs = element.entity_closure_dofs() entity_closure_dofs = {} for d0 in sorted(fiat_entity_closure_dofs.keys()): for e0 in sorted(fiat_entity_closure_dofs[d0].keys()): entity_closure_dofs[(d0, e0)] = fiat_entity_closure_dofs[d0][e0] num_entity_closure_dofs = [len(fiat_entity_closure_dofs[d0][0]) for d0 in sorted(fiat_entity_closure_dofs.keys())] return entity_closure_dofs, num_entity_closure_dofs def _interpolate_vertex_values(ufl_element, fiat_element): "Compute intermediate representation of interpolate_vertex_values." # Check for QuadratureElement for e in all_elements(fiat_element): if isinstance(e, QuadratureElement): return "Function is not supported/implemented for QuadratureElement." if isinstance(e, HDivTrace): return "Function is not implemented for HDivTrace." cell = ufl_element.cell() cellname = cell.cellname() tdim = cell.topological_dimension() gdim = cell.geometric_dimension() ir = {} ir["geometric_dimension"] = gdim ir["topological_dimension"] = tdim # Check whether computing the Jacobian is necessary mappings = fiat_element.mapping() ir["needs_jacobian"] = any("piola" in m for m in mappings) ir["needs_oriented"] = needs_oriented_jacobian(fiat_element) # See note in _evaluate_dofs ir["reference_value_size"] = ufl_element.reference_value_size() ir["physical_value_size"] = ufl_element.value_size() # Get vertices of reference cell fiat_cell = reference_cell(cellname) vertices = fiat_cell.get_vertices() # Compute data for each constituent element all_fiat_elm = all_elements(fiat_element) ir["element_data"] = [ { # NB! value_shape of fiat element e means reference_value_shape "reference_value_size": product(e.value_shape()), # FIXME: THIS IS A BUG: "physical_value_size": product(e.value_shape()), # FIXME: Get from corresponding ufl element? "basis_values": e.tabulate(0, vertices)[(0,) * tdim].transpose(), "mapping": e.mapping()[0], "space_dim": e.space_dimension(), } for e in all_fiat_elm] # FIXME: Temporary hack! if len(ir["element_data"]) == 1: ir["element_data"][0]["physical_value_size"] = ir["physical_value_size"] # Consistency check, related to note in _evaluate_dofs # This will fail for e.g. (RT1 x DG0) on a manifold because of the above bug if sum(data["physical_value_size"] for data in ir["element_data"]) != ir["physical_value_size"]: ir = "Failed to set physical value size correctly for subelements." elif sum(data["reference_value_size"] for data in ir["element_data"]) != ir["reference_value_size"]: ir = "Failed to set reference value size correctly for subelements." return ir def _has_foo_integrals(prefix, form_id, integral_type, form_data): "Compute intermediate representation of has_foo_integrals." v = (form_data.max_subdomain_ids.get(integral_type, 0) > 0 or _create_default_foo_integral(prefix, form_id, integral_type, form_data) is not None) return bool(v) def _create_foo_integral(prefix, form_id, integral_type, form_data): "Compute intermediate representation of create_foo_integral." subdomain_ids = [itg_data.subdomain_id for itg_data in form_data.integral_data if (itg_data.integral_type == integral_type and isinstance(itg_data.subdomain_id, int))] classnames = [make_integral_classname(prefix, integral_type, form_id, subdomain_id) for subdomain_id in subdomain_ids] return subdomain_ids, classnames def _create_default_foo_integral(prefix, form_id, integral_type, form_data): "Compute intermediate representation of create_default_foo_integral." 
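    # Returns the classname of the integral registered for the "otherwise"
    # subdomain (the default integral) if one exists, and None if the form
    # has no default integral of this type.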
itg_data = [itg_data for itg_data in form_data.integral_data if (itg_data.integral_type == integral_type and itg_data.subdomain_id == "otherwise")] if len(itg_data) > 1: error("Expecting at most one default integral of each type.") if itg_data: classname = make_integral_classname(prefix, integral_type, form_id, "otherwise") return classname else: return None #--- Utility functions --- def all_elements(fiat_element): if isinstance(fiat_element, MixedElement): return fiat_element.elements() return [fiat_element] def _num_dofs_per_entity(fiat_element): """ Compute list of integers representing the number of dofs associated with a single mesh entity. Example: Lagrange of degree 3 on triangle: [1, 2, 1] """ entity_dofs = fiat_element.entity_dofs() return [len(entity_dofs[e][0]) for e in sorted(entity_dofs.keys())] def uses_integral_moments(fiat_element): "True if element uses integral moments for its degrees of freedom." integrals = set(["IntegralMoment", "FrobeniusIntegralMoment"]) tags = set([L.get_type_tag() for L in fiat_element.dual_basis() if L]) return len(integrals & tags) > 0 def needs_oriented_jacobian(fiat_element): # Check whether this element needs an oriented jacobian # (only contravariant piolas seem to need it) return "contravariant piola" in fiat_element.mapping() ffc-2019.1.0.post0/ffc/representationutils.py000066400000000000000000000214341346026536000210120ustar00rootroot00000000000000# -*- coding: utf-8 -*- """This module contains utility functions for some code shared between quadrature and tensor representation.""" # Copyright (C) 2012-2017 Marie Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Martin Sandve Alnæs 2013-2017 # Modified by Anders Logg 2014 import numpy from ufl.measure import integral_type_to_measure_name, point_integral_types, facet_integral_types, custom_integral_types from ufl.cell import cellname2facetname from ffc.log import error from ffc.fiatinterface import create_element from ffc.fiatinterface import create_quadrature from ffc.fiatinterface import map_facet_points from ffc.fiatinterface import reference_cell_vertices from ffc.classname import make_integral_classname def create_quadrature_points_and_weights(integral_type, cell, degree, rule): "Create quadrature rule and return points and weights." if integral_type == "cell": (points, weights) = create_quadrature(cell.cellname(), degree, rule) elif integral_type in facet_integral_types: (points, weights) = create_quadrature(cellname2facetname[cell.cellname()], degree, rule) elif integral_type in point_integral_types: (points, weights) = create_quadrature("vertex", degree, rule) elif integral_type in custom_integral_types: (points, weights) = (None, None) else: error("Unknown integral type: " + str(integral_type)) return (points, weights) def integral_type_to_entity_dim(integral_type, tdim): "Given integral_type and domain tdim, return the tdim of the integration entity." 
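    # For example (hypothetical values), on a tetrahedron with tdim == 3:
    #     "cell"           -> 3
    #     "exterior_facet" -> 2   (likewise other facet integral types)
    #     "vertex"         -> 0   (point integral types)
    #     custom integral types use the full cell dimension, i.e. 3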
if integral_type == "cell": entity_dim = tdim elif integral_type in facet_integral_types: entity_dim = tdim - 1 elif integral_type in point_integral_types: entity_dim = 0 elif integral_type in custom_integral_types: entity_dim = tdim else: error("Unknown integral_type: %s" % integral_type) return entity_dim def map_integral_points(points, integral_type, cell, entity): """Map points from reference entity to its parent reference cell.""" tdim = cell.topological_dimension() entity_dim = integral_type_to_entity_dim(integral_type, tdim) if entity_dim == tdim: assert points.shape[1] == tdim assert entity == 0 return numpy.asarray(points) elif entity_dim == tdim - 1: assert points.shape[1] == tdim - 1 return numpy.asarray(map_facet_points(points, entity, cell.cellname())) elif entity_dim == 0: return numpy.asarray([reference_cell_vertices(cell.cellname())[entity]]) else: error("Can't map points from entity_dim=%s" % (entity_dim,)) def transform_component(component, offset, ufl_element): """ This function accounts for the fact that if the geometrical and topological dimension does not match, then for native vector elements, in particular the Piola-mapped ones, the physical value dimensions and the reference value dimensions are not the same. This has certain consequences for mixed elements, aka 'fun with offsets'. """ # This code is used for tensor/monomialtransformation.py and # quadrature/quadraturetransformerbase.py. cell = ufl_element.cell() gdim = cell.geometric_dimension() tdim = cell.topological_dimension() # Do nothing if we are not in a special case: The special cases # occur if we have piola mapped elements (for which value_shape != # ()), and if gdim != tdim) if gdim == tdim: return component, offset all_mappings = create_element(ufl_element).mapping() special_case = (any(['piola' in m for m in all_mappings]) and ufl_element.num_sub_elements() > 1) if not special_case: return component, offset # Extract lists of reference and physical value dimensions by # sub-element reference_value_dims = [] physical_value_dims = [] for sub_element in ufl_element.sub_elements(): assert (len(sub_element.value_shape()) < 2), \ "Vector-valued assumption failed" if sub_element.value_shape() == (): reference_value_dims += [1] physical_value_dims += [1] else: reference_value_dims += [sub_element.value_shape()[0] - (gdim - tdim)] physical_value_dims += [sub_element.value_shape()[0]] # Figure out which sub-element number 'component' is in, # 'sub_element_number' contains the result tot = physical_value_dims[0] for sub_element_number in range(len(physical_value_dims)): if component < tot: break else: tot += physical_value_dims[sub_element_number + 1] # Compute the new reference offset: reference_offset = sum(reference_value_dims[:sub_element_number]) physical_offset = sum(physical_value_dims[:sub_element_number]) shift = physical_offset - reference_offset # Compute the component relative to the reference frame reference_component = component - shift return reference_component, reference_offset def needs_oriented_jacobian(form_data): # Check whether this form needs an oriented jacobian (only forms # involgin contravariant piola mappings seem to need it) for ufl_element in form_data.unique_elements: element = create_element(ufl_element) if "contravariant piola" in element.mapping(): return True return False # Mapping from recognized domain types to entity types _entity_types = { "cell": "cell", "exterior_facet": "facet", "interior_facet": "facet", "vertex": "vertex", # "point": "vertex", # TODO: Not sure, 
clarify here what 'entity_type' refers to? "custom": "cell", "cutcell": "cell", "interface": "cell", # "facet" # TODO: ? "overlap": "cell", } def entity_type_from_integral_type(integral_type): return _entity_types[integral_type] def initialize_integral_ir(representation, itg_data, form_data, form_id): """Initialize a representation dict with common information that is expected independently of which representation is chosen.""" entitytype = entity_type_from_integral_type(itg_data.integral_type) cell = itg_data.domain.ufl_cell() #cellname = cell.cellname() tdim = cell.topological_dimension() assert all(tdim == itg.ufl_domain().topological_dimension() for itg in itg_data.integrals) # Set number of cells if not set TODO: Get automatically from number of domains num_cells = itg_data.metadata.get("num_cells") return {"representation": representation, "integral_type": itg_data.integral_type, "subdomain_id": itg_data.subdomain_id, "form_id": form_id, "rank": form_data.rank, "geometric_dimension": form_data.geometric_dimension, "topological_dimension": tdim, "entitytype": entitytype, "num_facets": cell.num_facets(), "num_vertices": cell.num_vertices(), "needs_oriented": needs_oriented_jacobian(form_data), "num_cells": num_cells, "enabled_coefficients": itg_data.enabled_coefficients, } def generate_enabled_coefficients(enabled_coefficients): # TODO: I don't know how to implement this using the format dict, this will do for now: initializer_list = ", ".join("true" if enabled else "false" for enabled in enabled_coefficients) code = '\n'.join([ "static const std::vector enabled({%s});" % initializer_list, "return enabled;", ]) return code def initialize_integral_code(ir, prefix, parameters): "Representation independent default initialization of code dict for integral from intermediate representation." code = {} code["class_type"] = ir["integral_type"] + "_integral" code["classname"] = make_integral_classname(prefix, ir["integral_type"], ir["form_id"], ir["subdomain_id"]) code["members"] = "" code["constructor"] = "" code["constructor_arguments"] = "" code["initializer_list"] = "" code["destructor"] = "" code["enabled_coefficients"] = generate_enabled_coefficients(ir["enabled_coefficients"]) code["additional_includes_set"] = set() # FIXME: Get this out of code[] return code ffc-2019.1.0.post0/ffc/tsfc/000077500000000000000000000000001346026536000152505ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/tsfc/__init__.py000066400000000000000000000002611346026536000173600ustar00rootroot00000000000000from ffc.tsfc.tsfcrepresentation import compute_integral_ir from ffc.tsfc.tsfcoptimization import optimize_integral_ir from ffc.tsfc.tsfcgenerator import generate_integral_code ffc-2019.1.0.post0/ffc/tsfc/tsfcgenerator.py000066400000000000000000000044001346026536000204660ustar00rootroot00000000000000# Copyright (C) 2016 Jan Blechta # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
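# This module wraps TSFC: generate_integral_code() below calls
# tsfc.driver.compile_integral with the UFC kernel interface and then
# post-processes the resulting AST with COFFEE before storing the code
# string under code["tabulate_tensor"].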
import coffee.base as coffee from coffee.plan import ASTKernel from coffee.visitors import Find from ffc.log import info from ffc.representationutils import initialize_integral_code from tsfc.driver import compile_integral import tsfc.kernel_interface.ufc as ufc_interface def generate_integral_code(ir, prefix, parameters): "Generate code for integral from intermediate representation." info("Generating code from tsfc representation") # Generate generic ffc code snippets code = initialize_integral_code(ir, prefix, parameters) # Go unoptimized if TSFC mode has not been set yet integral_data, form_data, prefix, parameters = ir["compile_integral"] parameters = parameters.copy() parameters.setdefault("mode", "vanilla") # Generate tabulate_tensor body ast = compile_integral(integral_data, form_data, None, parameters, interface=ufc_interface) # COFFEE vectorize knl = ASTKernel(ast) knl.plan_cpu(dict(optlevel='Ov')) tsfc_code = "".join(b.gencode() for b in ast.body) tsfc_code = tsfc_code.replace("#pragma coffee", "//#pragma coffee") # FIXME code["tabulate_tensor"] = tsfc_code includes = set() includes.update(ir.get("additional_includes_set", ())) includes.update(ast.headers) includes.add("#include ") # memset if any(node.funcall.symbol.startswith("boost::math::") for node in Find(coffee.FunCall).visit(ast)[coffee.FunCall]): includes.add("#include ") code["additional_includes_set"] = includes return code ffc-2019.1.0.post0/ffc/tsfc/tsfcoptimization.py000066400000000000000000000005171346026536000212330ustar00rootroot00000000000000def optimize_integral_ir(ir, parameters): ir = ir.copy() integral_data, form_data, prefix, parameters = ir["compile_integral"] parameters = parameters.copy() parameters.setdefault("mode", "spectral") # default optimization mode ir["compile_integral"] = (integral_data, form_data, prefix, parameters) return ir ffc-2019.1.0.post0/ffc/tsfc/tsfcrepresentation.py000066400000000000000000000026701346026536000215510ustar00rootroot00000000000000# Copyright (C) 2016 Jan Blechta # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from ffc.log import info from ffc.representationutils import initialize_integral_ir def compute_integral_ir(integral_data, form_data, form_id, element_numbers, classnames, parameters): "Compute intermediate represention of integral." info("Computing tsfc representation") # Initialise representation ir = initialize_integral_ir("tsfc", integral_data, form_data, form_id) # TSFC treats None and unset differently, so remove None values. 
parameters = {k: v for k, v in parameters.items() if v is not None} # Delay TSFC compilation ir["compile_integral"] = (integral_data, form_data, None, parameters) return ir ffc-2019.1.0.post0/ffc/uflacs/000077500000000000000000000000001346026536000155665ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/__init__.py000066400000000000000000000020051346026536000176740ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """This is UFLACS, the UFL Analyser and Compiler System.""" __author__ = u"Martin Sandve Alnæs" from ffc.uflacs.uflacsrepresentation import compute_integral_ir from ffc.uflacs.uflacsoptimization import optimize_integral_ir from ffc.uflacs.uflacsgenerator import generate_integral_code ffc-2019.1.0.post0/ffc/uflacs/analysis/000077500000000000000000000000001346026536000174115ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/analysis/__init__.py000066400000000000000000000014471346026536000215300ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Algorithms for the analysis phase of the form compilation.""" ffc-2019.1.0.post0/ffc/uflacs/analysis/balancing.py000066400000000000000000000056011346026536000217030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. 
If not, see """Algorithms for the representation phase of the form compilation.""" from ufl.classes import (ReferenceValue, ReferenceGrad, Grad, CellAvg, FacetAvg, PositiveRestricted, NegativeRestricted, Indexed) from ufl.corealg.multifunction import MultiFunction from ufl.corealg.map_dag import map_expr_dag modifier_precedence = [ReferenceValue, ReferenceGrad, Grad, CellAvg, FacetAvg, PositiveRestricted, NegativeRestricted, Indexed] modifier_precedence = { m._ufl_handler_name_: i for i, m in enumerate(modifier_precedence) } # TODO: Move this to ufl? # TODO: Add expr._ufl_modifier_precedence_ ? Add Terminal first and Operator last in the above list. def balance_modified_terminal(expr): # NB! Assuminge e.g. grad(cell_avg(expr)) does not occur, # i.e. it is simplified to 0 immediately. if expr._ufl_is_terminal_: return expr assert expr._ufl_is_terminal_modifier_ orig = expr # Build list of modifier layers layers = [expr] while not expr._ufl_is_terminal_: assert expr._ufl_is_terminal_modifier_ expr = expr.ufl_operands[0] layers.append(expr) assert layers[-1] is expr assert expr._ufl_is_terminal_ # Apply modifiers in order layers = sorted(layers[:-1], key=lambda e: modifier_precedence[e._ufl_handler_name_]) for op in layers: ops = (expr,) + op.ufl_operands[1:] expr = op._ufl_expr_reconstruct_(*ops) # Preserve id if nothing has changed return orig if expr == orig else expr class BalanceModifiers(MultiFunction): def expr(self, expr, *ops): return expr._ufl_expr_reconstruct_(*ops) def terminal(self, expr): return expr def _modifier(self, expr, *ops): return balance_modified_terminal(expr) reference_value = _modifier reference_grad = _modifier grad = _modifier cell_avg = _modifier facet_avg = _modifier positive_restricted = _modifier negative_restricted = _modifier def balance_modifiers(expr): mf = BalanceModifiers() return map_expr_dag(mf, expr) ffc-2019.1.0.post0/ffc/uflacs/analysis/crsarray.py000066400000000000000000000050121346026536000216070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """Compressed row storage 'matrix' (actually just a non-rectangular 2d array).""" import numpy def sufficient_int(maxval): return numpy.int16 if maxval < 2**15 else numpy.int32 class CRSArray(object): """An array of variable length dense arrays. Stored efficiently with simple compressed row storage. This CRS array variant doesn't have a sparsity pattern, as each row is simply a dense vector. Values are stored in one flat array 'data[]', and 'row_offsets[i]' contains the index to the first element on row i for 0<=i<=num_rows. There is no column index. 
""" def __init__(self, row_capacity, element_capacity, dtype): itype = sufficient_int(element_capacity) self.row_offsets = numpy.zeros(row_capacity + 1, dtype=itype) self.data = numpy.zeros(element_capacity, dtype=dtype) self.num_rows = 0 def push_row(self, elements): a = self.row_offsets[self.num_rows] b = a + len(elements) self.data[a:b] = elements self.num_rows += 1 self.row_offsets[self.num_rows] = b @property def num_elements(self): return self.row_offsets[self.num_rows] def __getitem__(self, row): if row < 0 or row >= self.num_rows: raise IndexError("Row number out of range!") a = self.row_offsets[row] b = self.row_offsets[row + 1] return self.data[a:b] def __len__(self): return self.num_rows def __str__(self): return "[%s]" % ('\n'.join(str(row) for row in self),) @classmethod def from_rows(cls, rows, num_rows, num_elements, dtype): "Construct a CRSArray from a list of row element lists." crs = CRSArray(num_rows, num_elements, dtype) for row in rows: crs.push_row(row) return crs ffc-2019.1.0.post0/ffc/uflacs/analysis/dependencies.py000066400000000000000000000064311346026536000224150ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Tools for analysing dependencies within expression graphs.""" import numpy from ffc.uflacs.analysis.crsarray import CRSArray, sufficient_int def compute_dependencies(e2i, V, ignore_terminal_modifiers=True): # Use numpy int type sufficient to hold num_rows num_rows = len(V) itype = sufficient_int(num_rows) # Preallocate CRSArray matrix of sufficient capacity num_nonzeros = sum(len(v.ufl_operands) for v in V) dependencies = CRSArray(num_rows, num_nonzeros, itype) for v in V: if v._ufl_is_terminal_ or (ignore_terminal_modifiers and v._ufl_is_terminal_modifier_): dependencies.push_row(()) else: dependencies.push_row([e2i[o] for o in v.ufl_operands]) return dependencies def mark_active(dependencies, targets): """Return an array marking the recursive dependencies of targets. Input: - dependencies - CRSArray of ints, a mapping from a symbol to the symbols of its dependencies. - targets - Sequence of symbols to mark the dependencies of. Output: - active - Truth value for each symbol. - num_used - Number of true values in active array. """ n = len(dependencies) # Initial state where nothing is marked as used active = numpy.zeros(n, dtype=numpy.int8) num_used = 0 # Seed with initially used symbols active[tuple(targets)] = 1 # Mark dependencies by looping backwards through symbols array for s in range(n - 1, -1, -1): if active[s]: num_used += 1 active[dependencies[s]] = 1 # Return array marking which symbols are used and the number of positives return active, num_used def mark_image(inverse_dependencies, sources): """Return an array marking the set of symbols dependent on the sources. Input: - dependencies - CRSArray of ints, a mapping from a symbol to the symbols of its dependencies. 
- sources - Sequence of symbols to mark the dependants of. Output: - image - Truth value for each symbol. - num_used - Number of true values in active array. """ n = len(inverse_dependencies) # Initial state where nothing is marked as used image = numpy.zeros(n, dtype=numpy.int8) num_used = 0 # Seed with initially used symbols image[sources] = 1 # Mark dependencies by looping forwards through symbols array for s in range(n): if image[s]: num_used += 1 image[inverse_dependencies[s]] = 1 # Return array marking which symbols are used and the number of positives return image, num_used ffc-2019.1.0.post0/ffc/uflacs/analysis/expr_shapes.py000066400000000000000000000032221346026536000223030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Tools for computing various shapes of ufl expressions. The total shape is the regular shape tuple plus the index shape tuple. The index shape tuple is the tuple of index dimensions of the free indices of the expression, sorted by the count of the free indices. The total shape of a tensor valued expression ``A`` and ``A[*indices(len(A.ufl_shape))]`` is therefore the same. """ def compute_index_shape(v): """Compute the 'index shape' of v.""" return v.ufl_index_dimensions def compute_all_shapes(v): """Compute the tensor-, index-, and total shape of an expr. Returns (shape, size, index_shape, index_size, total_shape, total_size). """ shape = v.ufl_shape index_shape = v.ufl_index_dimensions total_shape = shape + index_shape return (shape, index_shape, total_shape) def total_shape(v): """Compute the total shape of an expr.""" sh, ish, tsh = compute_all_shapes(v) return tsh ffc-2019.1.0.post0/ffc/uflacs/analysis/factorization.py000066400000000000000000000347061346026536000226510ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. 
If not, see """Algorithms for factorizing argument dependent monomials.""" import numpy from itertools import chain from ufl import as_ufl, conditional from ufl.classes import Argument from ufl.classes import Division from ufl.classes import Product from ufl.classes import Sum from ufl.classes import Conditional from ufl.classes import Zero from ufl.algorithms import extract_type from ffc.log import error from ffc.uflacs.analysis.dependencies import compute_dependencies from ffc.uflacs.analysis.modified_terminals import analyse_modified_terminal, strip_modified_terminal def _build_arg_sets(V): "Build arg_sets = { argument number: set(j for j where V[j] is a modified Argument with this number) }" arg_sets = {} for i, v in enumerate(V): arg = strip_modified_terminal(v) if not isinstance(arg, Argument): continue num = arg.number() arg_set = arg_sets.get(num) if arg_set is None: arg_set = {} arg_sets[num] = arg_set arg_set[i] = v return arg_sets def _build_argument_indices_from_arg_sets(V, arg_sets): "Build ordered list of indices to modified arguments." # Build set of all indices of V referring to modified arguments arg_indices = set() for js in arg_sets.values(): arg_indices.update(js) # Make a canonical ordering of vertex indices for modified arguments def arg_ordering_key(i): "Return a key for sorting argument vertex indices based on the properties of the modified terminal." mt = analyse_modified_terminal(arg_ordering_key.V[i]) return mt.argument_ordering_key() arg_ordering_key.V = V ordered_arg_indices = sorted(arg_indices, key=arg_ordering_key) return ordered_arg_indices def build_argument_indices(V): "Build ordered list of indices to modified arguments." arg_sets = _build_arg_sets(V) ordered_arg_indices = _build_argument_indices_from_arg_sets(V, arg_sets) return ordered_arg_indices def build_argument_dependencies(dependencies, arg_indices): "Preliminary algorithm: build list of argument vertex indices each vertex (indirectly) depends on." n = len(dependencies) A = numpy.empty(n, dtype=object) for i, deps in enumerate(dependencies): argdeps = [] for j in deps: if j in arg_indices: argdeps.append(j) else: argdeps.extend(A[j]) A[i] = sorted(argdeps) return A class Factors(object): # TODO: Refactor code in this file by using a class like this def __init__(self): self.FV = [] self.e2fi = {} def add(self, expr): add_to_fv(expr, self.FV, self.e2fi) def add_to_fv(expr, FV, e2fi): "Add expression expr to factor vector FV and expr->FVindex mapping e2fi." fi = e2fi.get(expr) if fi is None: fi = len(e2fi) FV.append(expr) e2fi[expr] = fi return fi # Reuse these empty objects where appropriate to save memory noargs = {} def handle_sum(v, si, deps, SV_factors, FV, sv2fv, e2fi): if len(deps) != 2: error("Assuming binary sum here. This can be fixed if needed.") fac0 = SV_factors[deps[0]] fac1 = SV_factors[deps[1]] argkeys = set(fac0) | set(fac1) if argkeys: # f*arg + g*arg = (f+g)*arg argkeys = sorted(argkeys) keylen = len(argkeys[0]) factors = {} for argkey in argkeys: if len(argkey) != keylen: error("Expecting equal argument rank terms among summands.") fi0 = fac0.get(argkey) fi1 = fac1.get(argkey) if fi0 is None: fisum = fi1 elif fi1 is None: fisum = fi0 else: f0 = FV[fi0] f1 = FV[fi1] fisum = add_to_fv(f0 + f1, FV, e2fi) factors[argkey] = fisum else: # non-arg + non-arg factors = noargs sv2fv[si] = add_to_fv(v, FV, e2fi) return factors def handle_product(v, si, deps, SV_factors, FV, sv2fv, e2fi): if len(deps) != 2: error("Assuming binary product here. 
This can be fixed if needed.") fac0 = SV_factors[deps[0]] fac1 = SV_factors[deps[1]] if not fac0 and not fac1: # non-arg * non-arg # Record non-argument product factors = noargs f0 = FV[sv2fv[deps[0]]] f1 = FV[sv2fv[deps[1]]] assert f1 * f0 == v sv2fv[si] = add_to_fv(v, FV, e2fi) assert FV[sv2fv[si]] == v elif not fac0: # non-arg * arg # Record products of non-arg operand with each factor of arg-dependent operand f0 = FV[sv2fv[deps[0]]] factors = {} for k1 in sorted(fac1): fi1 = fac1[k1] factors[k1] = add_to_fv(f0 * FV[fi1], FV, e2fi) elif not fac1: # arg * non-arg # Record products of non-arg operand with each factor of arg-dependent operand f1 = FV[sv2fv[deps[1]]] factors = {} for k0 in sorted(fac0): f0 = FV[fac0[k0]] factors[k0] = add_to_fv(f1 * f0, FV, e2fi) else: # arg * arg # Record products of each factor of arg-dependent operand factors = {} for k0 in sorted(fac0): f0 = FV[fac0[k0]] for k1 in sorted(fac1): f1 = FV[fac1[k1]] argkey = tuple(sorted(k0 + k1)) # sort key for canonical representation factors[argkey] = add_to_fv(f0 * f1, FV, e2fi) return factors def handle_division(v, si, deps, SV_factors, FV, sv2fv, e2fi): fac0 = SV_factors[deps[0]] fac1 = SV_factors[deps[1]] assert not fac1, "Cannot divide by arguments." if fac0: # arg / non-arg # Record products of non-arg operand with each factor of arg-dependent operand f1 = FV[sv2fv[deps[1]]] factors = {} for k0 in sorted(fac0): f0 = FV[fac0[k0]] factors[k0] = add_to_fv(f0 / f1, FV, e2fi) else: # non-arg / non-arg # Record non-argument subexpression factors = noargs sv2fv[si] = add_to_fv(v, FV, e2fi) return factors def handle_conditional(v, si, deps, SV_factors, FV, sv2fv, e2fi): fac0 = SV_factors[deps[0]] fac1 = SV_factors[deps[1]] fac2 = SV_factors[deps[2]] assert not fac0, "Cannot have argument in condition." if not (fac1 or fac2): # non-arg ? non-arg : non-arg # Record non-argument subexpression sv2fv[si] = add_to_fv(v, FV, e2fi) factors = noargs else: f0 = FV[sv2fv[deps[0]]] f1 = FV[sv2fv[deps[1]]] f2 = FV[sv2fv[deps[2]]] # Term conditional(c, argument, non-argument) is not legal unless non-argument is 0.0 assert fac1 or isinstance(f1, Zero) assert fac2 or isinstance(f2, Zero) assert () not in fac1 assert () not in fac2 z = as_ufl(0.0) # In general, can decompose like this: # conditional(c, sum_i fi*ui, sum_j fj*uj) -> sum_i conditional(c, fi, 0)*ui + sum_j conditional(c, 0, fj)*uj mas = sorted(set(fac1.keys()) | set(fac2.keys())) factors = {} for k in mas: fi1 = fac1.get(k) fi2 = fac2.get(k) f1 = z if fi1 is None else FV[fi1] f2 = z if fi2 is None else FV[fi2] factors[k] = add_to_fv(conditional(f0, f1, f2), FV, e2fi) return factors def handle_operator(v, si, deps, SV_factors, FV, sv2fv, e2fi): # Error checking if any(SV_factors[d] for d in deps): error("Assuming that a {0} cannot be applied to arguments. If this is wrong please report a bug.".format(type(v))) # Record non-argument subexpression sv2fv[si] = add_to_fv(v, FV, e2fi) factors = noargs return factors def compute_argument_factorization(SV, SV_deps, SV_targets, rank): """Factorizes a scalar expression graph w.r.t. scalar Argument components. The result is a triplet (AV, FV, IM): - The scalar argument component subgraph: AV[ai] = v with the property SV[arg_indices] == AV[:] - An expression graph vertex list with all non-argument factors: FV[fi] = f with the property that none of the expressions depend on Arguments. 
- A dict representation of the final integrand of rank r: IM = { (ai1_1, ..., ai1_r): fi1, (ai2_1, ..., ai2_r): fi2, } This mapping represents the factorization of SV[-1] w.r.t. Arguments s.t.: SV[-1] := sum(FV[fik] * product(AV[ai] for ai in aik) for aik, fik in IM.items()) where := means equivalence in the mathematical sense, of course in a different technical representation. """ # Extract argument component subgraph arg_indices = build_argument_indices(SV) #A = build_argument_dependencies(SV_deps, arg_indices) AV = [SV[si] for si in arg_indices] #av2sv = arg_indices sv2av = { si: ai for ai, si in enumerate(arg_indices) } assert all(AV[ai] == SV[si] for ai, si in enumerate(arg_indices)) assert all(AV[ai] == SV[si] for si, ai in sv2av.items()) # Data structure for building non-argument factors FV = [] e2fi = {} # Adding 0.0 as an expression to fix issue in conditional zero_index = add_to_fv(as_ufl(0.0), FV, e2fi) # Adding 1.0 as an expression allows avoiding special representation # of arguments when first visited by representing "v" as "1*v" one_index = add_to_fv(as_ufl(1.0), FV, e2fi) # Adding 2 as an expression fixes an issue with FV entries that change K*K -> K**2 two_index = add_to_fv(as_ufl(2), FV, e2fi) # Intermediate factorization for each vertex in SV on the format # SV_factors[si] = None # if SV[si] does not depend on arguments # SV_factors[si] = { argkey: fi } # if SV[si] does depend on arguments, where: # FV[fi] is the expression SV[si] with arguments factored out # argkey is a tuple with indices into SV for each of the argument components SV[si] depends on # SV_factors[si] = { argkey1: fi1, argkey2: fi2, ... } # if SV[si] is a linear combination of multiple argkey configurations SV_factors = numpy.empty(len(SV), dtype=object) si2fi = numpy.zeros(len(SV), dtype=int) # Factorize each subexpression in order: for si, v in enumerate(SV): deps = SV_deps[si] # These handlers insert values in si2fi and SV_factors if not len(deps): if si in arg_indices: # v is a modified Argument factors = { (si,): one_index } else: # v is a modified non-Argument terminal si2fi[si] = add_to_fv(v, FV, e2fi) factors = noargs else: # These quantities could be better input args to handlers: #facs = [SV_factors[d] for d in deps] #fs = [FV[sv2fv[d]] for d in deps] if isinstance(v, Sum): handler = handle_sum elif isinstance(v, Product): handler = handle_product elif isinstance(v, Division): handler = handle_division elif isinstance(v, Conditional): handler = handle_conditional else: # All other operators handler = handle_operator factors = handler(v, si, deps, SV_factors, FV, si2fi, e2fi) SV_factors[si] = factors assert not noargs, "This dict was not supposed to be filled with anything!" 
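    # Illustrative note (not from the original sources): at this point, for a
    # hypothetical bilinear-form integrand such as f*u*v + g*grad(u)[0]*v
    # (test function v = argument 0, trial function u = argument 1), AV holds
    # the modified argument components v, u and grad(u)[0], FV holds the
    # argument-free factors (0.0, 1.0, 2, f, g, ...), and SV_factors at the
    # root maps each (canonically ordered) pair of argument indices to its
    # factor, roughly { (si_v, si_u): fi_f, (si_v, si_grad_u): fi_g } with
    # si_* indices into SV and fi_* indices into FV.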
# Throw away superfluous items in array # FV = FV[:len(e2fi)] assert len(FV) == len(e2fi) # Get the factorizations of the target values IMs = [] for si in SV_targets: if SV_factors[si] == {}: if rank == 0: # Functionals and expressions: store as no args * factor factors = { (): si2fi[si] } else: # Zero form of arity 1 or higher: make factors empty factors = {} else: # Forms of arity 1 or higher: # Map argkeys from indices into SV to indices into AV, # and resort keys for canonical representation factors = { tuple(sorted(sv2av[si] for si in argkey)): fi for argkey, fi in SV_factors[si].items() } # Expecting all term keys to have length == rank # (this assumption will eventually have to change if we # implement joint bilinear+linear form factorization here) assert all(len(k) == rank for k in factors) IMs.append(factors) # Recompute dependencies in FV FV_deps = compute_dependencies(e2fi, FV) # Indices into FV that are needed for final result FV_targets = list(chain(sorted(IM.values()) for IM in IMs)) return IMs, AV, FV, FV_deps, FV_targets def rebuild_scalar_graph_from_factorization(AV, FV, IM): # TODO: What about multiple target_variables? # Build initial graph SV = [] SV.extend(AV) SV.extend(FV) se2i = dict((s, i) for i, s in enumerate(SV)) def add_vertex(h): # Avoid adding vertices twice i = se2i.get(h) if i is None: se2i[h] = len(SV) SV.append(h) # Add factorization monomials argkeys = sorted(IM.keys()) fs = [] for argkey in argkeys: # Start with coefficients f = FV[IM[argkey]] # f = 1 # Add binary products with each argument in order for argindex in argkey: f = f * AV[argindex] add_vertex(f) # Add product with coefficients last # f = f*FV[IM[argkey]] # add_vertex(f) # f is now the full monomial, store it as a term for sum below fs.append(f) # Add sum of factorization monomials g = 0 for f in fs: g = g + f add_vertex(g) # Rebuild dependencies dependencies = compute_dependencies(se2i, SV) return SV, se2i, dependencies ffc-2019.1.0.post0/ffc/uflacs/analysis/graph.py000066400000000000000000000031461346026536000210700ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . 
"""Linearized data structure for the computational graph.""" from ffc.uflacs.analysis.graph_vertices import build_graph_vertices from ffc.uflacs.analysis.graph_symbols import build_graph_symbols class Graph2(object): def __init__(self): self.nv = 0 self.V = [] self.e2i = {} self.expression_vertices = [] self.V_shapes = [] self.V_symbols = None # Crs matrix self.total_unique_symbols = 0 def build_graph(expressions, DEBUG=False): # Make empty graph G = Graph2() # Populate with vertices G.e2i, G.V, G.expression_vertices = build_graph_vertices(expressions) G.nv = len(G.V) # Populate with symbols G.V_shapes, G.V_symbols, G.total_unique_symbols = \ build_graph_symbols(G.V, G.e2i, DEBUG) if DEBUG: assert G.total_unique_symbols == len(set(G.V_symbols.data)) return G ffc-2019.1.0.post0/ffc/uflacs/analysis/graph_rebuild.py000066400000000000000000000265461346026536000226070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """Rebuilding UFL expressions from linearized representation of computational graph.""" import numpy from ufl import product from ufl.permutation import compute_indices from ufl import as_vector from ufl.classes import MultiIndex, IndexSum, Product from ufl.corealg.multifunction import MultiFunction from ufl.utils.indexflattening import flatten_multiindex, shape_to_strides from ffc.log import error from ffc.uflacs.analysis.modified_terminals import is_modified_terminal class ReconstructScalarSubexpressions(MultiFunction): def __init__(self): super(ReconstructScalarSubexpressions, self).__init__() # No fallbacks, need to specify each type or group of types explicitly def expr(self, o, *args, **kwargs): error("No handler for type %s" % type(o)) def terminal(self, o): error("Not expecting terminal expression in here, got %s." % type(o)) # These types are not expected to be part of the graph at this point def unexpected(self, o, *args, **kwargs): error("Not expecting expression of type %s in here." % type(o)) multi_index = unexpected expr_list = unexpected expr_mapping = unexpected utility_type = unexpected label = unexpected component_tensor = unexpected list_tensor = unexpected transposed = unexpected variable = unexpected def scalar_nary(self, o, ops): if o.ufl_shape != (): error("Expecting scalar.") sops = [op[0] for op in ops] return [o._ufl_expr_reconstruct_(*sops)] # Unary scalar functions math_function = scalar_nary abs = scalar_nary min_value = scalar_nary max_value = scalar_nary # Binary scalar functions power = scalar_nary bessel_function = scalar_nary # TODO: Is this ok? 
atan_2 = scalar_nary def condition(self, o, ops): # A condition is always scalar, so len(op) == 1 sops = [op[0] for op in ops] return [o._ufl_expr_reconstruct_(*sops)] def conditional(self, o, ops): # A condition can be non scalar symbols = [] n = len(ops[1]) if len(ops[0]) != 1: error("Condition should be scalar.") if n != len(ops[2]): error("Conditional branches should have same shape.") for i in range(len(ops[1])): sops = (ops[0][0], ops[1][i], ops[2][i]) symbols.append(o._ufl_expr_reconstruct_(*sops)) return symbols def division(self, o, ops): if len(ops) != 2: error("Expecting two operands.") if len(ops[1]) != 1: error("Expecting scalar divisor.") b, = ops[1] return [o._ufl_expr_reconstruct_(a, b) for a in ops[0]] def sum(self, o, ops): if len(ops) != 2: error("Expecting two operands.") if len(ops[0]) != len(ops[1]): error("Expecting scalar divisor.") return [o._ufl_expr_reconstruct_(a, b) for a, b in zip(ops[0], ops[1])] def product(self, o, ops): if len(ops) != 2: error("Expecting two operands.") # Get the simple cases out of the way if len(ops[0]) == 1: # True scalar * something a, = ops[0] return [Product(a, b) for b in ops[1]] elif len(ops[1]) == 1: # Something * true scalar b, = ops[1] return [Product(a, b) for a in ops[0]] # Neither of operands are true scalars, this is the tricky part o0, o1 = o.ufl_operands # Get shapes and index shapes fi = o.ufl_free_indices fi0 = o0.ufl_free_indices fi1 = o1.ufl_free_indices fid = o.ufl_index_dimensions fid0 = o0.ufl_index_dimensions fid1 = o1.ufl_index_dimensions # Need to map each return component to one component of o0 and one component of o1. indices = compute_indices(fid) # Compute which component of o0 is used in component (comp,ind) of o # Compute strides within free index spaces ist0 = shape_to_strides(fid0) ist1 = shape_to_strides(fid1) # Map o0 and o1 indices to o indices indmap0 = [fi.index(i) for i in fi0] indmap1 = [fi.index(i) for i in fi1] indks = [(flatten_multiindex([ind[i] for i in indmap0], ist0), flatten_multiindex([ind[i] for i in indmap1], ist1)) for ind in indices] # Build products for scalar components results = [Product(ops[0][k0], ops[1][k1]) for k0, k1 in indks] return results def index_sum(self, o, ops): summand, mi = o.ufl_operands ic = mi[0].count() fi = summand.ufl_free_indices fid = summand.ufl_index_dimensions ipos = fi.index(ic) d = fid[ipos] # Compute "macro-dimensions" before and after i in the total shape of a predim = product(summand.ufl_shape) * product(fid[:ipos]) postdim = product(fid[ipos+1:]) # Map each flattened total component of summand to # flattened total component of indexsum o by removing # axis corresponding to summation index ii. ss = ops[0] # Scalar subexpressions of summand if len(ss) != predim * postdim * d: error("Mismatching number of subexpressions.") sops = [] for i in range(predim): iind = i * (postdim * d) for k in range(postdim): ind = iind + k sops.append([ss[ind + j * postdim] for j in range(d)]) # For each scalar output component, sum over collected subcomponents # TODO: Need to split this into binary additions to work with future CRSArray format, # i.e. emitting more expressions than there are symbols for this node. results = [sum(sop) for sop in sops] return results # TODO: To implement compound tensor operators such as dot and inner, # we need to identify which index to do the contractions over, # and build expressions such as sum(a*b for a,b in zip(aops, bops)) def rebuild_expression_from_graph(G): "This is currently only used by tests." 
w = rebuild_with_scalar_subexpressions(G) # Find expressions of final v if len(w) == 1: return w[0] else: return as_vector(w) # TODO: Consider shape of initial v def rebuild_with_scalar_subexpressions(G, targets=None): """Build a new expression2index mapping where each subexpression is scalar valued. Input: - G.e2i - G.V - G.V_symbols - G.total_unique_symbols Output: - NV - Array with reverse mapping from index to expression - nvs - Tuple of ne2i indices corresponding to the last vertex of G.V Old output now no longer returned but possible to restore if needed: - ne2i - Mapping from scalar subexpressions to a contiguous unique index - W - Array with reconstructed scalar subexpressions for each original symbol """ # From simplefsi3d.ufl: # GRAPH SIZE: len(G.V), G.total_unique_symbols # GRAPH SIZE: 16251 635272 # GRAPH SIZE: 473 8210 # GRAPH SIZE: 9663 238021 # GRAPH SIZE: 88913 3448634 # 3.5 M!!! # Algorithm to apply to each subexpression reconstruct_scalar_subexpressions = ReconstructScalarSubexpressions() # Array to store the scalar subexpression in for each symbol W = numpy.empty(G.total_unique_symbols, dtype=object) # Iterate over each graph node in order for i, v in enumerate(G.V): # Find symbols of v components vs = G.V_symbols[i] # Skip if there's nothing new here (should be the case for indexing types) if all(W[s] is not None for s in vs): continue if is_modified_terminal(v): # if v.ufl_free_indices: # error("Expecting no free indices.") sh = v.ufl_shape if sh: # Store each terminal expression component. # We may not actually need all of these later, # but that will be optimized away. # Note: symmetries will be dealt with in the value numbering. ws = [v[c] for c in compute_indices(sh)] else: # Store single modified terminal expression component if len(vs) != 1: error("Expecting single symbol for scalar valued modified terminal.") ws = [v] # FIXME: Replace ws[:] with 0's if its table is empty # Possible redesign: loop over modified terminals only first, # then build tables for them, set W[s] = 0.0 for modified terminals with zero table, # then loop over non-(modified terminal)s to reconstruct expression. else: # Find symbols of operands sops = [] for j, vop in enumerate(v.ufl_operands): if isinstance(vop, MultiIndex): # TODO: Store MultiIndex in G.V and allocate a symbol to it for this to work if not isinstance(v, IndexSum): error("Not expecting a %s." % type(v)) sops.append(()) else: # TODO: Build edge datastructure and use instead? # k = G.E[i][j] k = G.e2i[vop] sops.append(G.V_symbols[k]) # Fetch reconstructed operand expressions wops = [tuple(W[k] for k in so) for so in sops] # Reconstruct scalar subexpressions of v ws = reconstruct_scalar_subexpressions(v, wops) # Store all scalar subexpressions for v symbols if len(vs) != len(ws): error("Expecting one symbol for each expression.") # Store each new scalar subexpression in W at the index of its symbol handled = set() for s, w in zip(vs, ws): if W[s] is None: W[s] = w handled.add(s) else: assert s in handled # Result of symmetry! 
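    # Illustrative note (not from the original sources): after this loop W maps
    # every value-numbered scalar symbol to a scalar UFL expression; e.g. a
    # hypothetical vector coefficient f of shape (2,) typically contributes two
    # symbols whose W entries are the component expressions f[0] and f[1], and
    # operators above f are rebuilt componentwise from those.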
# Find symbols of requested targets or final v from input graph if targets is None: targets = [G.V[-1]] # Attempt to extend this to multiple target expressions scalar_target_expressions = [] for target in targets: ti = G.e2i[target] vs = G.V_symbols[ti] # Sanity check: assert that we've handled these symbols if any(W[s] is None for s in vs): error("Expecting that all symbols in vs are handled at this point.") scalar_target_expressions.append([W[s] for s in vs]) # Return the scalar expressions for each of the components assert len(scalar_target_expressions) == 1 # TODO: Currently expected by callers, fix those first return scalar_target_expressions[0] # ... TODO: then return list ffc-2019.1.0.post0/ffc/uflacs/analysis/graph_ssa.py000066400000000000000000000201101346026536000217240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Algorithms for working with computational graphs.""" import numpy from ufl.classes import (GeometricQuantity, ConstantValue, Argument, Coefficient, Grad, Restricted, Indexed, MathFunction) from ufl.checks import is_cellwise_constant from ffc.log import error from ffc.uflacs.analysis.crsarray import CRSArray def default_partition_seed(expr, rank): """ Partition 0: Piecewise constant on each cell (including Real and DG0 coefficients) Partition 1: Depends on x Partition 2: Depends on x and coefficients Partitions [3,3+rank): depend on argument with count partition-3 """ # TODO: Use named constants for the partition numbers here modifiers = (Grad, Restricted, Indexed) # FIXME: Add CellAvg, FacetAvg types here, others? if isinstance(expr, modifiers): return default_partition_seed(expr.ufl_operands[0], rank) elif isinstance(expr, Argument): ac = expr.number() assert 0 <= ac < rank poffset = 3 p = poffset + ac return p elif isinstance(expr, Coefficient): if is_cellwise_constant(expr): # This is crap, doesn't include grad modifier return 0 else: return 2 elif isinstance(expr, GeometricQuantity): if is_cellwise_constant(expr): # This is crap, doesn't include grad modifier return 0 else: return 1 elif isinstance(expr, ConstantValue): return 0 else: error("Don't know how to handle %s" % expr) def mark_partitions(V, active, dependencies, rank, partition_seed=default_partition_seed, partition_combiner=max): """FIXME: Cover this with tests. Input: - V - Array of expressions. - active - Boolish array. - dependencies - CRSArray with V dependencies. - partition_seed - Policy for determining the partition of a terminalish. - partition_combiner - Policy for determinging the partition of an operator. Output: - partitions - Array of partition int ids. 
""" n = len(V) assert len(active) == n assert len(dependencies) == n partitions = numpy.zeros(n, dtype=int) for i, v in enumerate(V): deps = dependencies[i] if active[i]: if len(deps): p = partition_combiner([partitions[d] for d in deps]) else: p = partition_seed(v, rank) else: p = -1 partitions[i] = p return partitions """ def build_factorized_partitions(): num_points = [3] # dofrange = (begin, end) # dofblock = () | (dofrange0,) | (dofrange0, dofrange1) partitions = {} # partitions["piecewise"] = partition of expressions independent of quadrature and argument loops partitions["piecewise"] = [] # partitions["varying"][np] = partition of expressions dependent on np quadrature but independent of argument loops partitions["varying"] = dict((np, []) for np in num_points) # partitions["argument"][np][iarg][dofrange] = partition depending on this dofrange of argument iarg partitions["argument"] = dict((np, [dict() for i in range(rank)]) for np in num_points) # partitions["integrand"][np][dofrange] = partition depending on this dofrange of argument iarg partitions["integrand"] = dict((np, dict()) for np in num_points) """ def compute_dependency_count(dependencies): """FIXME: Test""" n = len(dependencies) depcount = numpy.zeros(n, dtype=int) for i in range(n): for d in dependencies[i]: depcount[d] += 1 return depcount def invert_dependencies(dependencies, depcount): """FIXME: Test""" n = len(dependencies) m = sum(depcount) invdeps = [()] * n for i in range(n): for d in dependencies[i]: invdeps[d] = invdeps[d] + (i,) return CRSArray.from_rows(invdeps, n, m, int) def default_cache_score_policy(vtype, ndeps, ninvdeps, partition): # Start at 1 and then multiply with various heuristic factors s = 1 # Is the type particularly expensive to compute? expensive = (MathFunction,) if vtype in expensive: # Could make a type-to-cost mapping, but this should do. s *= 20 # More deps roughly correlates to more operations s *= ndeps # If it is reused several times let that count significantly s *= ninvdeps ** 3 # 1->1, 2->8, 3->27 # Finally let partition count for something? # Or perhaps we need some more information, such as # when x from outer loop is used by y within inner loop. return s def compute_cache_scores(V, active, dependencies, inverse_dependencies, partitions, cache_score_policy=default_cache_score_policy): """FIXME: Cover with tests. TODO: Experiment with heuristics later when we have functional code generation. """ n = len(V) score = numpy.zeros(n, dtype=int) for i, v in enumerate(V): if active[i]: deps = dependencies[i] ndeps = len(deps) invdeps = inverse_dependencies[i] ninvdeps = len(invdeps) p = partitions[i] s = cache_score_policy(type(v), ndeps, ninvdeps, p) else: s = -1 score[i] = s return score import heapq def allocate_registers(active, partitions, targets, scores, max_registers, score_threshold): """FIXME: Cover with tests. TODO: Allow reuse of registers, reducing memory usage. TODO: Probably want to sort within partitions. 
""" # Check for consistent number of variables n = len(scores) assert n == len(active) assert n == len(partitions) num_targets = len(targets) # Analyse scores #min_score = min(scores) #max_score = max(scores) #mean_score = sum(scores) // n # Can allocate a number of registers up to given threshold num_to_allocate = max(num_targets, min(max_registers, n) - num_targets) to_allocate = set() # For now, just using an arbitrary heuristic algorithm to select m largest scores queue = [(-scores[i], i) for i in range(n) if active[i]] heapq.heapify(queue) # Always allocate registers for all targets, for simplicity # in the rest of the code generation pipeline to_allocate.update(targets) # Allocate one register each for max_registers largest symbols for r in range(num_to_allocate): s, i = heapq.heappop(queue) if -s <= score_threshold: break if i in targets: continue to_allocate.add(i) registers_used = len(to_allocate) # Some consistency checks assert num_to_allocate <= max_registers assert registers_used <= num_to_allocate + len(targets) assert registers_used <= max(max_registers, len(targets)) # Mark allocations allocations = numpy.zeros(n, dtype=int) allocations[:] = -1 for r, i in enumerate(sorted(to_allocate)): allocations[i] = r # Possible data structures for improved register allocations # register_status = numpy.zeros(max_registers, dtype=int) # Stack/set of free registers (should wrap in stack abstraction): # free_registers = numpy.zeros(max_registers, dtype=int) # num_free_registers = max_registers # free_registers[:] = reversed(xrange(max_registers)) return allocations ffc-2019.1.0.post0/ffc/uflacs/analysis/graph_symbols.py000066400000000000000000000075621346026536000226460ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Assigning symbols to computational graph nodes.""" import numpy from ufl import product from ffc.uflacs.analysis.crsarray import CRSArray from ffc.uflacs.analysis.valuenumbering import ValueNumberer from ffc.uflacs.analysis.expr_shapes import total_shape def build_node_shapes(V): """Build total shapes for each node in list representation of expression graph. V is an array of ufl expressions, possibly nonscalar and with free indices. Returning a CRSArray where row i is the total shape of V[i]. """ # Dimensions of returned CRSArray nv = len(V) k = 0 # Store shapes intermediately in an array of tuples V_shapes = numpy.empty(nv, dtype=object) for i, v in enumerate(V): # Compute total shape of V[i] tsh = total_shape(v) V_shapes[i] = tsh # Count number of elements for CRSArray representation k += len(tsh) # Return a more memory efficient CRSArray representation return CRSArray.from_rows(V_shapes, nv, k, int) def build_node_sizes(V_shapes): "Compute all the products of a sequence of shapes." 
nv = len(V_shapes) V_sizes = numpy.zeros(nv, dtype=int) for i, sh in enumerate(V_shapes): V_sizes[i] = product(sh) return V_sizes def build_node_symbols(V, e2i, V_shapes, V_sizes): """Tabulate scalar value numbering of all nodes in a a list based representation of an expression graph. Returns: V_symbols - CRSArray of symbols (value numbers) of each component of each node in V. total_unique_symbols - The number of symbol values assigned to unique scalar components of the nodes in V. """ # "Sparse" int matrix for storing variable number of entries (symbols) per row (vertex), # with a capasity bounded by the number of scalar subexpressions including repetitions V_symbols = CRSArray(len(V), sum(V_sizes), int) # Visit each node with value numberer algorithm, storing the result for each as a row in the V_symbols CRSArray value_numberer = ValueNumberer(e2i, V_sizes, V_symbols) for i, v in enumerate(V): V_symbols.push_row(value_numberer(v, i)) # Get the actual number of symbols created total_unique_symbols = value_numberer.symbol_count # assert all(x < total_unique_symbols for x in V_symbols.data) # assert (total_unique_symbols-1) in V_symbols.data return V_symbols, total_unique_symbols def build_graph_symbols(V, e2i, DEBUG): """Tabulate scalar value numbering of all nodes in a a list based representation of an expression graph. Returns: V_shapes - CRSArray of the total shapes of nodes in V. V_symbols - CRSArray of symbols (value numbers) of each component of each node in V. total_unique_symbols - The number of symbol values assigned to unique scalar components of the nodes in V. """ # Compute the total shape (value shape x index dimensions) for each node V_shapes = build_node_shapes(V) # Compute the total value size for each node V_sizes = build_node_sizes(V_shapes) # Mark values with symbols V_symbols, total_unique_symbols = build_node_symbols(V, e2i, V_shapes, V_sizes) return V_shapes, V_symbols, total_unique_symbols ffc-2019.1.0.post0/ffc/uflacs/analysis/graph_vertices.py000066400000000000000000000062401346026536000227720ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Algorithms for working with graphs.""" import numpy from ufl.classes import MultiIndex, Label from ffc.uflacs.analysis.modified_terminals import is_modified_terminal def count_nodes_with_unique_post_traversal(expr, e2i=None, skip_terminal_modifiers=False): """Yields o for each node o in expr, child before parent. Never visits a node twice.""" if e2i is None: e2i = {} def getops(e): "Get a modifyable list of operands of e, optionally treating modified terminals as a unit." 
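        # Illustrative note (not from the original sources): with
        # skip_terminal_modifiers=True a modified terminal such as grad(u)[0]
        # is treated as a single leaf, so its operands are not visited and it
        # gets a single entry in e2i.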
# TODO: Maybe use e._ufl_is_terminal_modifier_ if e._ufl_is_terminal_ or (skip_terminal_modifiers and is_modified_terminal(e)): return [] else: return list(e.ufl_operands) stack = [] stack.append((expr, getops(expr))) while stack: expr, ops = stack[-1] for i, o in enumerate(ops): if o is not None and o not in e2i: stack.append((o, getops(o))) ops[i] = None break else: if not isinstance(expr, (MultiIndex, Label)): count = len(e2i) e2i[expr] = count stack.pop() return e2i def build_array_from_counts(e2i): nv = len(e2i) V = numpy.empty(nv, dtype=object) for e, i in e2i.items(): V[i] = e return V def build_node_counts(expressions): e2i = {} for expr in expressions: count_nodes_with_unique_post_traversal(expr, e2i, False) return e2i def build_scalar_node_counts(expressions): # Count unique expression nodes across multiple expressions e2i = {} for expr in expressions: count_nodes_with_unique_post_traversal(expr, e2i, True) return e2i def build_graph_vertices(expressions): # Count unique expression nodes e2i = build_node_counts(expressions) # Make a list of the nodes by their ordering V = build_array_from_counts(e2i) # Get vertex indices representing input expression roots ri = [e2i[expr] for expr in expressions] return e2i, V, ri def build_scalar_graph_vertices(expressions): # Count unique expression nodes across multiple expressions, treating modified terminals as a unit e2i = build_scalar_node_counts(expressions) # Make a list of the nodes by their ordering V = build_array_from_counts(e2i) # Get vertex indices representing input expression roots ri = [e2i[expr] for expr in expressions] return e2i, V, ri ffc-2019.1.0.post0/ffc/uflacs/analysis/indexing.py000066400000000000000000000121651346026536000215750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . 
"""Algorithms for working with multiindices.""" from ufl import product from ufl.permutation import compute_indices from ufl.utils.indexflattening import shape_to_strides, flatten_multiindex from ufl.classes import ComponentTensor, FixedIndex, Index, Indexed def map_indexed_arg_components(indexed): """Build integer list mapping between flattended components of indexed expression and its underlying tensor-valued subexpression.""" assert isinstance(indexed, Indexed) # AKA indexed = tensor[multiindex] tensor, multiindex = indexed.ufl_operands # AKA e1 = e2[multiindex] # (this renaming is historical, but kept for consistency with all the variables *1,*2 below) e2 = tensor e1 = indexed # Get tensor and index shape sh1 = e1.ufl_shape sh2 = e2.ufl_shape fi1 = e1.ufl_free_indices fi2 = e2.ufl_free_indices fid1 = e1.ufl_index_dimensions fid2 = e2.ufl_index_dimensions # Compute regular and total shape tsh1 = sh1 + fid1 tsh2 = sh2 + fid2 # r1 = len(tsh1) r2 = len(tsh2) # str1 = shape_to_strides(tsh1) str2 = shape_to_strides(tsh2) assert not sh1 assert sh2 # Must have shape to be indexed in the first place assert product(tsh1) <= product(tsh2) # Build map from fi2/fid2 position (-offset nmui) to fi1/fid1 position ind2_to_ind1_map = [None] * len(fi2) for k, i in enumerate(fi2): ind2_to_ind1_map[k] = fi1.index(i) # Build map from fi1/fid1 position to mi position nmui = len(multiindex) multiindex_to_ind1_map = [None] * nmui for k, i in enumerate(multiindex): if isinstance(i, Index): multiindex_to_ind1_map[k] = fi1.index(i.count()) # Build map from flattened e1 component to flattened e2 component perm1 = compute_indices(tsh1) ni1 = product(tsh1) # Situation: e1 = e2[mi] d1 = [None] * ni1 p2 = [None] * r2 assert len(sh2) == nmui for k, i in enumerate(multiindex): if isinstance(i, FixedIndex): p2[k] = int(i) for c1, p1 in enumerate(perm1): for k, i in enumerate(multiindex): if isinstance(i, Index): p2[k] = p1[multiindex_to_ind1_map[k]] for k, i in enumerate(ind2_to_ind1_map): p2[nmui + k] = p1[i] c2 = flatten_multiindex(p2, str2) d1[c1] = c2 # Consistency checks assert all(isinstance(x, int) for x in d1) assert len(set(d1)) == len(d1) return d1 def map_component_tensor_arg_components(tensor): """Build integer list mapping between flattended components of tensor and its underlying indexed subexpression.""" assert isinstance(tensor, ComponentTensor) # AKA tensor = as_tensor(indexed, multiindex) indexed, multiindex = tensor.ufl_operands e1 = indexed e2 = tensor # e2 = as_tensor(e1, multiindex) mi = [i for i in multiindex if isinstance(i, Index)] # Get tensor and index shapes sh1 = e1.ufl_shape # (sh)ape of e1 sh2 = e2.ufl_shape # (sh)ape of e2 fi1 = e1.ufl_free_indices # (f)ree (i)ndices of e1 fi2 = e2.ufl_free_indices # ... fid1 = e1.ufl_index_dimensions # (f)ree (i)ndex (d)imensions of e1 fid2 = e2.ufl_index_dimensions # ... # Compute total shape (tsh) of e1 and e2 tsh1 = sh1 + fid1 tsh2 = sh2 + fid2 r1 = len(tsh1) # 'total rank' or e1 r2 = len(tsh2) # ... 
str1 = shape_to_strides(tsh1) assert not sh1 assert sh2 assert len(mi) == len(multiindex) assert product(tsh1) == product(tsh2) assert fi1 assert all(i in fi1 for i in fi2) nmui = len(multiindex) assert nmui == len(sh2) # Build map from fi2/fid2 position (-offset nmui) to fi1/fid1 position p2_to_p1_map = [None] * r2 for k, i in enumerate(fi2): p2_to_p1_map[k + nmui] = fi1.index(i) # Build map from fi1/fid1 position to mi position for k, i in enumerate(mi): p2_to_p1_map[k] = fi1.index(mi[k].count()) # Build map from flattened e1 component to flattened e2 component perm2 = compute_indices(tsh2) ni2 = product(tsh2) # Situation: e2 = as_tensor(e1, mi) d2 = [None] * ni2 p1 = [None] * r1 for c2, p2 in enumerate(perm2): for k2, k1 in enumerate(p2_to_p1_map): p1[k1] = p2[k2] c1 = flatten_multiindex(p1, str1) d2[c2] = c1 # Consistency checks assert all(isinstance(x, int) for x in d2) assert len(set(d2)) == len(d2) return d2 ffc-2019.1.0.post0/ffc/uflacs/analysis/modified_terminals.py000066400000000000000000000257551346026536000236370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Definitions of 'modified terminals', a core concept in uflacs.""" from ufl.permutation import build_component_numbering from ufl.classes import (FormArgument, Argument, Indexed, FixedIndex, SpatialCoordinate, Jacobian, ReferenceValue, Grad, ReferenceGrad, Restricted, FacetAvg, CellAvg) from ffc.log import error class ModifiedTerminal(object): """A modified terminal expression is an object of a Terminal subtype, wrapped in terminal modifier types. 
The variables of this class are: expr - The original UFL expression terminal - the underlying Terminal object global_derivatives - tuple of ints, each meaning derivative in that global direction local_derivatives - tuple of ints, each meaning derivative in that local direction reference_value - bool, whether this is represented in reference frame averaged - None, 'facet' or 'cell' restriction - None, '+' or '-' component - tuple of ints, the global component of the Terminal flat_component - single int, flattened local component of the Terminal, considering symmetry Possibly other component model: - global_component - reference_component - flat_component """ def __init__(self, expr, terminal, reference_value, base_shape, base_symmetry, component, flat_component, global_derivatives, local_derivatives, averaged, restriction): # The original expression self.expr = expr # The underlying terminal expression self.terminal = terminal # Are we seeing the terminal in physical or reference frame self.reference_value = reference_value # Get the shape of the core terminal or its reference value, # this is the shape that component and flat_component refers to self.base_shape = base_shape self.base_symmetry = base_symmetry # Components self.component = component self.flat_component = flat_component # Derivatives self.global_derivatives = global_derivatives self.local_derivatives = local_derivatives # Evaluation method (alternatives: { None, 'facet_midpoint', # 'cell_midpoint', 'facet_avg', 'cell_avg' }) self.averaged = averaged # Restriction to one cell or the other for interior facet integrals self.restriction = restriction def as_tuple(self): """Return a tuple with hashable values that uniquely identifies this modified terminal. Some of the derived variables can be omitted here as long as they are fully determined from the variables that are included here. """ t = self.terminal # FIXME: Terminal is not sortable... rv = self.reference_value #bs = self.base_shape #bsy = self.base_symmetry #c = self.component fc = self.flat_component gd = self.global_derivatives ld = self.local_derivatives a = self.averaged r = self.restriction return (t, rv, fc, gd, ld, a, r) def argument_ordering_key(self): """Return a key for deterministic sorting of argument vertex indices based on the properties of the modified terminal. 
Used in factorization but moved here for closeness with ModifiedTerminal attributes.""" t = self.terminal assert isinstance(t, Argument) n = t.number() assert n >= 0 p = t.part() rv = self.reference_value #bs = self.base_shape #bsy = self.base_symmetry #c = self.component fc = self.flat_component gd = self.global_derivatives ld = self.local_derivatives a = self.averaged r = self.restriction return (n, p, rv, fc, gd, ld, a, r) def __hash__(self): return hash(self.as_tuple()) def __eq__(self, other): return isinstance(other, ModifiedTerminal) and self.as_tuple() == other.as_tuple() #def __lt__(self, other): # error("Shouldn't use this?") # # FIXME: Terminal is not sortable, so the as_tuple contents # # must be changed for this to work properly # return self.as_tuple() < other.as_tuple() def __str__(self): s = [] s += ["terminal: {0}".format(self.terminal)] s += ["global_derivatives: {0}".format(self.global_derivatives)] s += ["local_derivatives: {0}".format(self.local_derivatives)] s += ["averaged: {0}".format(self.averaged)] s += ["component: {0}".format(self.component)] s += ["restriction: {0}".format(self.restriction)] return '\n'.join(s) def is_modified_terminal(v): "Check if v is a terminal or a terminal wrapped in terminal modifier types." while not v._ufl_is_terminal_: if v._ufl_is_terminal_modifier_: v = v.ufl_operands[0] else: return False return True def strip_modified_terminal(v): "Extract core Terminal from a modified terminal or return None." while not v._ufl_is_terminal_: if v._ufl_is_terminal_modifier_: v = v.ufl_operands[0] else: return None return v def analyse_modified_terminal(expr): """Analyse a so-called 'modified terminal' expression. Return its properties in more compact form as a ModifiedTerminal object. A modified terminal expression is an object of a Terminal subtype, wrapped in terminal modifier types. The wrapper types can include 0-* Grad or ReferenceGrad objects, and 0-1 ReferenceValue, 0-1 Restricted, 0-1 Indexed, and 0-1 FacetAvg or CellAvg objects. 
""" # Data to determine component = None global_derivatives = [] local_derivatives = [] reference_value = None restriction = None averaged = None # Start with expr and strip away layers of modifiers t = expr while not t._ufl_is_terminal_: if isinstance(t, Indexed): if component is not None: error("Got twice indexed terminal.") t, i = t.ufl_operands component = [int(j) for j in i] if not all(isinstance(j, FixedIndex) for j in i): error("Expected only fixed indices.") elif isinstance(t, ReferenceValue): if reference_value is not None: error("Got twice pulled back terminal!") t, = t.ufl_operands reference_value = True elif isinstance(t, ReferenceGrad): if not component: # covers None or () error("Got local gradient of terminal without prior indexing.") t, = t.ufl_operands local_derivatives.append(component[-1]) component = component[:-1] elif isinstance(t, Grad): if not component: # covers None or () error("Got local gradient of terminal without prior indexing.") t, = t.ufl_operands global_derivatives.append(component[-1]) component = component[:-1] elif isinstance(t, Restricted): if restriction is not None: error("Got twice restricted terminal!") restriction = t._side t, = t.ufl_operands elif isinstance(t, CellAvg): if averaged is not None: error("Got twice averaged terminal!") t, = t.ufl_operands averaged = "cell" elif isinstance(t, FacetAvg): if averaged is not None: error("Got twice averaged terminal!") t, = t.ufl_operands averaged = "facet" elif t._ufl_terminal_modifiers_: error("Missing handler for terminal modifier type %s, object is %s." % (type(t), repr(t))) else: error("Unexpected type %s object %s." % (type(t), repr(t))) # Make canonical representation of derivatives global_derivatives = tuple(sorted(global_derivatives)) local_derivatives = tuple(sorted(local_derivatives)) # TODO: Temporarily letting local_derivatives imply reference_value, # but this was not intended to be the case #if local_derivatives: # reference_value = True # Make reference_value true or false reference_value = reference_value or False # Consistency check if isinstance(t, (SpatialCoordinate, Jacobian)): pass else: if local_derivatives and not reference_value: error("Local derivatives of non-local value is not legal.") if global_derivatives and reference_value: error("Global derivatives of local value is not legal.") # Make sure component is an integer tuple if component is None: component = () else: component = tuple(component) # Get the shape of the core terminal or its reference value, # this is the shape that component refers to if isinstance(t, FormArgument): element = t.ufl_element() if reference_value: # Ignoring symmetry, assuming already applied in conversion to reference frame base_symmetry = {} base_shape = element.reference_value_shape() else: base_symmetry = element.symmetry() base_shape = t.ufl_shape else: base_symmetry = {} base_shape = t.ufl_shape # Assert that component is within the shape of the (reference) terminal if len(component) != len(base_shape): error("Length of component does not match rank of (reference) terminal.") if not all(c >= 0 and c < d for c, d in zip(component, base_shape)): error("Component indices %s are outside value shape %s" % (component, base_shape)) # Flatten component vi2si, si2vi = build_component_numbering(base_shape, base_symmetry) flat_component = vi2si[component] # num_flat_components = len(si2vi) return ModifiedTerminal(expr, t, reference_value, base_shape, base_symmetry, component, flat_component, global_derivatives, local_derivatives, averaged, restriction) 
ffc-2019.1.0.post0/ffc/uflacs/analysis/valuenumbering.py000066400000000000000000000167551346026536000230240ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Algorithms for value numbering within computational graphs.""" from ufl import product from ufl.permutation import compute_indices from ufl.corealg.multifunction import MultiFunction from ufl.classes import FormArgument from ffc.log import error from ffc.uflacs.analysis.indexing import map_indexed_arg_components, map_component_tensor_arg_components from ffc.uflacs.analysis.modified_terminals import analyse_modified_terminal class ValueNumberer(MultiFunction): """An algorithm to map the scalar components of an expression node to unique value numbers, with fallthrough for types that can be mapped to the value numbers of their operands.""" def __init__(self, e2i, V_sizes, V_symbols): MultiFunction.__init__(self) self.symbol_count = 0 self.e2i = e2i self.V_sizes = V_sizes self.V_symbols = V_symbols def new_symbols(self, n): "Generator for new symbols with a running counter." begin = self.symbol_count end = begin + n self.symbol_count = end return list(range(begin, end)) def new_symbol(self): "Generator for new symbols with a running counter." begin = self.symbol_count self.symbol_count += 1 return begin def get_node_symbols(self, expr): return self.V_symbols[self.e2i[expr]] def expr(self, v, i): "Create new symbols for expressions that represent new values." n = self.V_sizes[i] return self.new_symbols(n) def form_argument(self, v, i): "Create new symbols for expressions that represent new values." symmetry = v.ufl_element().symmetry() if symmetry: # Build symbols with symmetric components skipped symbols = [] mapped_symbols = {} for c in compute_indices(v.ufl_shape): # Build mapped component mc with symmetries from element considered mc = symmetry.get(c, c) # Get existing symbol or create new and store with mapped component mc as key s = mapped_symbols.get(mc) if s is None: s = self.new_symbol() mapped_symbols[mc] = s symbols.append(s) else: n = self.V_sizes[i] symbols = self.new_symbols(n) return symbols def _modified_terminal(self, v, i): """Modifiers: terminal - the underlying Terminal object global_derivatives - tuple of ints, each meaning derivative in that global direction local_derivatives - tuple of ints, each meaning derivative in that local direction reference_value - bool, whether this is represented in reference frame averaged - None, 'facet' or 'cell' restriction - None, '+' or '-' component - tuple of ints, the global component of the Terminal flat_component - single int, flattened local component of the Terminal, considering symmetry """ # (1) mt.terminal.ufl_shape defines a core indexing space UNLESS mt.reference_value, # in which case the reference value shape of the element must be used. 
# (2) mt.terminal.ufl_element().symmetry() defines core symmetries # (3) averaging and restrictions define distinct symbols, no additional symmetries # (4) two or more grad/reference_grad defines distinct symbols with additional symmetries # v is not necessary scalar here, indexing in (0,...,0) picks the first scalar component # to analyse, which should be sufficient to get the base shape and derivatives if v.ufl_shape: mt = analyse_modified_terminal(v[(0,) * len(v.ufl_shape)]) else: mt = analyse_modified_terminal(v) # Get derivatives num_ld = len(mt.local_derivatives) num_gd = len(mt.global_derivatives) assert not (num_ld and num_gd) if num_ld: domain = mt.terminal.ufl_domain() tdim = domain.topological_dimension() d_components = compute_indices((tdim,) * num_ld) elif num_gd: domain = mt.terminal.ufl_domain() gdim = domain.geometric_dimension() d_components = compute_indices((gdim,) * num_gd) else: d_components = [()] # Get base shape without the derivative axes base_components = compute_indices(mt.base_shape) # Build symbols with symmetric components and derivatives skipped symbols = [] mapped_symbols = {} for bc in base_components: for dc in d_components: # Build mapped component mc with symmetries from element and derivatives combined mbc = mt.base_symmetry.get(bc, bc) mdc = tuple(sorted(dc)) mc = mbc + mdc # Get existing symbol or create new and store with mapped component mc as key s = mapped_symbols.get(mc) if s is None: s = self.new_symbol() mapped_symbols[mc] = s symbols.append(s) # Consistency check before returning symbols assert not v.ufl_free_indices if product(v.ufl_shape) != len(symbols): error("Internal error in value numbering.") return symbols # Handle modified terminals with element symmetries and second derivative symmetries! # terminals are implemented separately, or maybe they don't need to be? grad = _modified_terminal reference_grad = _modified_terminal facet_avg = _modified_terminal cell_avg = _modified_terminal restricted = _modified_terminal reference_value = _modified_terminal # indexed is implemented as a fall-through operation def indexed(self, Aii, i): # Reuse symbols of arg A for Aii A = Aii.ufl_operands[0] # Get symbols of argument A A_symbols = self.get_node_symbols(A) # Map A_symbols to Aii_symbols d = map_indexed_arg_components(Aii) symbols = [A_symbols[k] for k in d] return symbols def component_tensor(self, A, i): # Reuse symbols of arg Aii for A Aii = A.ufl_operands[0] # Get symbols of argument Aii Aii_symbols = self.get_node_symbols(Aii) # Map A_symbols to Aii_symbols d = map_component_tensor_arg_components(A) symbols = [Aii_symbols[k] for k in d] return symbols def list_tensor(self, v, i): row_symbols = [self.get_node_symbols(row) for row in v.ufl_operands] symbols = [] for rowsymb in row_symbols: symbols.extend(rowsymb) return symbols def variable(self, v, i): "Direct reuse of all symbols." return self.get_node_symbols(v.ufl_operands[0]) ffc-2019.1.0.post0/ffc/uflacs/backends/000077500000000000000000000000001346026536000173405ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/backends/__init__.py000066400000000000000000000016721346026536000214570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """Collection of backends to UFLACS form compiler algorithms. In the end we want to specify a clean and mature API here. We may or may not want to move e.g. the FFC backend to FFC instead of having it here. """ ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/000077500000000000000000000000001346026536000200765ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/__init__.py000066400000000000000000000014551346026536000222140ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """The FFC specific backend to the UFLACS form compiler algorithms.""" ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/access.py000066400000000000000000000413631346026536000217200ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. 
If not, see """FFC/UFC specific variable access.""" from ufl.corealg.multifunction import MultiFunction from ufl.permutation import build_component_numbering from ufl.measure import custom_integral_types from ufl.finiteelement import MixedElement from ffc.log import error, warning, debug from ffc.fiatinterface import create_element from ffc.uflacs.backends.ffc.symbols import FFCBackendSymbols from ffc.uflacs.backends.ffc.common import num_coordinate_component_dofs class FFCBackendAccess(MultiFunction): """FFC specific cpp formatter class.""" def __init__(self, ir, language, symbols, parameters): MultiFunction.__init__(self) # Store ir and parameters self.entitytype = ir["entitytype"] self.integral_type = ir["integral_type"] self.language = language self.symbols = symbols self.parameters = parameters # === Rules for all modified terminal types === def expr(self, e, mt, tabledata, num_points): error("Missing handler for type {0}.".format(e._ufl_class_.__name__)) # === Rules for literal constants === def zero(self, e, mt, tabledata, num_points): # We shouldn't have derivatives of constants left at this point assert not (mt.global_derivatives or mt.local_derivatives) # NB! UFL doesn't retain float/int type information for zeros... L = self.language return L.LiteralFloat(0.0) def int_value(self, e, mt, tabledata, num_points): # We shouldn't have derivatives of constants left at this point assert not (mt.global_derivatives or mt.local_derivatives) L = self.language return L.LiteralInt(int(e)) def float_value(self, e, mt, tabledata, num_points): # We shouldn't have derivatives of constants left at this point assert not (mt.global_derivatives or mt.local_derivatives) L = self.language return L.LiteralFloat(float(e)) #def quadrature_weight(self, e, mt, tabledata, num_points): # "Quadrature weights are precomputed and need no code." 
# return [] def coefficient(self, e, mt, tabledata, num_points): ttype = tabledata.ttype assert ttype != "zeros" begin, end = tabledata.dofrange if ttype == "ones" and (end - begin) == 1: # f = 1.0 * f_{begin}, just return direct reference to dof array at dof begin # (if mt is restricted, begin contains cell offset) idof = begin return self.symbols.coefficient_dof_access(mt.terminal, idof) elif ttype == "quadrature": # Dofmap should be contiguous in this case assert len(tabledata.dofmap) == end - begin # f(x_q) = sum_i f_i * delta_iq = f_q, just return direct # reference to dof array at quadrature point index + begin iq = self.symbols.quadrature_loop_index() idof = begin + iq return self.symbols.coefficient_dof_access(mt.terminal, idof) else: # Return symbol, see definitions for computation return self.symbols.coefficient_value(mt) #, num_points) def spatial_coordinate(self, e, mt, tabledata, num_points): #L = self.language if mt.global_derivatives: error("Not expecting global derivatives of SpatialCoordinate.") if mt.averaged: error("Not expecting average of SpatialCoordinates.") if self.integral_type in custom_integral_types: if mt.local_derivatives: error("FIXME: Jacobian in custom integrals is not implemented.") # Access predefined quadrature points table x = self.symbols.custom_points_table() iq = self.symbols.quadrature_loop_index() gdim, = mt.terminal.ufl_shape if gdim == 1: index = iq else: index = iq * gdim + mt.flat_component return x[index] else: # Physical coordinates are computed by code generated in definitions return self.symbols.x_component(mt) def cell_coordinate(self, e, mt, tabledata, num_points): #L = self.language if mt.global_derivatives: error("Not expecting derivatives of CellCoordinate.") if mt.local_derivatives: error("Not expecting derivatives of CellCoordinate.") if mt.averaged: error("Not expecting average of CellCoordinate.") if self.integral_type == "cell" and not mt.restriction: # Access predefined quadrature points table X = self.symbols.points_table(num_points) tdim, = mt.terminal.ufl_shape iq = self.symbols.quadrature_loop_index() if num_points == 1: index = mt.flat_component elif tdim == 1: index = iq else: index = iq * tdim + mt.flat_component return X[index] else: # X should be computed from x or Xf symbolically instead of getting here error("Expecting reference cell coordinate to be symbolically rewritten.") def facet_coordinate(self, e, mt, tabledata, num_points): L = self.language if mt.global_derivatives: error("Not expecting derivatives of FacetCoordinate.") if mt.local_derivatives: error("Not expecting derivatives of FacetCoordinate.") if mt.averaged: error("Not expecting average of FacetCoordinate.") if mt.restriction: error("Not expecting restriction of FacetCoordinate.") if self.integral_type in ("interior_facet", "exterior_facet"): tdim, = mt.terminal.ufl_shape if tdim == 0: error("Vertices have no facet coordinates.") elif tdim == 1: # 0D vertex coordinate warning("Vertex coordinate is always 0, should get rid of this in ufl geometry lowering.") return L.LiteralFloat(0.0) Xf = self.points_table(num_points) iq = self.symbols.quadrature_loop_index() assert 0 <= mt.flat_component < (tdim-1) if num_points == 1: index = mt.flat_component elif tdim == 2: index = iq else: index = iq * (tdim - 1) + mt.flat_component return Xf[index] else: # Xf should be computed from X or x symbolically instead of getting here error("Expecting reference facet coordinate to be symbolically rewritten.") def jacobian(self, e, mt, tabledata, num_points): L = 
self.language if mt.global_derivatives: error("Not expecting global derivatives of Jacobian.") if mt.averaged: error("Not expecting average of Jacobian.") return self.symbols.J_component(mt) def reference_cell_volume(self, e, mt, tabledata, access): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("interval", "triangle", "tetrahedron", "quadrilateral", "hexahedron"): return L.Symbol("{0}_reference_cell_volume".format(cellname)) else: error("Unhandled cell types {0}.".format(cellname)) def reference_facet_volume(self, e, mt, tabledata, access): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("interval", "triangle", "tetrahedron", "quadrilateral", "hexahedron"): return L.Symbol("{0}_reference_facet_volume".format(cellname)) else: error("Unhandled cell types {0}.".format(cellname)) def reference_normal(self, e, mt, tabledata, access): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("interval", "triangle", "tetrahedron", "quadrilateral", "hexahedron"): table = L.Symbol("{0}_reference_facet_normals".format(cellname)) facet = self.symbols.entity("facet", mt.restriction) return table[facet][mt.component[0]] else: error("Unhandled cell types {0}.".format(cellname)) def cell_facet_jacobian(self, e, mt, tabledata, num_points): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("triangle", "tetrahedron", "quadrilateral", "hexahedron"): table = L.Symbol("{0}_reference_facet_jacobian".format(cellname)) facet = self.symbols.entity("facet", mt.restriction) return table[facet][mt.component[0]][mt.component[1]] elif cellname == "interval": error("The reference facet jacobian doesn't make sense for interval cell.") else: error("Unhandled cell types {0}.".format(cellname)) def reference_cell_edge_vectors(self, e, mt, tabledata, num_points): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("triangle", "tetrahedron", "quadrilateral", "hexahedron"): table = L.Symbol("{0}_reference_edge_vectors".format(cellname)) return table[mt.component[0]][mt.component[1]] elif cellname == "interval": error("The reference cell edge vectors doesn't make sense for interval cell.") else: error("Unhandled cell types {0}.".format(cellname)) def reference_facet_edge_vectors(self, e, mt, tabledata, num_points): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname in ("tetrahedron", "hexahedron"): table = L.Symbol("{0}_reference_edge_vectors".format(cellname)) facet = self.symbols.entity("facet", mt.restriction) return table[facet][mt.component[0]][mt.component[1]] elif cellname in ("interval", "triangle", "quadrilateral"): error("The reference cell facet edge vectors doesn't make sense for interval or triangle cell.") else: error("Unhandled cell types {0}.".format(cellname)) def cell_orientation(self, e, mt, tabledata, num_points): # Error if not in manifold case: domain = mt.terminal.ufl_domain() assert domain.geometric_dimension() > domain.topological_dimension() return self.symbols.cell_orientation_internal(mt.restriction) def facet_orientation(self, e, mt, tabledata, num_points): L = self.language cellname = mt.terminal.ufl_domain().ufl_cell().cellname() if cellname not in ("interval", "triangle", "tetrahedron"): error("Unhandled cell types {0}.".format(cellname)) table = L.Symbol("{0}_facet_orientations".format(cellname)) facet = self.symbols.entity("facet", mt.restriction) return 
table[facet] def cell_vertices(self, e, mt, tabledata, num_points): L = self.language # Get properties of domain domain = mt.terminal.ufl_domain() gdim = domain.geometric_dimension() coordinate_element = domain.ufl_coordinate_element() # Get dimension and dofmap of scalar element assert isinstance(coordinate_element, MixedElement) assert coordinate_element.value_shape() == (gdim,) ufl_scalar_element, = set(coordinate_element.sub_elements()) assert ufl_scalar_element.family() in ("Lagrange", "Q", "S") fiat_scalar_element = create_element(ufl_scalar_element) vertex_scalar_dofs = fiat_scalar_element.entity_dofs()[0] num_scalar_dofs = fiat_scalar_element.space_dimension() # Get dof and component dof, = vertex_scalar_dofs[mt.component[0]] component = mt.component[1] expr = self.symbols.domain_dof_access(dof, component, gdim, num_scalar_dofs, mt.restriction) return expr def cell_edge_vectors(self, e, mt, tabledata, num_points): L = self.language # Get properties of domain domain = mt.terminal.ufl_domain() cellname = domain.ufl_cell().cellname() gdim = domain.geometric_dimension() coordinate_element = domain.ufl_coordinate_element() if cellname in ("triangle", "tetrahedron", "quadrilateral", "hexahedron"): pass elif cellname == "interval": error("The physical cell edge vectors doesn't make sense for interval cell.") else: error("Unhandled cell types {0}.".format(cellname)) # Get dimension and dofmap of scalar element assert isinstance(coordinate_element, MixedElement) assert coordinate_element.value_shape() == (gdim,) ufl_scalar_element, = set(coordinate_element.sub_elements()) assert ufl_scalar_element.family() in ("Lagrange", "Q", "S") fiat_scalar_element = create_element(ufl_scalar_element) vertex_scalar_dofs = fiat_scalar_element.entity_dofs()[0] num_scalar_dofs = fiat_scalar_element.space_dimension() # Get edge vertices edge = mt.component[0] edge_vertices = fiat_scalar_element.get_reference_element().get_topology()[1][edge] vertex0, vertex1 = edge_vertices # Get dofs and component dof0, = vertex_scalar_dofs[vertex0] dof1, = vertex_scalar_dofs[vertex1] component = mt.component[1] expr = ( self.symbols.domain_dof_access(dof0, component, gdim, num_scalar_dofs, mt.restriction) - self.symbols.domain_dof_access(dof1, component, gdim, num_scalar_dofs, mt.restriction) ) return expr def facet_edge_vectors(self, e, mt, tabledata, num_points): L = self.language # Get properties of domain domain = mt.terminal.ufl_domain() cellname = domain.ufl_cell().cellname() gdim = domain.geometric_dimension() coordinate_element = domain.ufl_coordinate_element() if cellname in ("tetrahedron", "hexahedron"): pass elif cellname in ("interval", "triangle", "quadrilateral"): error("The physical facet edge vectors doesn't make sense for {0} cell.".format(cellname)) else: error("Unhandled cell types {0}.".format(cellname)) # Get dimension and dofmap of scalar element assert isinstance(coordinate_element, MixedElement) assert coordinate_element.value_shape() == (gdim,) ufl_scalar_element, = set(coordinate_element.sub_elements()) assert ufl_scalar_element.family() in ("Lagrange", "Q", "S") fiat_scalar_element = create_element(ufl_scalar_element) vertex_scalar_dofs = fiat_scalar_element.entity_dofs()[0] num_scalar_dofs = fiat_scalar_element.space_dimension() # Get edge vertices facet = self.symbols.entity("facet", mt.restriction) facet_edge = mt.component[0] facet_edge_vertices = L.Symbol("{0}_facet_edge_vertices".format(cellname)) vertex0 = facet_edge_vertices[facet][facet_edge][0] vertex1 = 
facet_edge_vertices[facet][facet_edge][1] # Get dofs and component component = mt.component[1] assert coordinate_element.degree() == 1, "Assuming degree 1 element" dof0 = vertex0 dof1 = vertex1 expr = ( self.symbols.domain_dof_access(dof0, component, gdim, num_scalar_dofs, mt.restriction) - self.symbols.domain_dof_access(dof1, component, gdim, num_scalar_dofs, mt.restriction) ) return expr def _expect_symbolic_lowering(self, e, mt, tabledata, num_points): error("Expecting {0} to be replaced in symbolic preprocessing.".format(type(e))) facet_normal = _expect_symbolic_lowering cell_normal = _expect_symbolic_lowering jacobian_inverse = _expect_symbolic_lowering jacobian_determinant = _expect_symbolic_lowering facet_jacobian = _expect_symbolic_lowering facet_jacobian_inverse = _expect_symbolic_lowering facet_jacobian_determinant = _expect_symbolic_lowering ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/backend.py000066400000000000000000000033231346026536000220400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """Collection of FFC specific pieces for the code generation phase.""" import ffc.uflacs.language.cnodes from ffc.uflacs.language.ufl_to_cnodes import UFL2CNodesTranslatorCpp from ffc.uflacs.backends.ffc.symbols import FFCBackendSymbols from ffc.uflacs.backends.ffc.access import FFCBackendAccess from ffc.uflacs.backends.ffc.definitions import FFCBackendDefinitions class FFCBackend(object): "Class collecting all aspects of the FFC backend." def __init__(self, ir, parameters): # This is the seam where cnodes/C++ is chosen for the ffc backend self.language = ffc.uflacs.language.cnodes self.ufl_to_language = UFL2CNodesTranslatorCpp(self.language) coefficient_numbering = ir["coefficient_numbering"] self.symbols = FFCBackendSymbols(self.language, coefficient_numbering) self.definitions = FFCBackendDefinitions(ir, self.language, self.symbols, parameters) self.access = FFCBackendAccess(ir, self.language, self.symbols, parameters) ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/common.py000066400000000000000000000040611346026536000217410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. 
If not, see """FFC/UFC specific symbol naming.""" from ffc.log import error # TODO: Move somewhere else def num_coordinate_component_dofs(coordinate_element): """Get the number of dofs for a coordinate component for this degree. This is a local hack that works for Lagrange 1-3, better would be to get this passed by ffc from fiat through the ir. The table data is to messy to figure out a clean design for that quickly. """ from ufl.cell import num_cell_entities degree = coordinate_element.degree() cell = coordinate_element.cell() tdim = cell.topological_dimension() cellname = cell.cellname() d = 0 for entity_dim in range(tdim+1): # n = dofs per cell entity if entity_dim == 0: n = 1 elif entity_dim == 1: n = degree - 1 elif entity_dim == 2: n = (degree - 2)*(degree - 1) // 2 elif entity_dim == 3: n = (degree - 3)*(degree - 2)*(degree - 1) // 6 else: error("Entity dimension out of range") # Accumulate num_entities = num_cell_entities[cellname][entity_dim] d += num_entities * n return d # TODO: Get restriction postfix from somewhere central def ufc_restriction_offset(restriction, length): if restriction == "-": return length else: return 0 ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/definitions.py000066400000000000000000000262121346026536000227660ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """FFC/UFC specific variable definitions.""" from ufl.corealg.multifunction import MultiFunction from ufl.measure import custom_integral_types from ffc.log import error, warning from ffc.uflacs.backends.ffc.symbols import FFCBackendSymbols from ffc.uflacs.backends.ffc.common import num_coordinate_component_dofs class FFCBackendDefinitions(MultiFunction): """FFC specific code definitions.""" def __init__(self, ir, language, symbols, parameters): MultiFunction.__init__(self) # Store ir and parameters self.integral_type = ir["integral_type"] self.entitytype = ir["entitytype"] self.language = language self.symbols = symbols self.parameters = parameters # === Generate code to define variables for ufl types === def expr(self, t, mt, tabledata, num_points, access): error("Unhandled type {0}".format(type(t))) #def quadrature_weight(self, e, mt, tabledata, num_points, access): # "Quadrature weights are precomputed and need no code." # return [] def constant_value(self, e, mt, tabledata, num_points, access): "Constants simply use literals in the target language." return [] def argument(self, t, mt, tabledata, num_points, access): "Arguments are accessed through element tables." return [] def coefficient(self, t, mt, tabledata, num_points, access): "Return definition code for coefficients." 
L = self.language ttype = tabledata.ttype begin, end = tabledata.dofrange #fe_classname = ir["classnames"]["finite_element"][t.ufl_element()] if ttype == "zeros": debug("Not expecting zero coefficients to get this far.") return [] # For a constant coefficient we reference the dofs directly, so no definition needed if ttype == "ones" and (end - begin) == 1: return [] # For quadrature elements we reference the dofs directly, so no definition needed if ttype == "quadrature": return [] assert begin < end # Get access to element table FE = self.symbols.element_table(tabledata, self.entitytype, mt.restriction) unroll = len(tabledata.dofmap) != end - begin #unroll = True if unroll: # TODO: Could also use a generated constant dofmap here like in block code # Unrolled loop to accumulate linear combination of dofs and tables values = [self.symbols.coefficient_dof_access(mt.terminal, idof) * FE[i] for i, idof in enumerate(tabledata.dofmap)] value = L.Sum(values) code = [ L.VariableDecl("const double", access, value) ] else: # Loop to accumulate linear combination of dofs and tables ic = self.symbols.coefficient_dof_sum_index() dof_access = self.symbols.coefficient_dof_access(mt.terminal, ic + begin) code = [ L.VariableDecl("double", access, 0.0), L.ForRange(ic, 0, end - begin, body=[L.AssignAdd(access, dof_access * FE[ic])]) ] return code def _define_coordinate_dofs_lincomb(self, e, mt, tabledata, num_points, access): "Define x or J as a linear combination of coordinate dofs with given table data." L = self.language # Get properties of domain domain = mt.terminal.ufl_domain() #tdim = domain.topological_dimension() gdim = domain.geometric_dimension() coordinate_element = domain.ufl_coordinate_element() #degree = coordinate_element.degree() num_scalar_dofs = num_coordinate_component_dofs(coordinate_element) # Reference coordinates are known, no coordinate field, so we compute # this component as linear combination of coordinate_dofs "dofs" and table # Find table name and dof range it corresponds to ttype = tabledata.ttype begin, end = tabledata.dofrange assert end - begin <= num_scalar_dofs assert ttype != "zeros" assert ttype != "quadrature" #xfe_classname = ir["classnames"]["finite_element"][coordinate_element] #sfe_classname = ir["classnames"]["finite_element"][coordinate_element.sub_elements()[0]] # Get access to element table FE = self.symbols.element_table(tabledata, self.entitytype, mt.restriction) inline = True if ttype == "zeros": # Not sure if this will ever happen debug("Not expecting zeros for %s." % (e._ufl_class_.__name__,)) code = [ L.VariableDecl("const double", access, L.LiteralFloat(0.0)) ] elif ttype == "ones": # Not sure if this will ever happen debug("Not expecting ones for %s." 
% (e._ufl_class_.__name__,)) # Inlined version (we know this is bounded by a small number) dof_access = self.symbols.domain_dofs_access(gdim, num_scalar_dofs, mt.restriction) values = [dof_access[idof] for idof in tabledata.dofmap] value = L.Sum(values) code = [ L.VariableDecl("const double", access, value) ] elif inline: # Inlined version (we know this is bounded by a small number) dof_access = self.symbols.domain_dofs_access(gdim, num_scalar_dofs, mt.restriction) # Inlined loop to accumulate linear combination of dofs and tables value = L.Sum([dof_access[idof] * FE[i] for i, idof in enumerate(tabledata.dofmap)]) code = [ L.VariableDecl("const double", access, value) ] else: # TODO: Make an option to test this version for performance # Assuming contiguous dofmap here assert len(tabledata.dofmap) == end - begin # Generated loop version: ic = self.symbols.coefficient_dof_sum_index() dof_access = self.symbols.domain_dof_access(ic + begin, mt.flat_component, gdim, num_scalar_dofs, mt.restriction) # Loop to accumulate linear combination of dofs and tables code = [ L.VariableDecl("double", access, 0.0), L.ForRange(ic, 0, end - begin, body=[L.AssignAdd(access, dof_access * FE[ic])]) ] return code def spatial_coordinate(self, e, mt, tabledata, num_points, access): """Return definition code for the physical spatial coordinates. If physical coordinates are given: No definition needed. If reference coordinates are given: x = sum_k xdof_k xphi_k(X) If reference facet coordinates are given: x = sum_k xdof_k xphi_k(Xf) """ if self.integral_type in custom_integral_types: # FIXME: Jacobian may need adjustment for custom_integral_types if mt.local_derivatives: error("FIXME: Jacobian in custom integrals is not implemented.") return [] else: return self._define_coordinate_dofs_lincomb(e, mt, tabledata, num_points, access) def cell_coordinate(self, e, mt, tabledata, num_points, access): """Return definition code for the reference spatial coordinates. If reference coordinates are given:: No definition needed. If physical coordinates are given and domain is affine:: X = K*(x-x0) This is inserted symbolically. If physical coordinates are given and domain is non- affine:: Not currently supported. """ # Should be either direct access to points array or symbolically computed return [] def jacobian(self, e, mt, tabledata, num_points, access): """Return definition code for the Jacobian of x(X). J = sum_k xdof_k grad_X xphi_k(X) """ # TODO: Jacobian may need adjustment for custom_integral_types return self._define_coordinate_dofs_lincomb(e, mt, tabledata, num_points, access) def cell_orientation(self, e, mt, tabledata, num_points, access): # Would be nicer if cell_orientation was a double variable input, # but this is how dolfin/ufc/ffc currently passes this information. # 0 means up and gives +1.0, 1 means down and gives -1.0. L = self.language co = self.symbols.cell_orientation_argument(mt.restriction) expr = L.Conditional(L.EQ(co, L.LiteralInt(1)), L.LiteralFloat(-1.0), L.LiteralFloat(+1.0)) code = [ L.VariableDecl("const double", access, expr) ] return code def _expect_table(self, e, mt, tabledata, num_points, access): "These quantities refer to constant tables defined in ufc_geometry.h." # TODO: Inject const static table here instead? 
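        # access.py turns these terminals into direct lookups such as
        # triangle_reference_facet_normals[facet][component]; the tables
        # themselves are the constant arrays shipped in ufc_geometry.h,
        # so no definition code is emitted here.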
return [] reference_cell_volume = _expect_table reference_facet_volume = _expect_table reference_normal = _expect_table cell_facet_jacobian = _expect_table reference_cell_edge_vectors = _expect_table reference_facet_edge_vectors = _expect_table facet_orientation = _expect_table def _expect_physical_coords(self, e, mt, tabledata, num_points, access): "These quantities refer to coordinate_dofs" # TODO: Generate more efficient inline code for Max/MinCell/FacetEdgeLength # and CellDiameter here rather than lowering these quantities? return [] cell_vertices = _expect_physical_coords cell_edge_vectors = _expect_physical_coords facet_edge_vectors = _expect_physical_coords def _expect_symbolic_lowering(self, e, mt, tabledata, num_points, access): "These quantities are expected to be replaced in symbolic preprocessing." error("Expecting {0} to be replaced in symbolic preprocessing.".format(type(e))) facet_normal = _expect_symbolic_lowering cell_normal = _expect_symbolic_lowering jacobian_inverse = _expect_symbolic_lowering jacobian_determinant = _expect_symbolic_lowering facet_jacobian = _expect_symbolic_lowering facet_jacobian_inverse = _expect_symbolic_lowering facet_jacobian_determinant = _expect_symbolic_lowering ffc-2019.1.0.post0/ffc/uflacs/backends/ffc/symbols.py000066400000000000000000000155211346026536000221440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """FFC/UFC specific symbol naming.""" from ffc.log import error # TODO: Get restriction postfix from somewhere central def ufc_restriction_postfix(restriction): if restriction == "+": res = "_0" elif restriction == "-": res = "_1" else: res = "" return res def format_mt_name(basename, mt): "Format variable name for modified terminal." access = str(basename) # Add averaged state to name if mt.averaged: avg = "_a{0}".format(mt.averaged) access += avg # Format restriction res = ufc_restriction_postfix(mt.restriction).replace("_", "_r") access += res # Format local derivatives assert not mt.global_derivatives if mt.local_derivatives: der = "_d{0}".format(''.join(map(str, mt.local_derivatives))) access += der # Add flattened component to name if mt.component: comp = "_c{0}".format(mt.flat_component) access += comp return access class FFCBackendSymbols(object): """FFC specific symbol definitions. Provides non-ufl symbols.""" def __init__(self, language, coefficient_numbering): self.L = language self.S = self.L.Symbol self.coefficient_numbering = coefficient_numbering # Used for padding variable names based on restriction self.restriction_postfix = { r: ufc_restriction_postfix(r) for r in ("+", "-", None) } # TODO: Make this configurable for easy experimentation with dolfin! # Coordinate dofs for each component are interleaved? Must match dolfin. 
# True = XYZXYZXYZXYZ, False = XXXXYYYYZZZZ self.interleaved_components = True def element_tensor(self): "Symbol for the element tensor itself." return self.S("A") def entity(self, entitytype, restriction): "Entity index for lookup in element tables." if entitytype == "cell": # Always 0 for cells (even with restriction) return self.L.LiteralInt(0) elif entitytype == "facet": return self.S("facet" + ufc_restriction_postfix(restriction)) elif entitytype == "vertex": return self.S("vertex") else: error("Unknown entitytype {}".format(entitytype)) def cell_orientation_argument(self, restriction): "Cell orientation argument in ufc. Not same as cell orientation in generated code." return self.S("cell_orientation" + ufc_restriction_postfix(restriction)) def cell_orientation_internal(self, restriction): "Internal value for cell orientation in generated code." return self.S("co" + ufc_restriction_postfix(restriction)) def argument_loop_index(self, iarg): "Loop index for argument #iarg." indices = ["i", "j", "k", "l"] return self.S(indices[iarg]) def coefficient_dof_sum_index(self): """Index for loops over coefficient dofs, assumed to never be used in two nested loops.""" return self.S("ic") def quadrature_loop_index(self): """Reusing a single index name for all quadrature loops, assumed not to be nested.""" return self.S("iq") def num_custom_quadrature_points(self): "Number of quadrature points, argument to custom integrals." return self.S("num_quadrature_points") def custom_quadrature_weights(self): "Quadrature weights including cell measure scaling, argument to custom integrals." return self.S("quadrature_weights") def custom_quadrature_points(self): "Physical quadrature points, argument to custom integrals." return self.S("quadrature_points") def custom_weights_table(self): "Table for chunk of custom quadrature weights (including cell measure scaling)." return self.S("weights_chunk") def custom_points_table(self): "Table for chunk of custom quadrature points (physical coordinates)." return self.S("points_chunk") def weights_table(self, num_points): "Table of quadrature weights." return self.S("weights%d" % (num_points,)) def points_table(self, num_points): "Table of quadrature points (points on the reference integration entity)." return self.S("points%d" % (num_points,)) def x_component(self, mt): "Physical coordinate component." return self.S(format_mt_name("x", mt)) def X_component(self, mt): "Reference coordinate component." return self.S(format_mt_name("X", mt)) def J_component(self, mt): "Jacobian component." # FIXME: Add domain number! return self.S(format_mt_name("J", mt)) def domain_dof_access(self, dof, component, gdim, num_scalar_dofs, restriction): # FIXME: Add domain number or offset! vc = self.S("coordinate_dofs" + ufc_restriction_postfix(restriction)) if self.interleaved_components: return vc[gdim*dof + component] else: return vc[num_scalar_dofs*component + dof] def domain_dofs_access(self, gdim, num_scalar_dofs, restriction): # FIXME: Add domain number or offset! return [self.domain_dof_access(dof, component, gdim, num_scalar_dofs, restriction) for component in range(gdim) for dof in range(num_scalar_dofs)] def coefficient_dof_access(self, coefficient, dof_number): # TODO: Add domain number? c = self.coefficient_numbering[coefficient] w = self.S("w") return w[c, dof_number] def coefficient_value(self, mt): "Symbol for variable holding value or derivative component of coefficient." 
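        # Illustrative generated name: the d/dX_0 derivative of flat component 1
        # of coefficient number 2, restricted to the '-' side, is formatted by
        # format_mt_name above as "w2_r1_d0_c1".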
c = self.coefficient_numbering[mt.terminal] return self.S(format_mt_name("w%d" % (c,), mt)) def element_table(self, tabledata, entitytype, restriction): if tabledata.is_uniform: entity = 0 else: entity = self.entity(entitytype, restriction) if tabledata.is_piecewise: iq = 0 else: iq = self.quadrature_loop_index() # Return direct access to element table return self.S(tabledata.name)[entity][iq] ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/000077500000000000000000000000001346026536000201155ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/__init__.py000066400000000000000000000014151346026536000222270ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Backend for generating UFC code.""" ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/apply_mappings.py000066400000000000000000000142531346026536000235170ustar00rootroot00000000000000 # CURRENTLY UNUSED CODE, CONVERT TO CREATE NEW ufc::finite_element::apply_mappings functions def _generate_apply_mapping_to_computed_values(L, dof_data): mapping = dof_data["mapping"] num_components = dof_data["num_components"] reference_offset = dof_data["reference_offset"] physical_offset = dof_data["physical_offset"] #physical_values[num_points][num_dofs][physical_value_size] #reference_values[num_points][num_dofs][reference_value_size] code = [] # FIXME: Define values numbering if mapping == "affine": # Just copy values if num_components == 1: code += [ L.ForRange(ip, 0, num_points, body= L.Assign(physical_values[ip][physical_offset], reference_values[ip][reference_offset])), ] else: code += [ L.ForRange(ip, 0, num_points, body= L.ForRange(k, 0, num_components, body= L.Assign(physical_values[ip][physical_offset + k], reference_values[ip][reference_offset + k]))), ] return code else: FIXME return code def __ffc_implementation_of__generate_apply_mapping_to_computed_values(L): # Apply transformation if applicable. mapping = dof_data["mapping"] num_components = dof_data["num_components"] reference_offset = dof_data["reference_offset"] physical_offset = dof_data["physical_offset"] if mapping == "affine": pass elif mapping == "contravariant piola": code += ["", f_comment("Using contravariant Piola transform to map values back to the physical element")] # Get temporary values before mapping. code += [f_const_float(f_tmp_ref(i), f_component(f_values, i + offset)) for i in range(num_components)] # Create names for inner product. basis_col = [f_tmp_ref(j) for j in range(tdim)] for i in range(gdim): # Create Jacobian. jacobian_row = [f_trans("J", i, j, gdim, tdim, None) for j in range(tdim)] # Create inner product and multiply by inverse of Jacobian. 
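        # Contravariant Piola: v_i = (1/detJ) * sum_j J[i, j] * vhat_j, i.e. each
        # physical component is a Jacobian-weighted sum of the reference
        # components, scaled by the inverse Jacobian determinant.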
inner = f_group(f_inner(jacobian_row, basis_col)) # sum_j J[i,j], values[offset + j]) value = f_mul([f_inv(f_detJ(None)), inner]) name = f_component(f_values, i + offset) # FIXME: This is writing values[offset+:] = M[:,:] * values[offset+:], # i.e. offset must be physical (unless there's a bug). # We want to use reference offset for evaluate_reference_basis, # and to make this mapping read from one array using reference offset # and write to another array using physical offset. code += [f_assign(name, value)] elif mapping == "covariant piola": code += ["", f_comment("Using covariant Piola transform to map values back to the physical element")] # Get temporary values before mapping. code += [f_const_float(f_tmp_ref(i), f_component(f_values, i + offset)) for i in range(num_components)] # Create names for inner product. tdim = data["topological_dimension"] gdim = data["geometric_dimension"] basis_col = [f_tmp_ref(j) for j in range(tdim)] for i in range(gdim): # Create inverse of Jacobian. inv_jacobian_column = [f_trans("JINV", j, i, tdim, gdim, None) for j in range(tdim)] # Create inner product of basis values and inverse of Jacobian. value = f_group(f_inner(inv_jacobian_column, basis_col)) name = f_component(f_values, i + offset) code += [f_assign(name, value)] elif mapping == "double covariant piola": code += ["", f_comment("Using double covariant Piola transform to map values back to the physical element")] # Get temporary values before mapping. code += [f_const_float(f_tmp_ref(i), f_component(f_values, i + offset)) for i in range(num_components)] # Create names for inner product. tdim = data["topological_dimension"] gdim = data["geometric_dimension"] basis_col = [f_tmp_ref(j) for j in range(num_components)] for p in range(num_components): # unflatten the indices i = p // tdim l = p % tdim # noqa: E741 # g_il = K_ji G_jk K_kl value = f_group(f_inner( [f_inner([f_trans("JINV", j, i, tdim, gdim, None) for j in range(tdim)], [basis_col[j * tdim + k] for j in range(tdim)]) for k in range(tdim)], [f_trans("JINV", k, l, tdim, gdim, None) for k in range(tdim)])) name = f_component(f_values, p + offset) code += [f_assign(name, value)] elif mapping == "double contravariant piola": code += ["", f_comment("Pullback of a matrix-valued funciton as contravariant 2-tensor mapping values back to the physical element")] # Get temporary values before mapping. code += [f_const_float(f_tmp_ref(i), f_component(f_values, i + offset)) for i in range(num_components)] # Create names for inner product. tdim = data["topological_dimension"] gdim = data["geometric_dimension"] basis_col = [f_tmp_ref(j) for j in range(num_components)] for p in range(num_components): # unflatten the indices i = p // tdim l = p % tdim # noqa: E741 # g_il = (detJ)^(-2) J_ij G_jk J_lk value = f_group(f_inner( [f_inner([f_trans("J", i, j, tdim, gdim, None) for j in range(tdim)], [basis_col[j * tdim + k] for j in range(tdim)]) for k in range(tdim)], [f_trans("J", l, k, tdim, gdim, None) for k in range(tdim)])) value = f_mul([f_inv(f_detJ(None)), f_inv(f_detJ(None)), value]) name = f_component(f_values, p + offset) code += [f_assign(name, value)] else: error("Unknown mapping: %s" % mapping) ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/coordinate_mapping.py000066400000000000000000000704561346026536000243450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. 
# # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . from ffc.uflacs.backends.ufc.generator import ufc_generator from ffc.uflacs.backends.ufc.utils import generate_return_new # TODO: Test everything here! Cover all combinations of gdim,tdim=1,2,3! index_type = "std::size_t" ### Code generation utilities: def generate_compute_ATA(L, ATA, A, m, n, index_prefix=""): "Generate code to declare and compute ATA[i,j] = sum_k A[k,i]*A[k,j] with given A shaped (m,n)." # Loop indices i = L.Symbol(index_prefix + "i") j = L.Symbol(index_prefix + "j") k = L.Symbol(index_prefix + "k") # Build A^T*A matrix code = [ L.ArrayDecl("double", ATA, (n, n), values=0), L.ForRanges( (i, 0, n), (j, 0, n), (k, 0, m), index_type=index_type, body=L.AssignAdd(ATA[i, j], A[k, i] * A[k, j]) ), ] return L.StatementList(code) def cross_expr(a, b): def cr(i, j): return a[i]*b[j] - a[j]*b[i] return [cr(1, 2), cr(2, 0), cr(0, 1)] def generate_cross_decl(c, a, b): return L.ArrayDecl("double", c, values=cross_expr(a, b)) ### Inline math expressions: def det_22(B, i, j, k, l): return B[i, k]*B[j, l] - B[i, l]*B[j, k] def codet_nn(A, rows, cols): n = len(rows) if n == 2: return det_22(A, rows[0], rows[1], cols[0], cols[1]) else: r = rows[0] subrows = rows[1:] parts = [] for i, c in enumerate(cols): subcols = cols[i+1:] + cols[:i] parts.append(A[r, c] * codet_nn(A, subrows, subcols)) return sum(parts[1:], parts[0]) def det_nn(A, n): if n == 1: return A[0, 0] else: ns = list(range(n)) return codet_nn(A, ns, ns) def pdet_m1(L, A, m): # Special inlined case 1xm for simpler expression A2 = A[0,0]*A[0,0] for i in range(1, m): A2 = A2 + A[i,0]*A[i,0] return L.Sqrt(A2) def adj_expr_2x2(A): return [[ A[1, 1], -A[0, 1]], [-A[1, 0], A[0, 0]]] def adj_expr_3x3(A): return [[A[2, 2]*A[1, 1] - A[1, 2]*A[2, 1], A[0, 2]*A[2, 1] - A[0, 1]*A[2, 2], A[0, 1]*A[1, 2] - A[0, 2]*A[1, 1]], [A[1, 2]*A[2, 0] - A[2, 2]*A[1, 0], A[2, 2]*A[0, 0] - A[0, 2]*A[2, 0], A[0, 2]*A[1, 0] - A[1, 2]*A[0, 0]], [A[1, 0]*A[2, 1] - A[2, 0]*A[1, 1], A[0, 1]*A[2, 0] - A[0, 0]*A[2, 1], A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]]] def generate_assign_inverse(L, K, J, detJ, gdim, tdim): if gdim == tdim: if gdim == 1: return L.Assign(K[0, 0], L.LiteralFloat(1.0) / J[0, 0]) elif gdim == 2: adj_values = adj_expr_2x2(J) elif gdim == 3: adj_values = adj_expr_3x3(J) else: error("Not implemented.") return L.StatementList([L.Assign(K[j,i], adj_values[j][i] / detJ) for j in range(tdim) for i in range(gdim)]) else: if tdim == 1: # Simpler special case for embedded 1d prods = [J[i,0]*J[i,0] for i in range(gdim)] s = sum(prods[1:], prods[0]) return L.StatementList([L.Assign(K[0,i], J[i,0] / s) for i in range(gdim)]) else: # Generic formulation of Penrose-Moore pseudo-inverse of J: (J.T*J)^-1 * J.T i = L.Symbol("i") j = L.Symbol("j") k = L.Symbol("k") JTJ = L.Symbol("JTJ") JTJf = L.FlattenedArray(JTJ, dims=(tdim, tdim)) detJTJ = detJ*detJ JTJinv = L.Symbol("JTJinv") JTJinvf = L.FlattenedArray(JTJinv, dims=(tdim, tdim)) code = [ L.Comment("Compute J^T J"), 
L.ArrayDecl("double", JTJ, (tdim*tdim,), values=0), L.ForRanges( (k, 0, tdim), (j, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.AssignAdd(JTJf[k, j], J[i, k] * J[i, j]) ), L.Comment("Compute inverse(J^T J)"), L.ArrayDecl("double", JTJinv, (tdim*tdim,)), generate_assign_inverse(L, JTJinvf, JTJf, detJTJ, tdim, tdim), L.Comment("Compute K = inverse(J^T J) * J"), L.ForRanges( (k, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.Assign(K[k, i], L.LiteralFloat(0.0)) ), L.ForRanges( (k, 0, tdim), (i, 0, gdim), (j, 0, tdim), index_type=index_type, body=L.AssignAdd(K[k, i], JTJinvf[k, j] * J[i, j]) ), ] return L.StatementList(code) class ufc_coordinate_mapping(ufc_generator): "Each function maps to a keyword in the template. See documentation of ufc_generator." def __init__(self): ufc_generator.__init__(self, "coordinate_mapping") def cell_shape(self, L, ir): name = ir["cell_shape"] return L.Return(L.Symbol("ufc::shape::" + name)) def topological_dimension(self, L, topological_dimension): return L.Return(topological_dimension) def geometric_dimension(self, L, geometric_dimension): return L.Return(geometric_dimension) def create_coordinate_finite_element(self, L, ir): classname = ir["create_coordinate_finite_element"] return generate_return_new(L, classname, factory=ir["jit"]) def create_coordinate_dofmap(self, L, ir): classname = ir["create_coordinate_dofmap"] return generate_return_new(L, classname, factory=ir["jit"]) def compute_physical_coordinates(self, L, ir): num_dofs = ir["num_scalar_coordinate_element_dofs"] scalar_coordinate_element_classname = ir["scalar_coordinate_finite_element_classname"] # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_points = L.Symbol("num_points") # Loop indices ip = L.Symbol("ip") i = L.Symbol("i") j = L.Symbol("j") d = L.Symbol("d") # Input cell data coordinate_dofs = L.FlattenedArray(L.Symbol("coordinate_dofs"), dims=(num_dofs, gdim)) # Output geometry x = L.FlattenedArray(L.Symbol("x"), dims=(num_points, gdim)) # Input geometry X = L.FlattenedArray(L.Symbol("X"), dims=(num_points, tdim)) # Computing table one point at a time instead of # using num_points to avoid dynamic allocation one_point = 1 # Symbols for local basis values table phi_sym = L.Symbol("phi") # NB! 
Must match array layout of evaluate_reference_basis phi = L.FlattenedArray(phi_sym, dims=(num_dofs,)) # For each point, compute basis values and accumulate into the right x code = [ # Define scalar finite element instance # (stateless, so placing this on the stack is basically free) L.VariableDecl(scalar_coordinate_element_classname, "xelement"), L.ArrayDecl("double", phi_sym, (one_point*num_dofs,)), L.ForRange(ip, 0, num_points, index_type=index_type, body=[ L.Comment("Compute basis values of coordinate element"), L.Call("xelement.evaluate_reference_basis", (phi_sym, 1, L.AddressOf(X[ip, 0]))), L.Comment("Compute x"), L.ForRanges( (i, 0, gdim), (d, 0, num_dofs), index_type=index_type, body=L.AssignAdd(x[ip, i], coordinate_dofs[d, i]*phi[d]) ) ]) ] return code def compute_reference_geometry(self, L, ir): degree = ir["coordinate_element_degree"] if degree == 1: # Special case optimized for affine mesh (possibly room for further optimization) return self._compute_reference_coordinates_affine(L, ir, output_all=True) else: # General case with newton loop to solve F(X) = x(X) - x0 = 0 return self._compute_reference_coordinates_newton(L, ir, output_all=True) # TODO: Maybe we don't need this version, see what we need in dolfin first def compute_reference_coordinates(self, L, ir): degree = ir["coordinate_element_degree"] if degree == 1: # Special case optimized for affine mesh (possibly room for further optimization) return self._compute_reference_coordinates_affine(L, ir) else: # General case with newton loop to solve F(X) = x(X) - x0 = 0 return self._compute_reference_coordinates_newton(L, ir) def _compute_reference_coordinates_affine(self, L, ir, output_all=False): # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] cellname = ir["cell_shape"] num_points = L.Symbol("num_points") # Number of dofs for a scalar component num_dofs = ir["num_scalar_coordinate_element_dofs"] # Loop indices ip = L.Symbol("ip") # point i = L.Symbol("i") # gdim j = L.Symbol("j") # tdim k = L.Symbol("k") # sum iteration # Output geometry X = L.FlattenedArray(L.Symbol("X"), dims=(num_points, tdim)) # if output_all, this is also output: Jsym = L.Symbol("J") detJsym = L.Symbol("detJ") Ksym = L.Symbol("K") # Input geometry x = L.FlattenedArray(L.Symbol("x"), dims=(num_points, gdim)) # Input cell data coordinate_dofs = L.FlattenedArray(L.Symbol("coordinate_dofs"), dims=(num_dofs, gdim)) cell_orientation = L.Symbol("cell_orientation") if output_all: decls = [] else: decls = [ L.ArrayDecl("double", Jsym, sizes=(gdim*tdim,)), L.ArrayDecl("double", detJsym, sizes=(1,)), L.ArrayDecl("double", Ksym, sizes=(tdim*gdim,)), ] # Tables of coordinate basis function values and derivatives at # X=0 and X=midpoint available through ir. This is useful in # several geometry functions. tables = ir["tables"] # Check the table shapes against our expectations x_table = tables["x0"] J_table = tables["J0"] assert x_table.shape == (num_dofs,) assert J_table.shape == (tdim, num_dofs) # TODO: Use epsilon parameter here? 
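
        # Illustrative sketch (plain NumPy, not the generated C++): the code
        # emitted further below implements the affine pullback
        #     X = K0 * (x - x0),   x0 = x(X=0),   J0 = dx/dX(X=0),
        # where K0 is inv(J0) when gdim == tdim and the pseudo-inverse
        # (J0^T J0)^-1 J0^T otherwise.  Assuming coordinate_dofs has shape
        # (num_dofs, gdim) and phi_X0, dphi_X0 are the tables declared below:
        def _affine_pullback_sketch(x_points, coordinate_dofs, phi_X0, dphi_X0):
            import numpy
            # x0[i] = sum_k coordinate_dofs[k, i] * phi_X0[k]
            x0 = coordinate_dofs.T @ phi_X0
            # J0[i, j] = sum_k coordinate_dofs[k, i] * dphi_X0[j, k]
            J0 = coordinate_dofs.T @ dphi_X0.T
            # Pseudo-inverse; equals inv(J0) for a square Jacobian
            K0 = numpy.linalg.pinv(J0)
            # X[ip, j] = sum_i K0[j, i] * (x[ip, i] - x0[i])
            return (numpy.asarray(x_points) - x0) @ K0.T
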
# TODO: Move to a more 'neutral' utility file from ffc.uflacs.elementtables import clamp_table_small_numbers x_table = clamp_table_small_numbers(x_table) J_table = clamp_table_small_numbers(J_table) # Table symbols phi_X0 = L.Symbol("phi_X0") dphi_X0 = L.Symbol("dphi_X0") # Table declarations table_decls = [ L.ArrayDecl("const double", phi_X0, sizes=x_table.shape, values=x_table), L.ArrayDecl("const double", dphi_X0, sizes=J_table.shape, values=J_table), ] # Compute x0 = x(X=0) (optimized by precomputing basis at X=0) x0 = L.Symbol("x0") compute_x0 = [ L.ArrayDecl("double", x0, sizes=(gdim,), values=0), L.ForRanges( (i, 0, gdim), (k, 0, num_dofs), index_type=index_type, body=L.AssignAdd(x0[i], coordinate_dofs[k, i] * phi_X0[k]) ), ] # For more convenient indexing detJ = detJsym[0] J = L.FlattenedArray(Jsym, dims=(gdim, tdim)) K = L.FlattenedArray(Ksym, dims=(tdim, gdim)) # Compute J = J(X=0) (optimized by precomputing basis at X=0) compute_J0 = [ L.ForRanges( (i, 0, gdim), (j, 0, tdim), index_type=index_type, body=[ L.Assign(J[i, j], 0.0), L.ForRange(k, 0, num_dofs, index_type=index_type, body= L.AssignAdd(J[i, j], coordinate_dofs[k, i] * dphi_X0[j, k]) ) ] ), ] # Compute K = inv(J) (and intermediate value det(J)) compute_K0 = [ L.Call("compute_jacobian_determinants", (detJsym, 1, Jsym, cell_orientation)), L.Call("compute_jacobian_inverses", (Ksym, 1, Jsym, detJsym)), ] # Compute X = K0*(x-x0) for each physical point x compute_X = [ L.ForRanges( (ip, 0, num_points), (j, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.AssignAdd(X[ip, j], K[j, i]*(x[ip, i] - x0[i])) ) ] # Stitch it together code = table_decls + decls + compute_x0 + compute_J0 + compute_K0 + compute_X return code def _compute_reference_coordinates_newton(self, L, ir, output_all=False): """Solves x(X) = x0 for X. Find X such that, given x0, F(X) = x(X) - x0 = 0 Newton iteration is: X^0 = midpoint of reference cell until convergence: dF/dX(X^k) dX^k = -F(X^k) X^{k+1} = X^k + dX^k The Jacobian is just the usual Jacobian of the geometry mapping: dF/dX = dx/dX = J and its inverse is the usual K = J^-1. Because J is small, we can invert it directly and rewrite Jk dX = -(xk - x0) = x0 - xk into Xk = midpoint of reference cell for k in range(maxiter): xk = x(Xk) Jk = J(Xk) Kk = inverse(Jk) dX = Kk (x0 - xk) Xk += dX if dX sufficiently small: break """ # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] cellname = ir["cell_shape"] num_points = L.Symbol("num_points") degree = ir["coordinate_element_degree"] # Computing table one point at a time instead of vectorized # over num_points will allow skipping dynamic allocation one_point = 1 # Loop indices ip = L.Symbol("ip") # point i = L.Symbol("i") # gdim j = L.Symbol("j") # tdim k = L.Symbol("k") # iteration # Input cell data coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") # Output geometry X = L.FlattenedArray(L.Symbol("X"), dims=(num_points, tdim)) # Input geometry x = L.FlattenedArray(L.Symbol("x"), dims=(num_points, gdim)) # Find X=Xk such that xk(Xk) = x or F(Xk) = xk(Xk) - x = 0. 
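
        # Illustrative sketch (plain NumPy, not the generated C++): the Newton
        # iteration described above, assuming a hypothetical callable
        # xfun(X) -> (x, J) that evaluates the coordinate mapping and its
        # Jacobian at a reference point X.  The generated code uses
        # max_iter = coordinate element degree and epsilon = 1e-6 as below.
        def _newton_pullback_sketch(xfun, xgoal, X_init, max_iter, epsilon=1e-6):
            import numpy
            Xk = numpy.array(X_init, dtype=float)
            for _ in range(max_iter):
                xk, J = xfun(Xk)
                K = numpy.linalg.pinv(J)    # equals inv(J) when square
                dX = K @ (numpy.asarray(xgoal) - xk)
                if dX @ dX < epsilon:       # |dX|^2 convergence test
                    break
                Xk += dX
            return Xk
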
# Newtons method is then: # X0 = cell midpoint # dF/dX dX = -F # Xk = Xk + dX # Note that # dF/dX = dx/dX = J # giving # inverse(dF/dX) = K # such that dX = -K*(xk(Xk) - x) # Newtons method is then: # for each ip: # xgoal = x[ip] # X0 = cell midpoint # until convergence: # K = K(Xk) # dX = K*(xk(Xk) - x) # Xk -= dX # X[ip] = Xk # Symbols for arrays used below Xk = L.Symbol("Xk") xgoal = L.Symbol("xgoal") dX = L.Symbol("dX") xk = L.Symbol("xk") J = L.Symbol("J") detJ = L.Symbol("detJ") K = L.Symbol("K") xm = L.Symbol("xm") Km = L.Symbol("Km") # Symbol for ufc_geometry cell midpoint definition Xm = L.Symbol("%s_midpoint" % cellname) # Variables for stopping criteria # TODO: Check if these are good convergence criteria, # e.g. is epsilon=1e-6 and iterations=degree sufficient? max_iter = L.LiteralInt(degree) epsilon = L.LiteralFloat(1e-6) # TODO: Could also easily make criteria input if desired #max_iter = L.Symbol("iterations") #epsilon = L.Symbol("epsilon") dX2 = L.Symbol("dX2") # Wrap K as flattened array for convenient indexing Kf[j,i] Kf = L.FlattenedArray(K, dims=(tdim, gdim)) decls = [ L.Comment("Declare intermediate arrays to hold results of compute_geometry call"), L.ArrayDecl("double", xk, (gdim,), 0.0), ] if not output_all: decls += [ L.ArrayDecl("double", J, (gdim*tdim,), 0.0), L.ArrayDecl("double", detJ, (1,)), L.ArrayDecl("double", K, (tdim*gdim,), 0.0), ] # By computing x and K at the cell midpoint once, # we can optimize the first iteration for each target point # by initializing Xk = Xm + Km * (xgoal - xm) # which is the affine approximation starting at the midpoint. midpoint_geometry = [ L.Comment("Compute K = J^-1 and x at midpoint of cell"), L.ArrayDecl("double", xm, (gdim,), 0.0), L.ArrayDecl("double", Km, (tdim*gdim,)), L.Call("compute_midpoint_geometry", (xm, J, coordinate_dofs)), L.Call("compute_jacobian_determinants", (detJ, one_point, J, cell_orientation)), L.Call("compute_jacobian_inverses", (Km, one_point, J, detJ)), ] # declare xgoal = x[ip] # declare Xk = initial value newton_init = [ L.Comment("Declare xgoal to hold the current x coordinate value"), L.ArrayDecl("const double", xgoal, (gdim,), values=[x[ip][iv] for iv in range(gdim)]), L.Comment("Declare Xk iterate with initial value equal to reference cell midpoint"), L.ArrayDecl("double", Xk, (tdim,), values=[Xm[c] for c in range(tdim)]), L.ForRanges( (j, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.AssignAdd(Xk[j], Kf[j,i] * (xgoal[i] - xm[i])) ) ] part1 = [ L.Comment("Compute K = J^-1 for one point, (J and detJ are only used as"), L.Comment("intermediate storage inside compute_geometry, not used out here"), L.Call("compute_geometry", (xk, J, detJ, K, one_point, Xk, coordinate_dofs, cell_orientation)), ] # Newton body with stopping criteria |dX|^2 < epsilon newton_body = part1 + [ L.Comment("Declare dX increment to be computed, initialized to zero"), L.ArrayDecl("double", dX, (tdim,), values=0.0), L.Comment("Compute dX[j] = sum_i K_ji * (x_i - x(Xk)_i)"), L.ForRanges( (j, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.AssignAdd(dX[j], Kf[j,i]*(xgoal[i] - xk[i])) ), L.Comment("Compute |dX|^2"), L.VariableDecl("double", dX2, value=0.0), L.ForRange(j, 0, tdim, index_type=index_type, body=L.AssignAdd(dX2, dX[j]*dX[j])), L.Comment("Break if converged (before X += dX such that X,J,detJ,K are consistent)"), L.If(L.LT(dX2, epsilon), L.Break()), L.Comment("Update Xk += dX"), L.ForRange(j, 0, tdim, index_type=index_type, body=L.AssignAdd(Xk[j], dX[j])), ] # Use this instead to have a fixed iteration 
number alternative_newton_body = part1 + [ L.Comment("Compute Xk[j] += sum_i K_ji * (x_i - x(Xk)_i)"), L.ForRanges( (j, 0, tdim), (i, 0, gdim), index_type=index_type, body=L.AssignAdd(Xk[j], Kf[j,i]*(xgoal[i] - xk[i])) ), ] # Carry out newton loop for each point point_loop = [ L.ForRange(ip, 0, num_points, index_type=index_type, body=[ newton_init, # Loop until convergence L.ForRange(k, 0, max_iter, index_type=index_type, body=newton_body), # Copy X[ip] = Xk L.ForRange(j, 0, tdim, index_type=index_type, body=L.Assign(X[ip][j], Xk[j])), ]) ] code = decls + midpoint_geometry + point_loop return code def compute_jacobians(self, L, ir): num_dofs = ir["num_scalar_coordinate_element_dofs"] scalar_coordinate_element_classname = ir["scalar_coordinate_finite_element_classname"] # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_points = L.Symbol("num_points") # Loop indices ip = L.Symbol("ip") i = L.Symbol("i") j = L.Symbol("j") d = L.Symbol("d") # Input cell data coordinate_dofs = L.FlattenedArray(L.Symbol("coordinate_dofs"), dims=(num_dofs, gdim)) # Output geometry J = L.FlattenedArray(L.Symbol("J"), dims=(num_points, gdim, tdim)) # Input geometry X = L.FlattenedArray(L.Symbol("X"), dims=(num_points, tdim)) # Computing table one point at a time instead of using # num_points will allow skipping dynamic allocation one_point = 1 # Symbols for local basis derivatives table dphi_sym = L.Symbol("dphi") dphi = L.FlattenedArray(dphi_sym, dims=(num_dofs, tdim)) # For each point, compute basis derivatives and accumulate into the right J code = [ L.VariableDecl(scalar_coordinate_element_classname, "xelement"), L.ArrayDecl("double", dphi_sym, (one_point*num_dofs*tdim,)), L.ForRange(ip, 0, num_points, index_type=index_type, body=[ L.Comment("Compute basis derivatives of coordinate element"), L.Call("xelement.evaluate_reference_basis_derivatives", (dphi_sym, 1, 1, L.AddressOf(X[ip, 0]))), L.Comment("Compute J"), L.ForRanges( (i, 0, gdim), (j, 0, tdim), (d, 0, num_dofs), index_type=index_type, body=L.AssignAdd(J[ip, i, j], coordinate_dofs[d, i]*dphi[d, j]) ) ]) ] return code def compute_jacobian_determinants(self, L, ir): # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_points = L.Symbol("num_points") # Loop indices ip = L.Symbol("ip") # Output geometry detJ = L.Symbol("detJ")[ip] # Input geometry J = L.FlattenedArray(L.Symbol("J"), dims=(num_points, gdim, tdim)) cell_orientation = L.Symbol("cell_orientation") orientation_scaling = L.Conditional(L.EQ(cell_orientation, 1), -1.0, +1.0) # Assign det expression to detJ if gdim == tdim: body = L.Assign(detJ, det_nn(J[ip], gdim)) elif tdim == 1: body = L.Assign(detJ, orientation_scaling*pdet_m1(L, J[ip], gdim)) #elif tdim == 2 and gdim == 3: # body = L.Assign(detJ, orientation_scaling*pdet_32(L, J[ip])) # Possible optimization not implemented here else: JTJ = L.Symbol("JTJ") body = [ generate_compute_ATA(L, JTJ, J[ip], gdim, tdim), L.Assign(detJ, orientation_scaling*L.Sqrt(det_nn(JTJ, tdim))), ] # Carry out for all points code = L.ForRange(ip, 0, num_points, index_type=index_type, body=body) return code def compute_jacobian_inverses(self, L, ir): # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_points = L.Symbol("num_points") # Loop indices ip = L.Symbol("ip") # Output geometry K = L.FlattenedArray(L.Symbol("K"), dims=(num_points, tdim, gdim)) # Input geometry J = L.FlattenedArray(L.Symbol("J"), dims=(num_points, gdim, tdim)) detJ = L.Symbol("detJ") # Assign to 
K[j][i] for each component j,i body = generate_assign_inverse(L, K[ip], J[ip], detJ[ip], gdim, tdim) # Carry out for all points return L.ForRange(ip, 0, num_points, index_type=index_type, body=body) def compute_geometry(self, L, ir): # Output geometry x = L.Symbol("x") J = L.Symbol("J") detJ = L.Symbol("detJ") K = L.Symbol("K") # Dimensions num_points = L.Symbol("num_points") # Input geometry X = L.Symbol("X") # Input cell data coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") # All arguments args = (x, J, detJ, K, num_points, X, coordinate_dofs, cell_orientation) # Just chain calls to other functions here code = [ L.Call("compute_physical_coordinates", (x, num_points, X, coordinate_dofs)), L.Call("compute_jacobians", (J, num_points, X, coordinate_dofs)), L.Call("compute_jacobian_determinants", (detJ, num_points, J, cell_orientation)), L.Call("compute_jacobian_inverses", (K, num_points, J, detJ)), ] return code def compute_midpoint_geometry(self, L, ir): # Dimensions gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] num_dofs = ir["num_scalar_coordinate_element_dofs"] # Tables of coordinate basis function values and derivatives at # X=0 and X=midpoint available through ir. This is useful in # several geometry functions. tables = ir["tables"] # Check the table shapes against our expectations xm_table = tables["xm"] Jm_table = tables["Jm"] assert xm_table.shape == (num_dofs,) assert Jm_table.shape == (tdim, num_dofs) # TODO: Use epsilon parameter here? # TODO: Move to a more 'neutral' utility file from ffc.uflacs.elementtables import clamp_table_small_numbers xm_table = clamp_table_small_numbers(xm_table) Jm_table = clamp_table_small_numbers(Jm_table) # Table symbols phi_Xm = L.Symbol("phi_Xm") dphi_Xm = L.Symbol("dphi_Xm") # Table declarations table_decls = [ L.ArrayDecl("const double", phi_Xm, sizes=xm_table.shape, values=xm_table), L.ArrayDecl("const double", dphi_Xm, sizes=Jm_table.shape, values=Jm_table), ] # Symbol for ufc_geometry cell midpoint definition cellname = ir["cell_shape"] Xm = L.Symbol("%s_midpoint" % cellname) # Output geometry x = L.Symbol("x") J = L.Symbol("J") detJ = L.Symbol("detJ") K = L.Symbol("K") # Dimensions num_points = 1 # Input geometry X = Xm # Input cell data coordinate_dofs = L.Symbol("coordinate_dofs") coordinate_dofs = L.FlattenedArray(coordinate_dofs, dims=(num_dofs, gdim)) Jf = L.FlattenedArray(J, dims=(gdim, tdim)) i = L.Symbol("i") j = L.Symbol("j") d = L.Symbol("d") xm_code = [ L.Comment("Compute x"), L.ForRanges( (i, 0, gdim), (d, 0, num_dofs), index_type=index_type, body=L.AssignAdd(x[i], coordinate_dofs[d, i]*phi_Xm[d]) ), ] Jm_code = [ L.Comment("Compute J"), L.ForRanges( (i, 0, gdim), (j, 0, tdim), (d, 0, num_dofs), index_type=index_type, body=L.AssignAdd(Jf[i, j], coordinate_dofs[d, i]*dphi_Xm[j, d]) ), ] # Reuse functions for detJ and K code = table_decls + xm_code + Jm_code return code ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/dofmap.py000066400000000000000000000250621346026536000217420ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg and Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . # Note: Most of the code in this file is a direct translation from the old implementation in FFC from ffc.uflacs.backends.ufc.generator import ufc_generator from ffc.uflacs.backends.ufc.utils import generate_return_new_switch, generate_return_sizet_switch, generate_return_bool_switch class ufc_dofmap(ufc_generator): "Each function maps to a keyword in the template. See documentation of ufc_generator." def __init__(self): ufc_generator.__init__(self, "dofmap") def topological_dimension(self, L, topological_dimension): return L.Return(topological_dimension) def needs_mesh_entities(self, L, needs_mesh_entities): "needs_mesh_entities is a list of num dofs per entity." return generate_return_bool_switch(L, "d", needs_mesh_entities, False) def global_dimension(self, L, ir): # A list of num dofs per entity num_entity_dofs, num_global_dofs = ir["global_dimension"] # Input array entities = L.Symbol("num_global_entities") # Accumulate number of dofs per entity times given number of entities in mesh entity_dofs = [] for dim, num in enumerate(num_entity_dofs): if num == 1: entity_dofs.append(entities[dim]) elif num > 1: entity_dofs.append(num * entities[dim]) if entity_dofs: dimension = L.Sum(entity_dofs) else: dimension = 0 # Add global dofs if any if num_global_dofs: dimension = dimension + num_global_dofs return L.Return(dimension) def num_global_support_dofs(self, L, num_global_support_dofs): return L.Return(num_global_support_dofs) def num_element_support_dofs(self, L, num_element_support_dofs): return L.Return(num_element_support_dofs) def num_element_dofs(self, L, num_element_dofs): return L.Return(num_element_dofs) def num_facet_dofs(self, L, num_facet_dofs): return L.Return(num_facet_dofs) def num_entity_dofs(self, L, num_entity_dofs): return generate_return_sizet_switch(L, "d", num_entity_dofs, 0) def num_entity_closure_dofs(self, L, num_entity_closure_dofs): return generate_return_sizet_switch(L, "d", num_entity_closure_dofs, 0) def tabulate_dofs(self, L, ir): # Input arguments entity_indices = L.Symbol("entity_indices") num_mesh_entities = L.Symbol("num_global_entities") # Output arguments dofs_variable = L.Symbol("dofs") ir = ir["tabulate_dofs"] if ir is None: # Special case for SpaceOfReals, ir returns None # FIXME: This is how the old code did in this case, # is it correct? Is there only 1 dof? # I guess VectorElement(Real) handled elsewhere? code = [L.Assign(dofs_variable[0], 0)] return L.StatementList(code) # Extract representation #(dofs_per_element, num_dofs_per_element, need_offset, fakes) = ir # names from original ffc code (subelement_dofs, num_dofs_per_subelement, need_offset, is_subelement_real) = ir #len(is_subelement_real) == len(subelement_dofs) == len(num_dofs_per_subelement) == number of combined (flattened) subelements? #is_subelement_real[subelement_number] is bool, is subelement Real? #num_dofs_per_subelement[subelement_number] is int, number of dofs for each (flattened) subelement #subelement_dofs[subelement_number] is list of entity_dofs for each (flattened) subelement #len(entity_dofs) == tdim+1 #len(entity_dofs[cell_entity_dim]) == "num_cell_entities[dim]" (i.e. 
3 vertices, 3 edges, 1 face for triangle) #len(entity_dofs[cell_entity_dim][entity_index[dim][k]]) == number of dofs for this entity #num = entity_dofs[dim] #entity_dofs =? entity_dofs[d][i][:] # dofs on entity (d,i) # Collect code pieces in list code = [] # Declare offset if needed if need_offset: offset = L.Symbol("offset") code.append(L.VariableDecl("std::size_t", offset, value=0)) else: offset = 0 # Generate code for each element subelement_offset = 0 for (subelement_index, entity_dofs) in enumerate(subelement_dofs): # Handle is_subelement_real (Space of reals) if is_subelement_real[subelement_index]: assert num_dofs_per_subelement[subelement_index] == 1 code.append(L.Assign(dofs_variable[subelement_offset], offset)) if need_offset: code.append(L.AssignAdd(offset, 1)) subelement_offset += 1 continue # Generate code for each degree of freedom for each dimension for (cell_entity_dim, dofs_on_cell_entity) in enumerate(entity_dofs): num_dofs_per_mesh_entity = len(dofs_on_cell_entity[0]) assert all(num_dofs_per_mesh_entity == len(dofs) for dofs in dofs_on_cell_entity) # Ignore if no dofs for this dimension if num_dofs_per_mesh_entity == 0: continue # For each cell entity of this dimension for (cell_entity_index, dofs) in enumerate(dofs_on_cell_entity): # dofs is a list of the local dofs that live on this cell entity # find offset for this particular mesh entity entity_offset = len(dofs) * entity_indices[cell_entity_dim, cell_entity_index] for (j, dof) in enumerate(dofs): # dof is the local dof index on the subelement # j is the local index of dof among the dofs # on this particular cell/mesh entity local_dof_index = subelement_offset + dof global_dof_index = offset + entity_offset + j code.append(L.Assign(dofs_variable[local_dof_index], global_dof_index)) # Update offset corresponding to mesh entity: if need_offset: value = num_dofs_per_mesh_entity * num_mesh_entities[cell_entity_dim] code.append(L.AssignAdd(offset, value)) subelement_offset += num_dofs_per_subelement[subelement_index] return L.StatementList(code) def tabulate_facet_dofs(self, L, ir): all_facet_dofs = ir["tabulate_facet_dofs"] # Input arguments facet = L.Symbol("facet") dofs = L.Symbol("dofs") # For each facet, copy all_facet_dofs[facet][:] into output argument array dofs[:] cases = [] for f, single_facet_dofs in enumerate(all_facet_dofs): assignments = [L.Assign(dofs[i], dof) for (i, dof) in enumerate(single_facet_dofs)] if assignments: cases.append((f, L.StatementList(assignments))) if cases: return L.Switch(facet, cases, autoscope=False) else: return L.NoOp() def tabulate_entity_dofs(self, L, ir): entity_dofs, num_dofs_per_entity = ir["tabulate_entity_dofs"] # Output argument array dofs = L.Symbol("dofs") # Input arguments d = L.Symbol("d") i = L.Symbol("i") # TODO: Removed check for (d <= tdim + 1) tdim = len(num_dofs_per_entity) - 1 # Generate cases for each dimension: all_cases = [] for dim in range(tdim + 1): # Ignore if no entities for this dimension if num_dofs_per_entity[dim] == 0: continue # Generate cases for each mesh entity cases = [] for entity in range(len(entity_dofs[dim])): casebody = [] for (j, dof) in enumerate(entity_dofs[dim][entity]): casebody += [L.Assign(dofs[j], dof)] cases.append((entity, L.StatementList(casebody))) # Generate inner switch # TODO: Removed check for (i <= num_entities-1) inner_switch = L.Switch(i, cases, autoscope=False) all_cases.append((dim, inner_switch)) if all_cases: return L.Switch(d, all_cases, autoscope=False) else: return L.NoOp() def tabulate_entity_closure_dofs(self, L, 
ir): # Extract variables from ir entity_closure_dofs, entity_dofs, num_dofs_per_entity = \ ir["tabulate_entity_closure_dofs"] # Output argument array dofs = L.Symbol("dofs") # Input arguments d = L.Symbol("d") i = L.Symbol("i") # TODO: Removed check for (d <= tdim + 1) tdim = len(num_dofs_per_entity) - 1 # Generate cases for each dimension: all_cases = [] for dim in range(tdim + 1): num_entities = len(entity_dofs[dim]) # Generate cases for each mesh entity cases = [] for entity in range(num_entities): casebody = [] for (j, dof) in enumerate(entity_closure_dofs[(dim, entity)]): casebody += [L.Assign(dofs[j], dof)] cases.append((entity, L.StatementList(casebody))) # Generate inner switch unless empty if cases: # TODO: Removed check for (i <= num_entities-1) inner_switch = L.Switch(i, cases, autoscope=False) all_cases.append((dim, inner_switch)) if all_cases: return L.Switch(d, all_cases, autoscope=False) else: return L.NoOp() def num_sub_dofmaps(self, L, num_sub_dofmaps): return L.Return(num_sub_dofmaps) def create_sub_dofmap(self, L, ir): classnames = ir["create_sub_dofmap"] return generate_return_new_switch(L, "i", classnames, factory=ir["jit"]) ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/evalderivs.py000066400000000000000000000314331346026536000226370ustar00rootroot00000000000000# -*- coding: utf-8 -*- """Work in progress translation of FFC evaluatebasis code to uflacs CNodes format.""" import numpy from ffc.log import error from ffc.uflacs.backends.ufc.utils import generate_error from ffc.uflacs.backends.ufc.evaluatebasis import generate_expansion_coefficients, generate_compute_basisvalues # Used for various indices and arrays in this file index_type = "std::size_t" def generate_evaluate_reference_basis_derivatives(L, data, parameters): # Cutoff for feature to disable generation of this code (consider removing after benchmarking final result) if isinstance(data, str): msg = "evaluate_reference_basis_derivatives: %s" % (data,) return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Get some known dimensions element_cellname = data["cellname"] tdim = data["topological_dimension"] max_degree = data["max_degree"] reference_value_size = data["reference_value_size"] num_dofs = len(data["dofs_data"]) # Output argument reference_values = L.Symbol("reference_values") # Input arguments order = L.Symbol("order") num_points = L.Symbol("num_points") X = L.Symbol("X") # Loop indices ip = L.Symbol("ip") # point idof = L.Symbol("i") # dof c = L.Symbol("c") # component r = L.Symbol("r") # derivative number # Define symbol for number of derivatives of given order num_derivatives = L.Symbol("num_derivatives") reference_values_size = num_points * num_dofs * num_derivatives * reference_value_size # FIXME: validate these dimensions ref_values = L.FlattenedArray(reference_values, dims=(num_points, num_dofs, num_derivatives, reference_value_size)) # From evaluatebasis.py: #ref_values = L.FlattenedArray(reference_values, dims=(num_points, num_dofs, reference_value_size)) # Initialization (zeroing) and cutoffs outside valid range of orders setup_code = [ # Cutoff to evaluate_basis for order 0 L.If(L.EQ(order, 0), [ L.Call("evaluate_reference_basis", (reference_values, num_points, X)), L.Return() ]), # Compute number of derivatives of this order L.VariableDecl("const " + index_type, num_derivatives, value=L.Call("std::pow", (tdim, order))), # Reset output values to zero L.MemZero(reference_values, reference_values_size), # Cutoff for higher order than we have L.If(L.GT(order, max_degree), 
L.Return()), ] # If max_degree is zero, we don't need to generate any more code if max_degree == 0: return setup_code # Tabulate dmats tables for all dofs and all derivative directions dmats_names, dmats_code = generate_tabulate_dmats(L, data["dofs_data"]) # Generate code with static tables of expansion coefficients tables_code, coefficients_for_dof = generate_expansion_coefficients(L, data["dofs_data"]) # Generate code to compute tables of basisvalues basisvalues_code, basisvalues_for_degree, need_fiat_coordinates = \ generate_compute_basisvalues(L, data["dofs_data"], element_cellname, tdim, X, ip) # Generate all possible combinations of derivatives. combinations_code, combinations = _generate_combinations(L, tdim, max_degree, order, num_derivatives) # Define symbols for variables populated inside dof switch derivatives = L.Symbol("derivatives") reference_offset = L.Symbol("reference_offset") num_components = L.Symbol("num_components") # Get number of components of each basis function (>1 for dofs of piola mapped subelements) num_components_values = [dof_data["num_components"] for dof_data in data["dofs_data"]] # Offset into parent mixed element to first component of each basis function reference_offset_values = [dof_data["reference_offset"] for dof_data in data["dofs_data"]] # Max dimensions for the reference derivatives for each dof max_num_derivatives = tdim**max_degree max_num_components = max(num_components_values) # Add constant tables of these numbers tables_code += [ L.ArrayDecl("const " + index_type, reference_offset, num_dofs, values=reference_offset_values), L.ArrayDecl("const " + index_type, num_components, num_dofs, values=num_components_values), ] # Access reference derivatives compactly derivs = L.FlattenedArray(derivatives, dims=(num_components[idof], num_derivatives)) # Create code for all basis values (dofs). 
dof_cases = [] for i_dof, dof_data in enumerate(data["dofs_data"]): embedded_degree = dof_data["embedded_degree"] basisvalues = basisvalues_for_degree[embedded_degree] shape_dmats = numpy.shape(dof_data["dmats"][0]) if shape_dmats[0] != shape_dmats[1]: error("Something is wrong with the dmats:\n%s" % str(dof_data["dmats"])) aux = L.Symbol("aux") dmats = L.Symbol("dmats") dmats_old = L.Symbol("dmats_old") dmats_name = dmats_names[i_dof] # Create dmats matrix by multiplication comb = L.Symbol("comb") s = L.Symbol("s") t = L.Symbol("t") u = L.Symbol("u") tu = L.Symbol("tu") aux_computation_code = [ L.ArrayDecl("double", aux, shape_dmats[0], values=0), L.Comment("Declare derivative matrix (of polynomial basis)."), L.ArrayDecl("double", dmats, shape_dmats, values=0), L.Comment("Initialize dmats."), L.VariableDecl(index_type, comb, combinations[r, 0]), L.MemCopy(L.AddressOf(dmats_name[comb][0][0]), L.AddressOf(dmats[0][0]), shape_dmats[0]*shape_dmats[1]), L.Comment("Looping derivative order to generate dmats."), L.ForRange(s, 1, order, index_type=index_type, body=[ L.Comment("Store previous dmats matrix."), L.ArrayDecl("double", dmats_old, shape_dmats), L.MemCopy(L.AddressOf(dmats[0][0]), L.AddressOf(dmats_old[0][0]), shape_dmats[0]*shape_dmats[1]), L.Comment("Resetting dmats."), L.MemZero(L.AddressOf(dmats[0][0]), shape_dmats[0]*shape_dmats[1]), L.Comment("Update dmats using an inner product."), L.Assign(comb, combinations[r, s]), L.ForRange(t, 0, shape_dmats[0], index_type=index_type, body= L.ForRange(u, 0, shape_dmats[1], index_type=index_type, body= L.ForRange(tu, 0, shape_dmats[0], index_type=index_type, body= L.AssignAdd(dmats[t, u], dmats_name[comb, t, tu] * dmats_old[tu, u]))))]), L.ForRange(s, 0, shape_dmats[0], index_type=index_type, body= L.ForRange(t, 0, shape_dmats[1], index_type=index_type, body= L.AssignAdd(aux[s], dmats[s, t] * basisvalues[t]))) ] # Unrolled loop over components of basis function n = dof_data["num_components"] compute_ref_derivs_code = [L.Assign(derivs[cc][r], 0.0) for cc in range(n)] compute_ref_derivs_code += [L.ForRange(s, 0, shape_dmats[0], index_type=index_type, body= [L.AssignAdd(derivs[cc][r], coefficients_for_dof[i_dof][cc][s] * aux[s]) for cc in range(n)])] embedded_degree = dof_data["embedded_degree"] basisvalues = basisvalues_for_degree[embedded_degree] # Compute the derivatives of the basisfunctions on the reference (FIAT) element, # as the dot product of the new coefficients and basisvalues. case_code = [L.Comment("Compute reference derivatives for dof %d." % i_dof), # Accumulate sum_s coefficients[s] * aux[s] L.ForRange(r, 0, num_derivatives, index_type=index_type, body=[ aux_computation_code, compute_ref_derivs_code ])] dof_cases.append((i_dof, case_code)) # Loop over all dofs, entering a different switch case in each iteration. # This is a legacy from the previous implementation where the loop # was in a separate function and all the setup above was also repeated # in a call for each dof. # TODO: Further refactoring is needed to improve on this situation, # but at least it's better than before. There's probably more code and # computations that can be shared between dofs, and this would probably # be easier to fix if mixed elements were handled separately! 
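
    # Illustrative sketch (plain NumPy, not the generated C++) of what one
    # switch case above computes for a single dof and a single derivative
    # multi-index: chain the per-direction derivative matrices selected by the
    # combination, apply the product to the basisvalues, then contract with the
    # expansion coefficients.  Assumed shapes: dmats (tdim, n, n),
    # coefficients (num_components, n), basisvalues (n,); combination is a
    # tuple of derivative directions.
    def _reference_derivatives_sketch(dmats, coefficients, basisvalues, combination):
        import numpy
        dmats = numpy.asarray(dmats)
        D = numpy.eye(dmats.shape[1])
        for direction in combination:
            # dmats_new = dmats[direction] * dmats_old, as in the generated loop
            D = dmats[direction] @ D
        aux = D @ numpy.asarray(basisvalues)
        # One value per component of this (possibly vector-valued) basis function
        return numpy.asarray(coefficients) @ aux
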
dof_loop_code = [ L.Comment("Loop over all dofs"), L.ForRange(idof, 0, num_dofs, index_type=index_type, body=[ L.ArrayDecl("double", derivatives, max_num_components * max_num_derivatives, 0.0), L.Switch(idof, dof_cases), L.ForRange(r, 0, num_derivatives, index_type=index_type, body= [ L.ForRange(c, 0, num_components[idof], index_type=index_type, body=[ L.Assign(ref_values[ip][idof][r][reference_offset[idof] + c], derivs[c][r]), # FIXME: validate ref_values dims ]), ]) ]), ] # Define loop over points final_loop_code = [ L.ForRange(ip, 0, num_points, index_type=index_type, body= basisvalues_code + dof_loop_code ) ] # Stitch it all together code = ( setup_code + dmats_code + tables_code + combinations_code + final_loop_code ) return code def generate_tabulate_dmats(L, dofs_data): "Tabulate the derivatives of the polynomial base" alignas = 32 # Emit code for the dmats we've actually used dmats_code = [L.Comment("Tables of derivatives of the polynomial base (transpose).")] dmats_names = [] all_matrices = [] for idof, dof_data in enumerate(dofs_data): # Get derivative matrices (coefficients) of basis functions, computed by FIAT at compile time. derivative_matrices = dof_data["dmats"] num_mats = len(derivative_matrices) num_members = dof_data["num_expansion_members"] # Generate tables for each spatial direction. matrix = numpy.zeros((num_mats, num_members, num_members)) for i, dmat in enumerate(derivative_matrices): # Extract derivatives for current direction # (take transpose, FIAT_NEW PolynomialSet.tabulate()). matrix[i,...] = numpy.transpose(dmat) # TODO: Use precision from parameters here from ffc.uflacs.elementtables import clamp_table_small_numbers matrix = clamp_table_small_numbers(matrix) # O(n^2) matrix matching... name = None for oldname, oldmatrix in all_matrices: if matrix.shape == oldmatrix.shape and numpy.allclose(matrix, oldmatrix): name = oldname break if name is None: # Define variable name for coefficients for this dof name = L.Symbol("dmats%d" % (idof,)) all_matrices.append((name, matrix)) # Declare new dmats table with unique values decl = L.ArrayDecl("static const double", name, (num_mats, num_members, num_members), values=matrix, alignas=alignas) dmats_code.append(decl) # Append name for each dof dmats_names.append(name) return dmats_names, dmats_code def _generate_combinations(L, tdim, max_degree, order, num_derivatives, suffix=""): max_num_derivatives = tdim**max_degree combinations = L.Symbol("combinations" + suffix) # This precomputes the combinations for each order and stores in code as table # Python equivalent precomputed for each valid order: combinations_shape = (max_degree, max_num_derivatives, max_degree) all_combinations = numpy.zeros(combinations_shape, dtype=int) for q in range(1, max_degree+1): for row in range(1, max_num_derivatives): for num in range(0, row): for col in range(q-1, -1, -1): if all_combinations[q-1][row][col] > tdim - 2: all_combinations[q-1][row][col] = 0 else: all_combinations[q-1][row][col] += 1 break code = [ L.Comment("Precomputed combinations"), L.ArrayDecl("const " + index_type, combinations, combinations_shape, values=all_combinations), ] # Select the right order for further access combinations = combinations[order-1] return code, combinations ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/evaluatebasis.py000066400000000000000000000520561346026536000233270ustar00rootroot00000000000000# -*- coding: utf-8 -*- """Work in progress translation of FFC evaluatebasis code to uflacs CNodes format.""" import numpy from ffc.log import error from 
ffc.uflacs.backends.ufc.utils import generate_error # Used for various indices and arrays in this file index_type = "std::size_t" def tabulate_coefficients(L, dof_data): """This function tabulates the element coefficients that are generated by FIAT at compile time.""" # Get coefficients from basis functions, computed by FIAT at # compile time. coefficients = dof_data["coeffs"] # Initialise return code. code = [L.Comment("Table(s) of coefficients")] # Get number of members of the expansion set. num_mem = dof_data["num_expansion_members"] # Generate tables for each component. for i, coeffs in enumerate(coefficients): # Variable name for coefficients. name = L.Symbol("coefficients%d" % i) # Generate array of values. code += [L.ArrayDecl("static const double", name, num_mem, coeffs)] return code def generate_evaluate_reference_basis(L, data, parameters): """Generate code to evaluate element basisfunctions at an arbitrary point on the reference element. The value(s) of the basisfunction is/are computed as in FIAT as the dot product of the coefficients (computed at compile time) and basisvalues which are dependent on the coordinate and thus have to be computed at run time. The function should work for all elements supported by FIAT, but it remains untested for tensor valued elements. This code is adapted from code in FFC which computed the basis from physical coordinates, and also to use UFLACS utilities. The FFC code has a comment "From FIAT_NEW.polynomial_set.tabulate()". """ # Cutoff for feature to disable generation of this code (consider # removing after benchmarking final result) if isinstance(data, str): msg = "evaluate_reference_basis: %s" % (data,) return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Get some known dimensions element_cellname = data["cellname"] tdim = data["topological_dimension"] reference_value_size = data["reference_value_size"] num_dofs = len(data["dofs_data"]) # Input geometry num_points = L.Symbol("num_points") X = L.Symbol("X") # Output values reference_values = L.Symbol("reference_values") ref_values = L.FlattenedArray(reference_values, dims=(num_points, num_dofs, reference_value_size)) # Loop indices ip = L.Symbol("ip") k = L.Symbol("k") c = L.Symbol("c") r = L.Symbol("r") # Generate code with static tables of expansion coefficients tables_code, coefficients_for_dof = generate_expansion_coefficients(L, data["dofs_data"]) # Reset reference_values[:] to 0 reset_values_code = [ L.ForRange(k, 0, num_points*num_dofs*reference_value_size, index_type=index_type, body=L.Assign(reference_values[k], 0.0)) ] setup_code = tables_code + reset_values_code # Generate code to compute tables of basisvalues basisvalues_code, basisvalues_for_degree, need_fiat_coordinates = \ generate_compute_basisvalues(L, data["dofs_data"], element_cellname, tdim, X, ip) # Accumulate products of basisvalues and coefficients into values accumulation_code = [ L.Comment("Accumulate products of coefficients and basisvalues"), ] for idof, dof_data in enumerate(data["dofs_data"]): embedded_degree = dof_data["embedded_degree"] num_components = dof_data["num_components"] num_members = dof_data["num_expansion_members"] # In ffc representation, this is extracted per dof # (but will coincide for some dofs of piola mapped elements): reference_offset = dof_data["reference_offset"] # Select the right basisvalues for this dof basisvalues = basisvalues_for_degree[embedded_degree] # Select the right coefficients for this dof coefficients = coefficients_for_dof[idof] # Generate basis 
accumulation loop if num_components > 1: # Could just simplify by using this generic code # and dropping the below two special cases accumulation_code += [ L.ForRange(c, 0, num_components, index_type=index_type, body=L.ForRange(r, 0, num_members, index_type=index_type, body=L.AssignAdd(ref_values[ip, idof, reference_offset + c], coefficients[c, r] * basisvalues[r]))) ] elif num_members > 1: accumulation_code += [ L.ForRange(r, 0, num_members, index_type=index_type, body=L.AssignAdd(ref_values[ip, idof, reference_offset], coefficients[0, r] * basisvalues[r])) ] else: accumulation_code += [ L.AssignAdd(ref_values[ip, idof, reference_offset], coefficients[0, 0] * basisvalues[0]) ] # TODO: Move this mapping to its own ufc function # e.g. finite_element::apply_element_mapping(reference_values, # J, K) # code += _generate_apply_mapping_to_computed_values(L, dof_data) # Only works for affine (no-op) # Stitch it all together code = [ setup_code, L.ForRange(ip, 0, num_points, index_type=index_type, body=basisvalues_code + accumulation_code) ] return code def generate_expansion_coefficients(L, dofs_data): # TODO: Use precision parameter to format coefficients to match # legacy implementation and make regression tests more robust all_tables = [] tables_code = [] coefficients_for_dof = [] for idof, dof_data in enumerate(dofs_data): num_components = dof_data["num_components"] num_members = dof_data["num_expansion_members"] fiat_coefficients = dof_data["coeffs"] # Check if any fiat_coefficients tables in expansion_coefficients # are equal and reuse instead of declaring new. coefficients = None # NB: O(n^2) loop over tables for symbol, table in all_tables: if table.shape == fiat_coefficients.shape and numpy.allclose(table, fiat_coefficients): coefficients = symbol break # Create separate variable name for coefficients table for each dof if coefficients is None: coefficients = L.Symbol("coefficients%d" % idof) all_tables.append((coefficients, fiat_coefficients)) # Create static table with expansion coefficients computed by FIAT compile time. 
tables_code += [L.ArrayDecl("static const double", coefficients, (num_components, num_members), values=fiat_coefficients)] # Store symbol reference for this dof coefficients_for_dof.append(coefficients) return tables_code, coefficients_for_dof def generate_compute_basisvalues(L, dofs_data, element_cellname, tdim, X, ip): basisvalues_code = [ L.Comment("Compute basisvalues for each relevant embedded degree"), ] basisvalues_for_degree = {} need_fiat_coordinates = False Y = L.Symbol("Y") for idof, dof_data in enumerate(dofs_data): embedded_degree = dof_data["embedded_degree"] if embedded_degree: need_fiat_coordinates = True if embedded_degree not in basisvalues_for_degree: num_members = dof_data["num_expansion_members"] basisvalues = L.Symbol("basisvalues%d" % embedded_degree) bfcode = _generate_compute_basisvalues(L, basisvalues, Y, element_cellname, embedded_degree, num_members) basisvalues_code += [L.StatementList(bfcode)] # Store symbol reference for this degree basisvalues_for_degree[embedded_degree] = basisvalues if need_fiat_coordinates: # Mapping from UFC reference cell coordinate X to FIAT reference cell coordinate Y fiat_coordinate_mapping = [ L.Comment("Map from UFC reference coordinate X to FIAT reference coordinate Y"), L.ArrayDecl("const double", Y, (tdim,), values=[2.0*X[ip*tdim + jj]-1.0 for jj in range(tdim)]), ] basisvalues_code = fiat_coordinate_mapping + basisvalues_code return basisvalues_code, basisvalues_for_degree, need_fiat_coordinates def _generate_compute_basisvalues(L, basisvalues, Y, element_cellname, embedded_degree, num_members): """From FIAT_NEW.expansions.""" # Branch off to cell specific implementations if element_cellname == "interval": code = _generate_compute_interval_basisvalues(L, basisvalues, Y, embedded_degree, num_members) elif element_cellname == "triangle": code = _generate_compute_triangle_basisvalues(L, basisvalues, Y, embedded_degree, num_members) elif element_cellname == "tetrahedron": code = _generate_compute_tetrahedron_basisvalues(L, basisvalues, Y, embedded_degree, num_members) else: error() return code def _jrc(a, b, n): an = float((2*n+1+a+b) * (2*n+2+a+b))/float(2*(n+1) * (n+1+a+b)) bn = float((a*a-b*b) * (2*n+1+a+b))/float(2*(n+1) * (2*n+a+b) * (n+1+a+b)) cn = float((n+a) * (n+b) * (2*n+2+a+b))/float((n+1) * (n+1+a+b) * (2*n+a+b)) return (an, bn, cn) def _generate_compute_interval_basisvalues(L, basisvalues, Y, embedded_degree, num_members): # FIAT_NEW.expansions.LineExpansionSet. # Create zero-initialized array for with basisvalues code = [L.ArrayDecl("double", basisvalues, (num_members,), values=0)] # FIAT_NEW.jacobi.eval_jacobi_batch(a,b,n,xs) # for ii in range(result.shape[1]): # result[0,ii] = 1.0 + xs[ii,0] - xs[ii,0] # The initial value basisvalues[0] is always 1.0 code += [L.Assign(basisvalues[0], 1.0)] if embedded_degree > 0: # FIAT_NEW.jacobi.eval_jacobi_batch(a,b,n,xs). # result[1,:] = 0.5 * ( a - b + ( a + b + 2.0 ) * xsnew ) # The initial value basisvalues[1] is always Y[0] code += [L.Assign(basisvalues[1], Y[0])] # Only active if embedded_degree > 1. for r in range(2, embedded_degree+1): # FIAT_NEW.jacobi.eval_jacobi_batch(a,b,n,xs). 
# apb = a + b (equal to 0 because of function arguments) # for k in range(2,n+1): # a1 = 2.0 * k * ( k + apb ) * ( 2.0 * k + apb - 2.0 ) # a2 = ( 2.0 * k + apb - 1.0 ) * ( a * a - b * b ) # a3 = ( 2.0 * k + apb - 2.0 ) \ # * ( 2.0 * k + apb - 1.0 ) \ # * ( 2.0 * k + apb ) # a4 = 2.0 * ( k + a - 1.0 ) * ( k + b - 1.0 ) \ # * ( 2.0 * k + apb ) # a2 = a2 / a1 # a3 = a3 / a1 # a4 = a4 / a1 # result[k,:] = ( a2 + a3 * xsnew ) * result[k-1,:] \ # - a4 * result[k-2,:] # The below implements the above (with a = b = apb = 0) a1 = float(2*r*r*(2*r - 2)) a3 = ((2*r - 2)*(2*r - 1)*(2*r)) / a1 a4 = (2*(r - 1)*(r - 1)*(2*r)) / a1 value = (Y[0] * (a3 * basisvalues[r-1])) - a4*basisvalues[r-2] code += [L.Assign(basisvalues[r], value)] # FIAT_NEW code # results = numpy.zeros( ( n+1 , len(pts) ) , type( pts[0][0] ) ) # for k in range( n + 1 ): # results[k,:] = psitilde_as[k,:] * math.sqrt( k + 0.5 ) # Scale values p = L.Symbol("p") code += [L.ForRange(p, 0, embedded_degree + 1, index_type=index_type, body=L.AssignMul(basisvalues[p], L.Call("std::sqrt", (0.5 + p,))))] return code def _generate_compute_triangle_basisvalues(L, basisvalues, Y, embedded_degree, num_members): # FIAT_NEW.expansions.TriangleExpansionSet. def _idx2d(p, q): return (p + q)*(p + q + 1)//2 + q # Create zero-initialized array for with basisvalues code = [L.ArrayDecl("double", basisvalues, (num_members,), values=0)] # Compute helper factors # FIAT_NEW code # f1 = (1.0+2*x+y)/2.0 # f2 = (1.0 - y) / 2.0 # f3 = f2**2 # The initial value basisvalues 0 is always 1.0. # FIAT_NEW code # for ii in range( results.shape[1] ): # results[0,ii] = 1.0 + apts[ii,0]-apts[ii,0]+apts[ii,1]-apts[ii,1] code += [L.Assign(basisvalues[0], 1.0)] # Only continue if the embedded degree is larger than zero. if embedded_degree == 0: return code # The initial value of basisfunction 1 is equal to f1. # FIAT_NEW code # results[idx(1,0),:] = f1 f1 = L.Symbol("tmp1_%d" % embedded_degree) code += [L.VariableDecl("const double", f1, (1.0 + 2.0*Y[0] + Y[1]) / 2.0)] code += [L.Assign(basisvalues[1], f1)] # NOTE: KBO: The order of the loops is VERY IMPORTANT!! # FIAT_NEW code (loop 1 in FIAT) # for p in range(1,n): # a = (2.0*p+1)/(1.0+p) # b = p / (p+1.0) # results[idx(p+1,0)] = a * f1 * results[idx(p,0),:] \ # - b * f3 *results[idx(p-1,0),:] # Only active if embedded_degree > 1. if embedded_degree > 1: f2 = L.Symbol("tmp2_%d" % embedded_degree) f3 = L.Symbol("tmp3_%d" % embedded_degree) code += [L.VariableDecl("const double", f2, (1.0 - Y[1]) / 2.0)] code += [L.VariableDecl("const double", f3, f2*f2)] for r in range(1, embedded_degree): rr = _idx2d((r + 1), 0) ss = _idx2d(r, 0) tt = _idx2d((r - 1), 0) A = (2*r + 1.0)/(r + 1) B = r/(1.0 + r) v1 = A * f1 * basisvalues[ss] v2 = B * f3 * basisvalues[tt] value = v1 - v2 code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 2 in FIAT). # for p in range(n): # results[idx(p,1),:] = 0.5 * (1+2.0*p+(3.0+2.0*p)*y) \ # * results[idx(p,0)] for r in range(0, embedded_degree): # (p+q)*(p+q+1)//2 + q rr = _idx2d(r, 1) ss = _idx2d(r, 0) A = 0.5*(1 + 2*r) B = 0.5*(3 + 2*r) C = A + (B * Y[1]) value = C * basisvalues[ss] code += [L.Assign(basisvalues[rr], value)] # Only active if embedded_degree > 1. # FIAT_NEW code (loop 3 in FIAT). # for p in range(n-1): # for q in range(1,n-p): # (a1,a2,a3) = jrc(2*p+1,0,q) # results[idx(p,q+1),:] \ # = ( a1 * y + a2 ) * results[idx(p,q)] \ # - a3 * results[idx(p,q-1)] # Only active if embedded_degree > 1. 
for r in range(0, embedded_degree - 1): for s in range(1, embedded_degree - r): rr = _idx2d(r, (s + 1)) ss = _idx2d(r, s) tt = _idx2d(r, s - 1) A, B, C = _jrc(2*r + 1, 0, s) value = (B + A*Y[1])*basisvalues[ss] - C*basisvalues[tt] code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 4 in FIAT). # for p in range(n+1): # for q in range(n-p+1): # results[idx(p,q),:] *= math.sqrt((p+0.5)*(p+q+1.0)) for r in range(0, embedded_degree + 1): for s in range(0, embedded_degree + 1 - r): rr = _idx2d(r, s) A = (r + 0.5)*(r + s + 1) code += [L.AssignMul(basisvalues[rr], L.Call("std::sqrt", (A,)))] return code def _generate_compute_tetrahedron_basisvalues(L, basisvalues, Y, embedded_degree, num_members): # FIAT_NEW.expansions.TetrahedronExpansionSet. # FIAT_NEW code (compute index function) TetrahedronExpansionSet. # def idx(p,q,r): # return (p+q+r)*(p+q+r+1)*(p+q+r+2)//6 + (q+r)*(q+r+1)//2 + r def _idx3d(p, q, r): return (p+q+r)*(p+q+r+1)*(p+q+r+2)//6 + (q+r)*(q+r+1)//2 + r # Compute helper factors. # FIAT_NEW code # factor1 = 0.5 * ( 2.0 + 2.0*x + y + z ) # factor2 = (0.5*(y+z))**2 # factor3 = 0.5 * ( 1 + 2.0 * y + z ) # factor4 = 0.5 * ( 1 - z ) # factor5 = factor4 ** 2 # Create zero-initialized array for with basisvalues code = [L.ArrayDecl("double", basisvalues, (num_members,), values=0)] # The initial value basisvalues 0 is always 1.0. # FIAT_NEW code # for ii in range( results.shape[1] ): # results[0,ii] = 1.0 + apts[ii,0]-apts[ii,0]+apts[ii,1]-apts[ii,1] code += [L.Assign(basisvalues[0], 1.0)] # Only continue if the embedded degree is larger than zero. if embedded_degree == 0: return code # The initial value of basisfunction 1 is equal to f1. # FIAT_NEW code # results[idx(1,0),:] = f1 f1 = L.Symbol("tmp1_%d" % embedded_degree) code += [L.VariableDecl("const double", f1, 0.5*(2.0 + 2.0*Y[0] + Y[1] + Y[2]))] code += [L.Assign(basisvalues[1], f1)] # NOTE: KBO: The order of the loops is VERY IMPORTANT!! # FIAT_NEW code (loop 1 in FIAT). # for p in range(1,n): # a1 = ( 2.0 * p + 1.0 ) / ( p + 1.0 ) # a2 = p / (p + 1.0) # results[idx(p+1,0,0)] = a1 * factor1 * results[idx(p,0,0)] \ # -a2 * factor2 * results[ idx(p-1,0,0) ] # Only active if embedded_degree > 1. if embedded_degree > 1: f2 = L.Symbol("tmp2_%d" % embedded_degree) code += [L.VariableDecl("const double", f2, 0.25*(Y[1] + Y[2])*(Y[1] + Y[2]))] for r in range(1, embedded_degree): rr = _idx3d((r + 1), 0, 0) ss = _idx3d(r, 0, 0) tt = _idx3d((r - 1), 0, 0) A = (2*r + 1.0)/(r + 1) B = r/(r + 1.0) value = (A * f1 * basisvalues[ss]) - (B * f2 * basisvalues[tt]) code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 2 in FIAT). # q = 1 # for p in range(0,n): # results[idx(p,1,0)] = results[idx(p,0,0)] \ # * ( p * (1.0 + y) + ( 2.0 + 3.0 * y + z ) / 2 ) for r in range(0, embedded_degree): rr = _idx3d(r, 1, 0) ss = _idx3d(r, 0, 0) term0 = 0.5 * (2.0 + 3.0*Y[1] + Y[2]) term1 = float(r) * (1.0 + Y[1]) value = (term0 + term1) * basisvalues[ss] code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 3 in FIAT). # for p in range(0,n-1): # for q in range(1,n-p): # (aq,bq,cq) = jrc(2*p+1,0,q) # qmcoeff = aq * factor3 + bq * factor4 # qm1coeff = cq * factor5 # results[idx(p,q+1,0)] = qmcoeff * results[idx(p,q,0)] \ # - qm1coeff * results[idx(p,q-1,0)] # Only active if embedded_degree > 1. 
if embedded_degree > 1: f3 = L.Symbol("tmp3_%d" % embedded_degree) f4 = L.Symbol("tmp4_%d" % embedded_degree) f5 = L.Symbol("tmp5_%d" % embedded_degree) code += [L.VariableDecl("const double", f3, 0.5*(1.0 + 2.0*Y[1] + Y[2]))] code += [L.VariableDecl("const double", f4, 0.5*(1.0 - Y[2]))] code += [L.VariableDecl("const double", f5, f4*f4)] for r in range(0, embedded_degree - 1): for s in range(1, embedded_degree - r): rr = _idx3d(r, (s + 1), 0) ss = _idx3d(r, s, 0) tt = _idx3d(r, s - 1, 0) (A, B, C) = _jrc(2*r + 1, 0, s) term0 = ((A*f3) + (B*f4)) * basisvalues[ss] term1 = C * f5 * basisvalues[tt] value = term0 - term1 code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 4 in FIAT). # now handle r=1 # for p in range(n): # for q in range(n-p): # results[idx(p,q,1)] = results[idx(p,q,0)] \ # * ( 1.0 + p + q + ( 2.0 + q + p ) * z ) for r in range(0, embedded_degree): for s in range(0, embedded_degree - r): rr = _idx3d(r, s, 1) ss = _idx3d(r, s, 0) A = (float(2 + r + s) * Y[2]) + float(1 + r + s) value = A * basisvalues[ss] code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 5 in FIAT). # general r by recurrence # for p in range(n-1): # for q in range(0,n-p-1): # for r in range(1,n-p-q): # ar,br,cr = jrc(2*p+2*q+2,0,r) # results[idx(p,q,r+1)] = \ # (ar * z + br) * results[idx(p,q,r) ] \ # - cr * results[idx(p,q,r-1) ] # Only active if embedded_degree > 1. for r in range(0, embedded_degree - 1): for s in range(0, embedded_degree - r - 1): for t in range(1, embedded_degree - r - s): rr = _idx3d(r, s, (t + 1)) ss = _idx3d(r, s, t) tt = _idx3d(r, s, t - 1) (A, B, C) = _jrc(2*r + 2*s + 2, 0, t) az_b = B + A*Y[2] value = (az_b*basisvalues[ss]) - (C*basisvalues[tt]) code += [L.Assign(basisvalues[rr], value)] # FIAT_NEW code (loop 6 in FIAT). # for p in range(n+1): # for q in range(n-p+1): # for r in range(n-p-q+1): # results[idx(p,q,r)] *= math.sqrt((p+0.5)*(p+q+1.0)*(p+q+r+1.5)) for r in range(0, embedded_degree + 1): for s in range(0, embedded_degree - r + 1): for t in range(0, embedded_degree - r - s + 1): rr = _idx3d(r, s, t) A = (r + 0.5)*(r + s + 1)*(r + s + t + 1.5) code += [L.AssignMul(basisvalues[rr], L.Call("std::sqrt", (A,)))] return code ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/evaluatebasisderivatives.py000066400000000000000000000423211346026536000255670ustar00rootroot00000000000000# -*- coding: utf-8 -*- """Work in progress translation of FFC evaluatebasisderivatives code to uflacs CNodes format.""" import numpy import math from ffc.log import error from ffc.uflacs.backends.ufc.utils import generate_error from ffc.uflacs.backends.ufc.evaluatebasis import _generate_compute_basisvalues, tabulate_coefficients from ffc.uflacs.backends.ufc.evalderivs import _generate_combinations from ffc.uflacs.backends.ufc.jacobian import jacobian, inverse_jacobian, orientation, fiat_coordinate_mapping, _mapping_transform # Used for various indices and arrays in this file index_type = "std::size_t" def _compute_reference_derivatives(L, data, dof_data): """Compute derivatives on the reference element by recursively multiply coefficients with the relevant derivatives of the polynomial base until the requested order of derivatives has been reached. After this take the dot product with the basisvalues. 
""" # Prefetch formats to speed up code generation tdim = data["topological_dimension"] gdim = data["geometric_dimension"] max_degree = data["max_degree"] _t, _g = ("","") if (tdim == gdim) else ("_t", "_g") num_derivs_t = L.Symbol("num_derivatives" + _t) num_derivs_g = L.Symbol("num_derivatives" + _g) # Get number of components. num_components = dof_data["num_components"] # Get shape of derivative matrix (they should all have the same # shape) and verify that it is a square matrix. shape_dmats = dof_data["dmats"][0].shape if shape_dmats[0] != shape_dmats[1]: error("Something is wrong with the dmats:\n%s" % str(dof_data["dmats"])) code = [L.Comment("Compute reference derivatives.")] # Declare pointer to array that holds derivatives on the FIAT element code += [L.Comment("Declare array of derivatives on FIAT element.")] # The size of the array of reference derivatives is equal to the # number of derivatives times the number of components of the # basis element num_vals = num_components*num_derivs_t nds = tdim**max_degree * num_components mapping = dof_data["mapping"] if "piola" in mapping and "double" not in mapping and gdim > num_components : # In either of the Piola cases, the value space of the # derivatives is the geometric dimension rather than the # topological dimension. Increase size of derivatives array # if needed. nds = tdim**max_degree * gdim derivatives = L.Symbol("derivatives") code += [L.ArrayDecl("double", derivatives, nds, 0.0)] # Declare matrix of dmats (which will hold the matrix product of # all combinations) and dmats_old which is needed in order to # perform the matrix product. code += [L.Comment("Declare derivative matrix (of polynomial basis).")] value = numpy.eye(shape_dmats[0]) dmats = L.Symbol("dmats") code += [L.ArrayDecl("double", dmats, shape_dmats, value)] code += [L.Comment("Declare (auxiliary) derivative matrix (of polynomial basis).")] dmats_old = L.Symbol("dmats_old") code += [L.ArrayDecl("double", dmats_old, shape_dmats, value)] r = L.Symbol("r") s = L.Symbol("s") t = L.Symbol("t") n = L.Symbol("n") lines = [L.Comment("Reset dmats to identity"), L.MemZero(L.AddressOf(dmats[0][0]), shape_dmats[0]*shape_dmats[1]), L.ForRange(t, 0, shape_dmats[0], index_type=index_type, body=[L.Assign(dmats[t][t], 1.0)])] lines_s = [L.MemCopy(L.AddressOf(dmats[0][0]), L.AddressOf(dmats_old[0][0]), shape_dmats[0]*shape_dmats[1]), L.MemZero(L.AddressOf(dmats[0][0]), shape_dmats[0]*shape_dmats[1])] lines_s += [L.Comment("Update dmats using an inner product.")] # Create dmats matrix by multiplication comb = L.Symbol("combinations" + _t) for i in range(len(dof_data["dmats"])): lines_s += [L.Comment("_dmats_product(shape_dmats, comb[r][s], %d)" % i)] dmats_i = L.Symbol("dmats%d" % i) u = L.Symbol("u") tu = L.Symbol("tu") lines_comb = [L.ForRange(t, 0, shape_dmats[0], index_type=index_type, body = [L.ForRange(u, 0, shape_dmats[1], index_type=index_type, body = [L.ForRange(tu, 0, shape_dmats[1], index_type=index_type, body = [L.AssignAdd(dmats[t][u], dmats_old[tu][u] * dmats_i[t][tu])])])])] lines_s += [L.If(L.EQ(comb[n - 1][r][s], i), lines_comb)] lines += [L.Comment("Looping derivative order to generate dmats."), L.ForRange(s, 0, n, index_type=index_type, body=lines_s)] # Compute derivatives for all components lines_c = [] basisvalues = L.Symbol("basisvalues") for i in range(num_components): coeffs = L.Symbol("coefficients%d" % i) lines_c += [L.AssignAdd(derivatives[i*num_derivs_t + r], coeffs[s]*dmats[s,t]*basisvalues[t])] lines += [L.ForRange(s, 0, shape_dmats[0], 
index_type=index_type, body=[L.ForRange(t, 0, shape_dmats[1], index_type=index_type, body=lines_c)])] lines += _mapping_transform(L, data, dof_data, derivatives, r, num_derivs_t) code += [L.Comment("Loop possible derivatives."), L.ForRange(r, 0, num_derivs_t, index_type=index_type, body=lines)] return code def _transform_derivatives(L, data, dof_data): """Transform derivatives back to the physical element by applying the transformation matrix.""" tdim = data["topological_dimension"] gdim = data["geometric_dimension"] _t, _g = ("","") if (tdim == gdim) else ("_t", "_g") num_derivs_t = L.Symbol("num_derivatives" + _t) num_derivs_g = L.Symbol("num_derivatives" + _g) # Get number of components and offset. num_components = dof_data["num_components"] reference_offset = dof_data["reference_offset"] physical_offset = dof_data["physical_offset"] offset = reference_offset # physical_offset # FIXME: Should be physical offset but that breaks tests mapping = dof_data["mapping"] if "piola" in mapping and "double" not in mapping: # In either of the Piola cases, the value space of the derivatives # is the geometric dimension rather than the topological dimension. num_components = gdim code = [L.Comment("Transform derivatives back to physical element")] lines = [] r = L.Symbol("r") s = L.Symbol("s") values = L.Symbol("values") transform = L.Symbol("transform") derivatives = L.Symbol("derivatives") for i in range(num_components): lines += [L.AssignAdd(values[(offset + i)*num_derivs_g + r], transform[r, s]*derivatives[i*num_derivs_t + s])] code += [L.ForRange(r, 0, num_derivs_g, index_type=index_type, body=[L.ForRange(s, 0, num_derivs_t, index_type=index_type, body=lines)])] return code def _tabulate_dmats(L, dof_data): "Tabulate the derivatives of the polynomial base" # Get derivative matrices (coefficients) of basis functions, # computed by FIAT at compile time. code = [L.Comment("Tables of derivatives of the polynomial base (transpose).")] # Generate tables for each spatial direction. for i, dmat in enumerate(dof_data["dmats"]): # Extract derivatives for current direction (take transpose, # FIAT_NEW PolynomialSet.tabulate()). matrix = numpy.transpose(dmat) # Get shape and check dimension (This is probably not needed). # shape = numpy.shape(matrix) if not (matrix.shape[0] == matrix.shape[1] == dof_data["num_expansion_members"]): error("Something is wrong with the shape of dmats.") # Declare varable name for coefficients. table = L.Symbol("dmats%d" % i) code += [L.ArrayDecl("static const double", table, matrix.shape, matrix)] return code def _generate_dof_code(L, data, dof_data): "Generate code for a basis." basisvalues = L.Symbol("basisvalues") Y = L.Symbol("Y") element_cellname = data["cellname"] embedded_degree = dof_data["embedded_degree"] num_members = dof_data["num_expansion_members"] code = _generate_compute_basisvalues(L, basisvalues, Y, element_cellname, embedded_degree, num_members) # Tabulate coefficients. code += tabulate_coefficients(L, dof_data) # Tabulate coefficients for derivatives. code += _tabulate_dmats(L, dof_data) # Compute the derivatives of the basisfunctions on the reference # (FIAT) element, as the dot product of the new coefficients and # basisvalues. code += _compute_reference_derivatives(L, data, dof_data) # Transform derivatives to physical element by multiplication with # the transformation matrix. 
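    # The loop emitted by _transform_derivatives below is essentially
    #   values[(offset + i)*num_derivatives_g + r] +=
    #       transform[r][s] * derivatives[i*num_derivatives_t + s]
    # summed over the reference derivative index s, for each physical
    # derivative r and component i.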
code += _transform_derivatives(L, data, dof_data) return code def _generate_transform(L, element_cellname, gdim, tdim, max_degree): """Generate the transformation matrix, which is used to transform derivatives from reference element back to the physical element. """ max_g_d = gdim**max_degree max_t_d = tdim**max_degree K = L.Symbol("K") transform = L.Symbol("transform") col = L.Symbol("col") row = L.Symbol("row") _t, _g = ("","") if (tdim == gdim) else ("_t", "_g") comb_t = L.Symbol("combinations" + _t) comb_g = L.Symbol("combinations" + _g) num_derivatives_t = L.Symbol("num_derivatives" + _t) num_derivatives_g = L.Symbol("num_derivatives" + _g) K = L.FlattenedArray(K, dims=(tdim, gdim)) code = [L.Comment("Declare transformation matrix")] code += [L.ArrayDecl("double", transform, (max_g_d, max_t_d), numpy.ones((max_g_d, max_t_d)))] code += [L.Comment("Construct transformation matrix")] k = L.Symbol("k") n = L.Symbol("n") inner_loop = L.ForRange(col, 0, num_derivatives_t, index_type=index_type, body=[L.ForRange(k, 0, n, index_type=index_type, body=[L.AssignMul(transform[row][col], K[comb_t[n-1][col][k], comb_g[n-1][row][k]])])]) code += [L.ForRange(row, 0, num_derivatives_g, index_type=index_type, body=inner_loop)] return code def generate_evaluate_basis_derivatives(L, data): """Evaluate the derivatives of an element basisfunction at a point. The values are computed as in FIAT as the matrix product of the coefficients (computed at compile time), basisvalues which are dependent on the coordinate and thus have to be computed at run time and combinations (depending on the order of derivative) of dmats tables which hold the derivatives of the expansion coefficients. """ if isinstance(data, str): msg = "evaluate_basis_derivatives: %s" % data return [generate_error(L, msg, True)] # Initialise return code. code = [] # Get the element cell domain, geometric and topological dimension. element_cellname = data["cellname"] gdim = data["geometric_dimension"] tdim = data["topological_dimension"] max_degree = data["max_degree"] physical_value_size = data["physical_value_size"] # Compute number of derivatives that has to be computed, and # declare an array to hold the values of the derivatives on the # reference element. values = L.Symbol("values") n = L.Symbol("n") dofs = L.Symbol("dofs") x = L.Symbol("x") coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") if tdim == gdim: _t, _g = ("", "") num_derivatives_t = L.Symbol("num_derivatives") num_derivatives_g = L.Symbol("num_derivatives") code += [L.VariableDecl("std::size_t", num_derivatives_t, L.Call("std::pow", (tdim, n)))] else: _t, _g = ("_t", "_g") num_derivatives_t = L.Symbol("num_derivatives_t") num_derivatives_g = L.Symbol("num_derivatives_g") if max_degree > 0: code += [L.VariableDecl("std::size_t", num_derivatives_t, L.Call("std::pow", (tdim, n)))] code += [L.VariableDecl("std::size_t", num_derivatives_g, L.Call("std::pow", (gdim, n)))] # Reset all values. code += [L.MemZero(values, physical_value_size*num_derivatives_g)] # Handle values of argument 'n'. code += [L.Comment("Call evaluate_basis_all if order of derivatives is equal to zero.")] code += [L.If(L.EQ(n, 0), [L.Call("evaluate_basis", (L.Symbol("i"), values, x, coordinate_dofs, cell_orientation)), L.Return()])] # If max_degree is zero, return code (to avoid declarations such as # combinations[1][0]) and because there's nothing to compute.) 
if max_degree == 0: return code code += [L.Comment("If order of derivatives is greater than the maximum polynomial degree, return zeros.")] code += [L.If(L.GT(n, max_degree), [L.Return()])] # Generate geo code. code += jacobian(L, gdim, tdim, element_cellname) code += inverse_jacobian(L, gdim, tdim, element_cellname) if data["needs_oriented"] and tdim != gdim: code += orientation(L) code += fiat_coordinate_mapping(L, element_cellname, gdim) # Generate all possible combinations of derivatives. combinations_code_t, combinations_t = _generate_combinations(L, tdim, max_degree, n, num_derivatives_t, _t) code += combinations_code_t if tdim != gdim: combinations_code_g, combinations_g = _generate_combinations(L, gdim, max_degree, n, num_derivatives_g, "_g") code += combinations_code_g # Generate the transformation matrix. code += _generate_transform(L, element_cellname, gdim, tdim, max_degree) # Create code for all basis values (dofs). dof_cases = [] for i, dof_data in enumerate(data["dofs_data"]): dof_cases.append((i, _generate_dof_code(L, data, dof_data))) code += [L.Switch(L.Symbol("i"), dof_cases)] return code def generate_evaluate_basis_derivatives_all(L, data): """Like evaluate_basis, but return the values of all basis functions (dofs).""" if isinstance(data, str): msg = "evaluate_basis_derivatives_all: %s" % data return [generate_error(L, msg, True)] # Initialise return code code = [] # FIXME: KBO: Figure out which return format to use, either: # [dN0[0]/dx, dN0[0]/dy, dN0[1]/dx, dN0[1]/dy, dN1[0]/dx, # dN1[0]/dy, dN1[1]/dx, dN1[1]/dy, ...] # or # [dN0[0]/dx, dN1[0]/dx, ..., dN0[1]/dx, dN1[1]/dx, ..., # dN0[0]/dy, dN1[0]/dy, ..., dN0[1]/dy, dN1[1]/dy, ...] # or # [dN0[0]/dx, dN0[1]/dx, ..., dN1[0]/dx, dN1[1]/dx, ..., # dN0[0]/dy, dN0[1]/dy, ..., dN1[0]/dy, dN1[1]/dy, ...] # for vector (tensor elements), currently returning option 1. # FIXME: KBO: For now, just call evaluate_basis_derivatives and # map values accordingly, this will keep the amount of code at a # minimum. If it turns out that speed is an issue (overhead from # calling evaluate_basis), we can easily generate all the code. # Get total value shape and space dimension for entire element # (possibly mixed). physical_value_size = data["physical_value_size"] space_dimension = data["space_dimension"] max_degree = data["max_degree"] gdim = data["geometric_dimension"] tdim = data["topological_dimension"] n = L.Symbol("n") x = L.Symbol("x") coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") values = L.Symbol("values") # Special case where space dimension is one (constant elements). if space_dimension == 1: code += [L.Comment("Element is constant, calling evaluate_basis_derivatives.")] code += [L.Call("evaluate_basis_derivatives", (0, n, values, x, coordinate_dofs, cell_orientation))] return code # Compute number of derivatives. if tdim == gdim: num_derivatives = L.Symbol("num_derivatives") else: num_derivatives = L.Symbol("num_derivatives_g") # If n == 0, call evaluate_basis. code += [L.Comment("Call evaluate_basis_all if order of derivatives is equal to zero.")] code += [L.If(L.EQ(n, 0), [L.Call("evaluate_basis_all", (values, x, coordinate_dofs, cell_orientation)), L.Return()])] code += [L.VariableDecl("unsigned int", num_derivatives, L.Call("std::pow", (gdim, n)))] num_vals = physical_value_size * num_derivatives # Reset values. code += [L.Comment("Set values equal to zero.")] code += [L.MemZero(values, num_vals*space_dimension)] # If n > max_degree, return zeros. 
code += [L.Comment("If order of derivatives is greater than the maximum polynomial degree, return zeros.")] code += [L.If(L.GT(n, max_degree), [L.Return()])] # Declare helper value to hold single dof values and reset. code += [L.Comment("Helper variable to hold values of a single dof.")] nds = gdim**max_degree * physical_value_size dof_values = L.Symbol("dof_values") code += [L.ArrayDecl("double", dof_values, (nds,), 0.0)] # Create loop over dofs that calls evaluate_basis_derivatives for # a single dof and inserts the values into the global array. code += [L.Comment("Loop dofs and call evaluate_basis_derivatives.")] values = L.FlattenedArray(values, dims=(space_dimension, num_vals)) r = L.Symbol("r") s = L.Symbol("s") loop_s = L.ForRange(s, 0, num_vals, index_type=index_type, body=[L.Assign(values[r, s], dof_values[s])]) code += [L.ForRange(r, 0, space_dimension, index_type=index_type, body=[L.Call("evaluate_basis_derivatives", (r, n, dof_values, x, coordinate_dofs, cell_orientation)), loop_s])] return code ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/evaluatedof.py000066400000000000000000000360351346026536000227750ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg and Martin Sandve Alnæs, Chris Richardson # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . # Note: Much of the code in this file is a direct translation # from the old implementation in FFC, although some improvements # have been made to the generated code. from ffc.uflacs.backends.ufc.utils import generate_error from ffc.uflacs.backends.ufc.jacobian import jacobian, inverse_jacobian, orientation from ufl.permutation import build_component_numbering from ffc.utils import pick_first index_type="std::size_t" def reference_to_physical_map(cellname): "Returns a map from reference coordinates to physical element coordinates" if cellname == "interval": return lambda x: (1.0 - x[0], x[0]) elif cellname == "triangle": return lambda x: (1.0 - x[0] - x[1], x[0], x[1]) elif cellname == "tetrahedron": return lambda x: (1.0 - x[0] - x[1] - x[2], x[0], x[1], x[2]) elif cellname == "quadrilateral": return lambda x: ((1-x[0])*(1-x[1]), (1-x[0])*x[1], x[0]*(1-x[1]), x[0]*x[1]) elif cellname == "hexahedron": return lambda x: ((1-x[0])*(1-x[1])*(1-x[2]), (1-x[0])*(1-x[1])*x[2], (1-x[0])*x[1]*(1-x[2]), (1-x[0])*x[1]*x[2], x[0]*(1-x[1])*(1-x[2]), x[0]*(1-x[1])*x[2], x[0]*x[1]*(1-x[2]), x[0]*x[1]*x[2]) def _change_variables(L, mapping, gdim, tdim, offset): """Generate code for mapping function values according to 'mapping' and offset. The basics of how to map a field from a physical to the reference domain. (For the inverse approach -- see interpolatevertexvalues) Let g be a field defined on a physical domain T with physical coordinates x. Let T_0 be a reference domain with coordinates X. Assume that F: T_0 -> T such that x = F(X) Let J be the Jacobian of F, i.e J = dx/dX and let K denote the inverse of the Jacobian K = J^{-1}. 
Then we (currently) have the following four types of mappings: 'affine' mapping for g: G(X) = g(x) For vector fields g: 'contravariant piola' mapping for g: G(X) = det(J) K g(x) i.e G_i(X) = det(J) K_ij g_j(x) 'covariant piola' mapping for g: G(X) = J^T g(x) i.e G_i(X) = J^T_ij g(x) = J_ji g_j(x) 'double covariant piola' mapping for g: G(X) = J^T g(x) J i.e. G_il(X) = J_ji g_jk(x) J_kl 'double contravariant piola' mapping for g: G(X) = det(J)^2 K g(x) K^T i.e. G_il(X)=(detJ)^2 K_ij g_jk K_lk """ # meg: Various mappings must be handled both here and in # interpolate_vertex_values. Could this be abstracted out? values = L.Symbol("vals") if mapping == "affine": return [values[offset]] elif mapping == "contravariant piola": # Map each component from physical to reference using inverse # contravariant piola detJ = L.Symbol("detJ") K = L.Symbol("K") K = L.FlattenedArray(K, dims=(tdim, gdim)) w = [] for i in range(tdim): inner = 0.0 for j in range(gdim): inner += values[j + offset]*K[i, j] w.append(inner*detJ) return w elif mapping == "covariant piola": # Map each component from physical to reference using inverse # covariant piola detJ = L.Symbol("detJ") J = L.Symbol("J") J = L.FlattenedArray(J, dims=(gdim, tdim)) w = [] for i in range(tdim): inner = 0.0 for j in range(gdim): inner += values[j + offset]*J[j, i] w.append(inner) return w elif mapping == "double covariant piola": # physical to reference pullback as a covariant 2-tensor w = [] J = L.Symbol("J") J = L.FlattenedArray(J, dims=(gdim, tdim)) for i in range(tdim): for l in range(tdim): inner = 0.0 for k in range(gdim): for j in range(gdim): inner += J[j, i] * values[j*tdim + k + offset] * J[k, l] w.append(inner) return w elif mapping == "double contravariant piola": # physical to reference using double contravariant piola w = [] K = L.Symbol("K") K = L.FlattenedArray(K, dims=(tdim, gdim)) detJ = L.Symbol("detJ") for i in range(tdim): for l in range(tdim): inner = 0.0 for k in range(gdim): for j in range(gdim): inner += K[i, j] * values[j*tdim + k + offset] * K[l, k] w.append(inner*detJ*detJ) return w else: raise Exception("The mapping (%s) is not allowed" % mapping) def _generate_body(L, i, dof, mapping, gdim, tdim, cell_shape, offset=0): "Generate code for a single dof." # EnrichedElement is handled by having [None, ..., None] dual basis if not dof: msg = "evaluate_dof(s) for enriched element not implemented." return ([generate_error(L, msg, True)], 0.0) points = list(dof.keys()) # Generate different code if multiple points. (Otherwise ffc # compile time blows up.) 
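    # For a single point the dof body is generated inline further down:
    # map the reference point X to y = F_K(X) using the degree-1 vertex
    # basis from reference_to_physical_map, call f.evaluate(vals, y, c),
    # pull the values back to the reference element via _change_variables,
    # and contract the result with the dof weights.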
if len(points) > 1: return _generate_multiple_points_body(L, i, dof, mapping, gdim, tdim, offset) # Get weights for mapping reference point to physical x = points[0] w = reference_to_physical_map(cell_shape)(x) # Map point onto physical element: y = F_K(x) code = [] y = L.Symbol("y") coordinate_dofs = L.Symbol("coordinate_dofs") vals = L.Symbol("vals") c = L.Symbol("c") for j in range(gdim): yy = 0.0 for k in range(len(w)): yy += w[k]*coordinate_dofs[k*gdim + j] code += [L.Assign(y[j], yy)] # Evaluate function at physical point code += [L.Call("f.evaluate", (vals, y, c))] # Map function values to the reference element F = _change_variables(L, mapping, gdim, tdim, offset) # Simple affine functions deserve special case: if len(F) == 1: return (code, dof[x][0][0]*F[0]) # Flatten multi-indices (index_map, _) = build_component_numbering([tdim] * len(dof[x][0][1]), ()) # Take inner product between components and weights value = 0.0 for (w, k) in dof[x]: value += w*F[index_map[k]] # Return eval code and value return (code, value) def _generate_multiple_points_body(L, i, dof, mapping, gdim, tdim, offset=0): "Generate c++ for-loop for multiple points (integral bodies)" result = L.Symbol("result") code = [L.Assign(result, 0.0)] points = list(dof.keys()) n = len(points) # Get number of tokens per point tokens = [dof[x] for x in points] len_tokens = pick_first([len(t) for t in tokens]) # Declare points # points = format["list"]([format["list"](x) for x in points]) X_i = L.Symbol("X_%d" % i) code += [L.ArrayDecl("double", X_i, [n, tdim], points)] # Declare components components = [[c[0] for (w, c) in token] for token in tokens] # components = format["list"]([format["list"](c) for c in components]) D_i = L.Symbol("D_%d" % i) code += [L.ArrayDecl("int", D_i, [n, len_tokens], components)] # Declare weights weights = [[w for (w, c) in token] for token in tokens] # weights = format["list"]([format["list"](w) for w in weights]) W_i = L.Symbol("W_%d" % i) code += [L.ArrayDecl("double", W_i, [n, len_tokens], weights)] # Declare copy variable: copy_i = L.Symbol("copy_%d" % i) code += [L.ArrayDecl("double", copy_i, tdim)] # Add loop over points code += [L.Comment("Loop over points")] # Map the points from the reference onto the physical element r = L.Symbol("r") w0 = L.Symbol("w0") w1 = L.Symbol("w1") w2 = L.Symbol("w2") w3 = L.Symbol("w3") y = L.Symbol("y") coordinate_dofs = L.Symbol("coordinate_dofs") if tdim == 1: lines_r = [L.Comment("Evaluate basis functions for affine mapping"), L.VariableDecl("const double", w0, 1 - X_i[r][0]), L.VariableDecl("const double", w1, X_i[r][0]), L.Comment("Compute affine mapping y = F(X)")] for j in range(gdim): lines_r += [L.Assign(y[j], w0*coordinate_dofs[j] + w1*coordinate_dofs[j + gdim])] elif tdim == 2: lines_r = [L.Comment("Evaluate basis functions for affine mapping"), L.VariableDecl("const double", w0, 1 - X_i[r][0] - X_i[r][1]), L.VariableDecl("const double", w1, X_i[r][0]), L.VariableDecl("const double", w2, X_i[r][1]), L.Comment("Compute affine mapping y = F(X)")] for j in range(gdim): lines_r += [L.Assign(y[j], w0*coordinate_dofs[j] + w1*coordinate_dofs[j + gdim] + w2*coordinate_dofs[j + 2*gdim])] elif tdim == 3: lines_r = [L.Comment("Evaluate basis functions for affine mapping"), L.VariableDecl("const double", w0, 1 - X_i[r][0] - X_i[r][1] - X_i[r][2]), L.VariableDecl("const double", w1, X_i[r][0]), L.VariableDecl("const double", w2, X_i[r][1]), L.VariableDecl("const double", w3, X_i[r][2]), L.Comment("Compute affine mapping y = F(X)")] for j in range(gdim): 
lines_r += [L.Assign(y[j], w0*coordinate_dofs[j] + w1*coordinate_dofs[j + gdim] + w2*coordinate_dofs[j + 2*gdim] + w3*coordinate_dofs[j + 3*gdim])] # Evaluate function at physical point lines_r += [L.Comment("Evaluate function at physical point")] y = L.Symbol("y") vals = L.Symbol("vals") c = L.Symbol("c") lines_r += [L.Call("f.evaluate",(vals, y, c))] # Map function values to the reference element lines_r += [L.Comment("Map function to reference element")] F = _change_variables(L, mapping, gdim, tdim, offset) lines_r += [L.Assign(copy_i[k], F_k) for (k, F_k) in enumerate(F)] # Add loop over directional components lines_r += [L.Comment("Loop over directions")] s = L.Symbol("s") lines_r += [L.ForRange(s, 0, len_tokens, index_type=index_type, body=[L.AssignAdd(result, copy_i[D_i[r, s]] * W_i[r, s])])] # Generate loop over r and add to code. code += [L.ForRange(r, 0, n, index_type=index_type, body=lines_r)] return (code, result) def generate_evaluate_dof(L, ir): "Generate code for evaluate_dof." gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] cell_shape = ir["cell_shape"] # Enriched element, no dofs defined if not any(ir["dofs"]): code = [] else: # Declare variable for storing the result and physical coordinates code = [L.Comment("Declare variables for result of evaluation")] vals = L.Symbol("vals") code += [L.ArrayDecl("double", vals, ir["physical_value_size"])] code += [L.Comment("Declare variable for physical coordinates")] y = L.Symbol("y") code += [L.ArrayDecl("double", y, gdim)] # Check whether Jacobians are necessary. needs_inverse_jacobian = any(["contravariant piola" in m for m in ir["mappings"]]) needs_jacobian = any(["covariant piola" in m for m in ir["mappings"]]) # Intermediate variable needed for multiple point dofs needs_temporary = any(dof is not None and len(dof) > 1 for dof in ir["dofs"]) if needs_temporary: result = L.Symbol("result") code += [L.VariableDecl("double", result)] if needs_jacobian or needs_inverse_jacobian: code += jacobian(L, gdim, tdim, cell_shape) if needs_inverse_jacobian: code += inverse_jacobian(L, gdim, tdim, cell_shape) if tdim != gdim : code += orientation(L) # Extract variables mappings = ir["mappings"] offsets = ir["physical_offsets"] # Generate bodies for each degree of freedom cases = [] for (i, dof) in enumerate(ir["dofs"]): c, r = _generate_body(L, i, dof, mappings[i], gdim, tdim, cell_shape, offsets[i]) c += [L.Return(r)] cases.append((i, c)) code += [L.Switch(L.Symbol("i"), cases)] code += [L.Return(0.0)] return code def generate_evaluate_dofs(L, ir): "Generate code for evaluate_dofs." # FIXME: consolidate with evaluate_dofs # FIXME: replace gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] cell_shape = ir["cell_shape"] # Enriched element, no dofs defined if not any(ir["dofs"]): code = [] else: # Declare variable for storing the result and physical coordinates code = [L.Comment("Declare variables for result of evaluation")] vals = L.Symbol("vals") code += [L.ArrayDecl("double", vals, ir["physical_value_size"])] code += [L.Comment("Declare variable for physical coordinates")] y = L.Symbol("y") code += [L.ArrayDecl("double", y, gdim)] # Check whether Jacobians are necessary. 
needs_inverse_jacobian = any(["contravariant piola" in m for m in ir["mappings"]]) needs_jacobian = any(["covariant piola" in m for m in ir["mappings"]]) # Intermediate variable needed for multiple point dofs needs_temporary = any(dof is not None and len(dof) > 1 for dof in ir["dofs"]) if needs_temporary: result = L.Symbol("result") code += [L.VariableDecl("double", result)] if needs_jacobian or needs_inverse_jacobian: code += jacobian(L, gdim, tdim, cell_shape) if needs_inverse_jacobian: code += inverse_jacobian(L, gdim, tdim, cell_shape) if tdim != gdim : code += orientation(L) # Extract variables mappings = ir["mappings"] offsets = ir["physical_offsets"] # Generate bodies for each degree of freedom values = L.Symbol("values") for (i, dof) in enumerate(ir["dofs"]): c, r = _generate_body(L, i, dof, mappings[i], gdim, tdim, cell_shape, offsets[i]) code += c code += [L.Assign(values[i], r)] return code ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/finite_element.py000066400000000000000000001076301346026536000234650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg and Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . # Note: Much of the code in this file is a direct translation # from the old implementation in FFC, although some improvements # have been made to the generated code. 
from collections import defaultdict import numpy from ufl import product from ffc.uflacs.backends.ufc.generator import ufc_generator from ffc.uflacs.backends.ufc.utils import generate_return_new_switch, generate_return_int_switch, generate_error from ffc.uflacs.elementtables import clamp_table_small_numbers from ffc.uflacs.backends.ufc.evaluatebasis import generate_evaluate_reference_basis from ffc.uflacs.backends.ufc.evaluatebasis import _generate_compute_basisvalues, tabulate_coefficients from ffc.uflacs.backends.ufc.evaluatebasisderivatives import generate_evaluate_basis_derivatives_all from ffc.uflacs.backends.ufc.evaluatebasisderivatives import generate_evaluate_basis_derivatives from ffc.uflacs.backends.ufc.evalderivs import generate_evaluate_reference_basis_derivatives from ffc.uflacs.backends.ufc.evalderivs import _generate_combinations from ffc.uflacs.backends.ufc.evaluatedof import generate_evaluate_dof, generate_evaluate_dofs, reference_to_physical_map from ffc.uflacs.backends.ufc.jacobian import jacobian, inverse_jacobian, orientation, fiat_coordinate_mapping, _mapping_transform index_type = "std::size_t" def compute_basis_values(L, data, dof_data): basisvalues = L.Symbol("basisvalues") Y = L.Symbol("Y") element_cellname = data["cellname"] embedded_degree = dof_data["embedded_degree"] num_members = dof_data["num_expansion_members"] return _generate_compute_basisvalues(L, basisvalues, Y, element_cellname, embedded_degree, num_members) def compute_values(L, data, dof_data): """This function computes the value of the basisfunction as the dot product of the coefficients and basisvalues.""" # Initialise return code. code = [L.Comment("Compute value(s)")] # Get dof data. num_components = dof_data["num_components"] physical_offset = dof_data["physical_offset"] basisvalues = L.Symbol("basisvalues") values = L.Symbol("values") r = L.Symbol("r") lines = [] if data["reference_value_size"] != 1: # Loop number of components. for i in range(num_components): coefficients = L.Symbol("coefficients%d" % i) lines += [L.AssignAdd(values[i + physical_offset], coefficients[r]*basisvalues[r])] else: coefficients = L.Symbol("coefficients0") lines = [L.AssignAdd(L.Dereference(values), coefficients[r]*basisvalues[r])] # Get number of members of the expansion set and generate loop. num_mem = dof_data["num_expansion_members"] code += [L.ForRange(r, 0, num_mem, index_type=index_type, body=lines)] code += _mapping_transform(L, data, dof_data, values, physical_offset) return code def generate_element_mapping(mapping, i, num_reference_components, tdim, gdim, J, detJ, K): # Select transformation to apply if mapping == "affine": assert num_reference_components == 1 num_physical_components = 1 M_scale = 1 M_row = [1] # M_row[0] == 1 elif mapping == "contravariant piola": assert num_reference_components == tdim num_physical_components = gdim M_scale = 1.0 / detJ M_row = [J[i, jj] for jj in range(tdim)] elif mapping == "covariant piola": assert num_reference_components == tdim num_physical_components = gdim M_scale = 1.0 M_row = [K[jj, i] for jj in range(tdim)] elif mapping == "double covariant piola": assert num_reference_components == tdim**2 num_physical_components = gdim**2 # g_il = K_ji G_jk K_kl = K_ji K_kl G_jk i0 = i // tdim # i in the line above i1 = i % tdim # l ... 
M_scale = 1.0 M_row = [K[jj,i0]*K[kk,i1] for jj in range(tdim) for kk in range(tdim)] elif mapping == "double contravariant piola": assert num_reference_components == tdim**2 num_physical_components = gdim**2 # g_il = (det J)^(-2) Jij G_jk Jlk = (det J)^(-2) Jij Jlk G_jk i0 = i // tdim # i in the line above i1 = i % tdim # l ... M_scale = 1.0 / (detJ*detJ) M_row = [J[i0,jj]*J[i1,kk] for jj in range(tdim) for kk in range(tdim)] else: error("Unknown mapping: %s" % mapping) return M_scale, M_row, num_physical_components class ufc_finite_element(ufc_generator): "Each function maps to a keyword in the template. See documentation of ufc_generator." def __init__(self): ufc_generator.__init__(self, "finite_element") def cell_shape(self, L, cell_shape): return L.Return(L.Symbol("ufc::shape::" + cell_shape)) def topological_dimension(self, L, topological_dimension): return L.Return(topological_dimension) def geometric_dimension(self, L, geometric_dimension): return L.Return(geometric_dimension) def space_dimension(self, L, space_dimension): return L.Return(space_dimension) def value_rank(self, L, value_shape): return L.Return(len(value_shape)) def value_dimension(self, L, value_shape): return generate_return_int_switch(L, "i", value_shape, 1) def value_size(self, L, value_shape): return L.Return(product(value_shape)) def reference_value_rank(self, L, reference_value_shape): return L.Return(len(reference_value_shape)) def reference_value_dimension(self, L, reference_value_shape): return generate_return_int_switch(L, "i", reference_value_shape, 1) def reference_value_size(self, L, reference_value_shape): return L.Return(product(reference_value_shape)) def degree(self, L, degree): return L.Return(degree) def family(self, L, family): return L.Return(L.LiteralString(family)) def num_sub_elements(self, L, num_sub_elements): return L.Return(num_sub_elements) def create_sub_element(self, L, ir): classnames = ir["create_sub_element"] return generate_return_new_switch(L, "i", classnames, factory=ir["jit"]) def evaluate_basis(self, L, ir, parameters): data = ir["evaluate_basis"] # FIXME: does this make sense? if not data: msg = "evaluate_basis is not defined for this element" return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Handle unsupported elements. if isinstance(data, str): msg = "evaluate_basis: %s" % data return [generate_error(L, msg, parameters["convert_exceptions_to_warnings"])] # Get the element cell name and geometric dimension. element_cellname = data["cellname"] gdim = data["geometric_dimension"] tdim = data["topological_dimension"] # Generate run time code to evaluate an element basisfunction # at an arbitrary point. The value(s) of the basisfunction # is/are computed as in FIAT as the dot product of the # coefficients (computed at compile time) and basisvalues # which are dependent on the coordinate and thus have to be # computed at run time. # The function should work for all elements supported by FIAT, # but it remains untested for tensor valued elements. # Get code snippets for Jacobian, Inverse of Jacobian and # mapping of coordinates from physical element to the FIAT # reference element. 
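    # The generated evaluate_basis body roughly does the following:
    #   1) compute J, detJ and K from coordinate_dofs (or, when a
    #      coordinate mapping 'cm' is supplied, let
    #      cm->compute_reference_geometry fill in X, J, detJ and K),
    #   2) map the physical point x to the reference point X,
    #   3) call evaluate_reference_basis to tabulate reference values at X,
    #   4) call transform_reference_basis_derivatives with derivative
    #      order 0 to push the values forward to the physical element and
    #      copy out the values for basis function i.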
cm = L.Symbol("cm") X = L.Symbol("X") code = [L.ArrayDecl("double", X, (tdim), values=0)] J = L.Symbol("J") code += [L.ArrayDecl("double", J, (gdim*tdim,))] detJ = L.Symbol("detJ") code += [L.VariableDecl("double", detJ)] K = L.Symbol("K") code += [L.ArrayDecl("double", K, (gdim*tdim,))] x = L.Symbol("x") coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") no_cm_code = [ L.Call("compute_jacobian_"+element_cellname+"_"+str(gdim)+"d", (J, coordinate_dofs)), L.Call("compute_jacobian_inverse_"+element_cellname+"_"+str(gdim)+"d", (K, detJ, J))] if data["needs_oriented"] and tdim != gdim: no_cm_code += orientation(L) if any((d["embedded_degree"] > 0) for d in data["dofs_data"]): k = L.Symbol("k") Y = L.Symbol("Y") no_cm_code += fiat_coordinate_mapping(L, element_cellname, gdim) if element_cellname in ('interval', 'triangle', 'tetrahedron'): no_cm_code += [L.Comment("Map to FFC reference coordinate"), L.ForRange(k, 0, tdim, index_type=index_type, body=[L.Assign(X[k], (Y[k] + 1.0)/2.0)])] else: no_cm_code += [L.ForRange(k, 0, tdim, index_type=index_type, body=[L.Assign(X[k], Y[k])])] code += [L.If(cm, L.Call("cm->compute_reference_geometry", (X, J, L.AddressOf(detJ), K, 1, x, coordinate_dofs, cell_orientation))), L.Else(no_cm_code)] reference_value_size = data["reference_value_size"] num_dofs = len(data["dofs_data"]) ref_values = L.Symbol("ref_values") code += [L.Comment("Evaluate basis on reference element"), L.ArrayDecl("double", ref_values, num_dofs*reference_value_size), L.Call("evaluate_reference_basis",(ref_values, 1, X))] physical_value_size = data["physical_value_size"] physical_values = L.Symbol("physical_values") i = L.Symbol("i") k = L.Symbol("k") values = L.Symbol("values") code += [L.Comment("Push forward"), L.ArrayDecl("double", physical_values, num_dofs*physical_value_size), L.Call("transform_reference_basis_derivatives",(physical_values, 0, 1, ref_values, X, J, L.AddressOf(detJ), K, cell_orientation)), L.ForRange(k, 0, physical_value_size, index_type=index_type, body=[ L.Assign(values[k], physical_values[physical_value_size*i + k]) ])] return code def evaluate_basis_all(self, L, ir, parameters): data=ir["evaluate_basis"] # Handle unsupported elements. if isinstance(data, str): msg = "evaluate_basis_all: %s" % data return [generate_error(L, msg, parameters["convert_exceptions_to_warnings"])] physical_value_size = data["physical_value_size"] space_dimension = data["space_dimension"] x = L.Symbol("x") coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") values = L.Symbol("values") # Special case where space dimension is one (constant elements). 
if space_dimension == 1: code = [L.Comment("Element is constant, calling evaluate_basis."), L.Call("evaluate_basis", (0, values, x, coordinate_dofs, cell_orientation))] return code r = L.Symbol("r") dof_values = L.Symbol("dof_values") if physical_value_size == 1: code = [ L.Comment("Helper variable to hold value of a single dof."), L.VariableDecl("double", dof_values, 0.0), L.Comment("Loop dofs and call evaluate_basis"), L.ForRange(r, 0, space_dimension, index_type=index_type, body=[L.Call("evaluate_basis", (r, L.AddressOf(dof_values), x, coordinate_dofs, cell_orientation)), L.Assign(values[r], dof_values)] ) ] else: s = L.Symbol("s") code = [L.Comment("Helper variable to hold values of a single dof."), L.ArrayDecl("double", dof_values, physical_value_size, 0.0), L.Comment("Loop dofs and call evaluate_basis"), L.ForRange(r, 0, space_dimension, index_type=index_type, body=[L.Call("evaluate_basis", (r, dof_values, x, coordinate_dofs, cell_orientation)), L.ForRange(s, 0, physical_value_size, index_type=index_type, body=[L.Assign(values[r*physical_value_size+s], dof_values[s])]) ] ) ] return code def evaluate_basis_derivatives(self, L, ir, parameters): # FIXME: Get rid of this return generate_evaluate_basis_derivatives(L, ir["evaluate_basis"]) def evaluate_basis_derivatives_all(self, L, ir, parameters): return generate_evaluate_basis_derivatives_all(L, ir["evaluate_basis"]) """ // Legacy version: evaluate_basis_derivatives_all(std::size_t n, double * values, const double * x, const double * coordinate_dofs, int cell_orientation) // Suggestion for new version: new_evaluate_basis_derivatives(double * values, std::size_t order, std::size_t num_points, const double * x, const double * coordinate_dofs, int cell_orientation, const ufc::coordinate_mapping * cm) """ # TODO: This is a refactoring step to allow rewriting code # generation to use coordinate_mapping in one stage, before # making it available as an argument from dolfin in the next stage. 
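    # As an intermediate step, the code below builds a local affine
    # coordinate mapping object and, for the single evaluation point,
    # computes X, J, detJ and K, evaluates the reference basis
    # derivatives, and transforms them back to the physical element.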
affine_coordinate_mapping_classname = ir["affine_coordinate_mapping_classname"] # Output arguments: values = L.Symbol("values") # Input arguments: #order = L.Symbol("order") order = L.Symbol("n") x = L.Symbol("x") coordinate_dofs = L.Symbol("coordinate_dofs") cell_orientation = L.Symbol("cell_orientation") # Internal variables: #num_points = L.Symbol("num_points") num_points = 1 # Always 1 in legacy API reference_values = L.Symbol("reference_values") X = L.Symbol("X") J = L.Symbol("J") detJ = L.Symbol("detJ") K = L.Symbol("K") ip = L.Symbol("ip") gdim = ir["geometric_dimension"] tdim = ir["topological_dimension"] code = [ # Create local affine coordinate mapping object # TODO: Get this as input instead to support non-affine L.VariableDecl(affine_coordinate_mapping_classname, "cm"), L.ForRange(ip, 0, num_points, index_type=index_type, body=[ L.ArrayDecl("double", X, (tdim,)), L.ArrayDecl("double", J, (gdim*tdim,)), L.ArrayDecl("double", detJ, (1,)), L.ArrayDecl("double", K, (tdim*gdim,)), L.Call("cm.compute_reference_geometry", (X, J, detJ, K, num_points, x, coordinate_dofs, cell_orientation)), L.Call("evaluate_reference_basis_derivatives", (reference_values, order, num_points, X)), L.Call("transform_reference_basis_derivatives", (values, order, num_points, reference_values, X, J, detJ, K, cell_orientation)), ]) ] return code def evaluate_dof(self, L, ir, parameters): return generate_evaluate_dof(L, ir["evaluate_dof"]) def evaluate_dofs(self, L, ir, parameters): """Generate code for evaluate_dofs.""" """ - evaluate_dof needs to be split into invert_mapping + evaluate_dof or similar? f = M fhat; nu(f) = nu(M fhat) = nuhat(M^-1 f) = sum_i w_i M^-1 f(x_i) // Get fixed set of points on reference element num_points = element->num_dof_evaluation_points(); double X[num_points*tdim]; element->tabulate_dof_evaluation_points(X); // Compute geometry in these points domain->compute_geometry(reference_points, num_point, X, J, detJ, K, coordinate_dofs, cell_orientation); // Computed by dolfin for ip fvalues[ip][:] = f.evaluate(point[ip])[:]; // Finally: nu_j(f) = sum_component sum_ip weights[j][ip][component] fvalues[ip][component] element->evaluate_dofs(fdofs, fvalues, J, detJ, K) """ # FIXME: translate into reference version return generate_evaluate_dofs(L, ir["evaluate_dof"]) def interpolate_vertex_values(self, L, ir, parameters): irdata = ir["interpolate_vertex_values"] # Raise error if interpolate_vertex_values is ill-defined if not irdata: msg = "interpolate_vertex_values is not defined for this element" return [generate_error(L, msg, parameters["convert_exceptions_to_warnings"])] # Handle unsupported elements. 
if isinstance(irdata, str): msg = "interpolate_vertex_values: %s" % irdata return [generate_error(L, msg, parameters["convert_exceptions_to_warnings"])] # Add code for Jacobian if necessary code = [] gdim = irdata["geometric_dimension"] tdim = irdata["topological_dimension"] cell_shape = ir["cell_shape"] if irdata["needs_jacobian"]: code += jacobian(L, gdim, tdim, cell_shape) code += inverse_jacobian(L, gdim, tdim, cell_shape) if irdata["needs_oriented"] and tdim != gdim: code += orientation(L) # Compute total value dimension for (mixed) element total_dim = irdata["physical_value_size"] # Generate code for each element value_offset = 0 space_offset = 0 for data in irdata["element_data"]: # Add vertex interpolation for this element code += [L.Comment("Evaluate function and change variables")] # Extract vertex values for all basis functions vertex_values = data["basis_values"] value_size = data["physical_value_size"] space_dim = data["space_dim"] mapping = data["mapping"] J = L.Symbol("J") J = L.FlattenedArray(J, dims=(gdim, tdim)) detJ = L.Symbol("detJ") K = L.Symbol("K") K = L.FlattenedArray(K, dims=(tdim, gdim)) # Create code for each value dimension: for k in range(value_size): # Create code for each vertex x_j for (j, values_at_vertex) in enumerate(vertex_values): if value_size == 1: values_at_vertex = [values_at_vertex] values = clamp_table_small_numbers(values_at_vertex) # Map basis functions using appropriate mapping # FIXME: sort out all non-affine mappings and make into a function # components = change_of_variables(values_at_vertex, k) w = [] if mapping == 'affine': w = values[k] elif mapping == 'contravariant piola': for index in range(space_dim): w += [sum(J[k, p]*values[p][index] for p in range(tdim))/detJ] elif mapping == 'covariant piola': for index in range(space_dim): w += [sum(K[p, k]*values[p][index] for p in range(tdim))] elif mapping == 'double covariant piola': for index in range(space_dim): w += [sum(K[p, k//tdim]*values[p][q][index]*K[q, k % tdim] for q in range(tdim) for p in range(tdim))] elif mapping == 'double contravariant piola': for index in range(space_dim): w += [sum(J[k//tdim, p]*values[p][q][index]*J[k % tdim, q] for q in range(tdim) for p in range(tdim))/(detJ*detJ)] else: error("Unknown mapping: %s" % mapping) # Contract coefficients and basis functions dof_values = L.Symbol("dof_values") dof_list = [dof_values[i + space_offset] for i in range(space_dim)] value = sum(p*q for (p, q) in zip(dof_list, w)) # Assign value to correct vertex index = j * total_dim + (k + value_offset) v_values = L.Symbol("vertex_values") code += [L.Assign(v_values[index], value)] # Update offsets for value- and space dimension value_offset += data["physical_value_size"] space_offset += data["space_dim"] return code def tabulate_dof_coordinates(self, L, ir, parameters): ir = ir["tabulate_dof_coordinates"] # Raise error if tabulate_dof_coordinates is ill-defined if not ir: msg = "tabulate_dof_coordinates is not defined for this element" return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Extract coordinates and cell dimension gdim = ir["gdim"] tdim = ir["tdim"] points = ir["points"] # Extract cellshape cell_shape = ir["cell_shape"] # Output argument dof_coordinates = L.FlattenedArray(L.Symbol("dof_coordinates"), dims=(len(points), gdim)) # Input argument coordinate_dofs = L.Symbol("coordinate_dofs") # Loop indices i = L.Symbol("i") k = L.Symbol("k") ip = L.Symbol("ip") # Basis symbol phi = L.Symbol("phi") # TODO: Get rid of all places that use 
reference_to_physical_map, it is restricted to a basis of degree 1 # Create code for evaluating coordinate mapping num_scalar_xdofs = _num_vertices(cell_shape) cg1_basis = reference_to_physical_map(cell_shape) phi_values = numpy.asarray([phi_comp for X in points for phi_comp in cg1_basis(X)]) assert len(phi_values) == len(points) * num_scalar_xdofs # TODO: Use precision parameter here phi_values = clamp_table_small_numbers(phi_values) code = [ L.Assign( dof_coordinates[ip][i], sum(phi_values[ip*num_scalar_xdofs + k] * coordinate_dofs[gdim*k + i] for k in range(num_scalar_xdofs)) ) for ip in range(len(points)) for i in range(gdim) ] # FIXME: This code assumes an affine coordinate field. # To get around that limitation, make this function take another argument # const ufc::coordinate_mapping * cm # and generate code like this: """ index_type X[tdim*num_dofs]; tabulate_dof_coordinates(X); cm->compute_physical_coordinates(x, X, coordinate_dofs); """ return code def tabulate_reference_dof_coordinates(self, L, ir, parameters): # TODO: Change signature to avoid copy? E.g. # virtual const std::vector & tabulate_reference_dof_coordinates() const = 0; # See integral::enabled_coefficients for example # TODO: ensure points is a numpy array, # get tdim from points.shape[1], # place points in ir directly instead of the subdict ir = ir["tabulate_dof_coordinates"] # Raise error if tabulate_reference_dof_coordinates is ill-defined if not ir: msg = "tabulate_reference_dof_coordinates is not defined for this element" return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Extract coordinates and cell dimension tdim = ir["tdim"] points = ir["points"] # Output argument reference_dof_coordinates = L.Symbol("reference_dof_coordinates") # Reference coordinates dof_X = L.Symbol("dof_X") dof_X_values = [X[jj] for X in points for jj in range(tdim)] decl = L.ArrayDecl("static const double", dof_X, (len(points) * tdim,), values=dof_X_values) copy = L.MemCopy(dof_X, reference_dof_coordinates, tdim*len(points)) code = [decl, copy] return code def evaluate_reference_basis(self, L, ir, parameters): data = ir["evaluate_basis"] if isinstance(data, str): msg = "evaluate_reference_basis: %s" % data return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) return generate_evaluate_reference_basis(L, data, parameters) def evaluate_reference_basis_derivatives(self, L, ir, parameters): data = ir["evaluate_basis"] if isinstance(data, str): msg = "evaluate_reference_basis_derivatives: %s" % data return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) return generate_evaluate_reference_basis_derivatives(L, data, parameters) def transform_reference_basis_derivatives(self, L, ir, parameters): data = ir["evaluate_basis"] if isinstance(data, str): msg = "transform_reference_basis_derivatives: %s" % data return generate_error(L, msg, parameters["convert_exceptions_to_warnings"]) # Get some known dimensions #element_cellname = data["cellname"] gdim = data["geometric_dimension"] tdim = data["topological_dimension"] max_degree = data["max_degree"] reference_value_size = data["reference_value_size"] physical_value_size = data["physical_value_size"] num_dofs = len(data["dofs_data"]) max_g_d = gdim**max_degree max_t_d = tdim**max_degree # Output arguments values_symbol = L.Symbol("values") # Input arguments order = L.Symbol("order") num_points = L.Symbol("num_points") # FIXME: Currently assuming 1 point? 
reference_values = L.Symbol("reference_values") J = L.Symbol("J") detJ = L.Symbol("detJ") K = L.Symbol("K") # Internal variables transform = L.Symbol("transform") # Indices, I've tried to use these for a consistent purpose ip = L.Symbol("ip") # point i = L.Symbol("i") # physical component j = L.Symbol("j") # reference component k = L.Symbol("k") # order r = L.Symbol("r") # physical derivative number s = L.Symbol("s") # reference derivative number d = L.Symbol("d") # dof combinations_code = [] if max_degree == 0: # Don't need combinations num_derivatives_t = 1 # TODO: I think this is the right thing to do to make this still work for order=0? num_derivatives_g = 1 elif tdim == gdim: num_derivatives_t = L.Symbol("num_derivatives") num_derivatives_g = num_derivatives_t combinations_code += [ L.VariableDecl("const " + index_type, num_derivatives_t, L.Call("std::pow", (tdim, order))), ] # Add array declarations of combinations combinations_code_t, combinations_t = _generate_combinations(L, tdim, max_degree, order, num_derivatives_t) combinations_code += combinations_code_t combinations_g = combinations_t else: num_derivatives_t = L.Symbol("num_derivatives_t") num_derivatives_g = L.Symbol("num_derivatives_g") combinations_code += [ L.VariableDecl("const " + index_type, num_derivatives_t, L.Call("std::pow", (tdim, order))), L.VariableDecl("const " + index_type, num_derivatives_g, L.Call("std::pow", (gdim, order))), ] # Add array declarations of combinations combinations_code_t, combinations_t = _generate_combinations(L, tdim, max_degree, order, num_derivatives_t, suffix="_t") combinations_code_g, combinations_g = _generate_combinations(L, gdim, max_degree, order, num_derivatives_g, suffix="_g") combinations_code += combinations_code_t combinations_code += combinations_code_g # Define expected dimensions of argument arrays J = L.FlattenedArray(J, dims=(num_points, gdim, tdim)) detJ = L.FlattenedArray(detJ, dims=(num_points,)) K = L.FlattenedArray(K, dims=(num_points, tdim, gdim)) values = L.FlattenedArray(values_symbol, dims=(num_points, num_dofs, num_derivatives_g, physical_value_size)) reference_values = L.FlattenedArray(reference_values, dims=(num_points, num_dofs, num_derivatives_t, reference_value_size)) # Generate code to compute the derivative transform matrix transform_matrix_code = [ # Initialize transform matrix to all 1.0 L.ArrayDecl("double", transform, (max_g_d, max_t_d)), L.ForRanges( (r, 0, num_derivatives_g), (s, 0, num_derivatives_t), index_type=index_type, body=L.Assign(transform[r, s], 1.0) ), ] if max_degree > 0: transform_matrix_code += [ # Compute transform matrix entries, each a product of K entries L.ForRanges( (r, 0, num_derivatives_g), (s, 0, num_derivatives_t), (k, 0, order), index_type=index_type, body=L.AssignMul(transform[r, s], K[ip, combinations_t[s, k], combinations_g[r, k]]) ), ] # Initialize values to 0, will be added to inside loops values_init_code = [ L.MemZero(values_symbol, num_points * num_dofs * num_derivatives_g * physical_value_size), ] # Make offsets available in generated code reference_offsets = L.Symbol("reference_offsets") physical_offsets = L.Symbol("physical_offsets") dof_attributes_code = [ L.ArrayDecl("const " + index_type, reference_offsets, (num_dofs,), values=[dof_data["reference_offset"] for dof_data in data["dofs_data"]]), L.ArrayDecl("const " + index_type, physical_offsets, (num_dofs,), values=[dof_data["physical_offset"] for dof_data in data["dofs_data"]]), ] # Build dof lists for each mapping type mapping_dofs = defaultdict(list) for 
idof, dof_data in enumerate(data["dofs_data"]): mapping_dofs[dof_data["mapping"]].append(idof) # Generate code for each mapping type d = L.Symbol("d") transform_apply_code = [] for mapping in sorted(mapping_dofs): # Get list of dofs using this mapping idofs = mapping_dofs[mapping] # Select iteration approach over dofs if idofs == list(range(idofs[0], idofs[-1]+1)): # Contiguous dofrange = (d, idofs[0], idofs[-1]+1) idof = d else: # Stored const array of dof indices idofs_symbol = L.Symbol("%s_dofs" % mapping.replace(" ", "_")) dof_attributes_code += [ L.ArrayDecl("const " + index_type, idofs_symbol, (len(idofs),), values=idofs), ] dofrange = (d, 0, len(idofs)) idof = idofs_symbol[d] # NB! Array access to offsets, these are not Python integers reference_offset = reference_offsets[idof] physical_offset = physical_offsets[idof] # How many components does each basis function with this mapping have? # This should be uniform, i.e. there should be only one element in this set: num_reference_components, = set(data["dofs_data"][i]["num_components"] for i in idofs) M_scale, M_row, num_physical_components = generate_element_mapping( mapping, i, num_reference_components, tdim, gdim, J[ip], detJ[ip], K[ip] ) # transform_apply_body = [ # L.AssignAdd(values[ip, idof, r, physical_offset + k], # transform[r, s] * reference_values[ip, idof, s, reference_offset + k]) # for k in range(num_physical_components) # ] msg = "Using %s transform to map values back to the physical element." % mapping.replace("piola", "Piola") mapped_value = L.Symbol("mapped_value") transform_apply_code += [ L.ForRanges( dofrange, (s, 0, num_derivatives_t), (i, 0, num_physical_components), index_type=index_type, body=[ # Unrolled application of mapping to one physical component, # for affine this automatically reduces to # mapped_value = reference_values[..., reference_offset] L.Comment(msg), L.VariableDecl("const double", mapped_value, M_scale * sum(M_row[jj] * reference_values[ip, idof, s, reference_offset + jj] for jj in range(num_reference_components))), # Apply derivative transformation, for order=0 this reduces to # values[ip,idof,0,physical_offset+i] = transform[0,0]*mapped_value L.Comment("Mapping derivatives back to the physical element"), L.ForRanges( (r, 0, num_derivatives_g), index_type=index_type, body=[ L.AssignAdd(values[ip, idof, r, physical_offset + i], transform[r, s] * mapped_value) ]) ]) ] # Transform for each point point_loop_code = [ L.ForRange(ip, 0, num_points, index_type=index_type, body=( transform_matrix_code + transform_apply_code )) ] # Join code code = ( combinations_code + values_init_code + dof_attributes_code + point_loop_code ) return code def _num_vertices(cell_shape): """Returns number of vertices for a given cell shape.""" num_vertices_dict = {"interval": 2, "triangle": 3, "tetrahedron": 4, "quadrilateral": 4, "hexahedron": 8} return num_vertices_dict[cell_shape] ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/form.py000066400000000000000000000146631346026536000214440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg and Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . # Note: Most of the code in this file is a direct translation from the old implementation in FFC from ffc.classname import make_integral_classname from ffc.uflacs.backends.ufc.generator import ufc_generator, integral_name_templates, ufc_integral_types from ffc.uflacs.backends.ufc.utils import generate_return_new, generate_return_new_switch def create_delegate(integral_type, declname, impl): def _delegate(self, L, ir, parameters): return impl(self, L, ir, parameters, integral_type, declname) _delegate.__doc__ = impl.__doc__ % {"declname": declname, "integral_type": integral_type} return _delegate def add_ufc_form_integral_methods(cls): """This function generates methods on the class it decorates, for each integral name template and for each integral type. This allows implementing e.g. create_###_integrals once in the decorated class as '_create_foo_integrals', and this function will expand that implementation into 'create_cell_integrals', 'create_exterior_facet_integrals', etc. Name templates are taken from 'integral_name_templates' and 'ufc_integral_types'. """ # The dummy name "foo" is chosen for familiarity for ffc developers dummy_integral_type = "foo" for template in integral_name_templates: implname = "_" + (template % (dummy_integral_type,)) impl = getattr(cls, implname) for integral_type in ufc_integral_types: declname = template % (integral_type,) _delegate = create_delegate(integral_type, declname, impl) setattr(cls, declname, _delegate) return cls @add_ufc_form_integral_methods class ufc_form(ufc_generator): """Each function maps to a keyword in the template. See documentation of ufc_generator. The exceptions are functions on the form def _*_foo_*(self, L, ir, parameters, integral_type, declname) which add_ufc_form_integral_methods will duplicate for foo = each integral type. """ def __init__(self): ufc_generator.__init__(self, "form") def num_coefficients(self, L, num_coefficients): return L.Return(num_coefficients) def rank(self, L, rank): return L.Return(rank) def original_coefficient_position(self, L, ir): i = L.Symbol("i") positions = ir["original_coefficient_position"] # Check argument msg = "Invalid original coefficient index." 
if positions: code = [ L.If(L.GE(i, len(positions)), L.Throw("std::runtime_error", msg)) ] # Throwing a lot into the 'typename' string here but # no plans for building a full C++ type system typename = "static const std::vector" position = L.Symbol("position") initializer_list = L.VerbatimExpr("{" + ", ".join(str(i) for i in positions) + "}") code += [ L.VariableDecl(typename, position, value=initializer_list), L.Return(position[i]), ] return code else: code = [ L.Throw("std::runtime_error", msg), L.Return(i) ] return code def create_coordinate_finite_element(self, L, ir): classnames = ir["create_coordinate_finite_element"] assert len(classnames) == 1 return generate_return_new(L, classnames[0], factory=ir["jit"]) def create_coordinate_dofmap(self, L, ir): classnames = ir["create_coordinate_dofmap"] assert len(classnames) == 1 return generate_return_new(L, classnames[0], factory=ir["jit"]) def create_coordinate_mapping(self, L, ir): classnames = ir["create_coordinate_mapping"] assert len(classnames) == 1 # list of length 1 until we support multiple domains return generate_return_new(L, classnames[0], factory=ir["jit"]) def create_finite_element(self, L, ir): i = L.Symbol("i") classnames = ir["create_finite_element"] return generate_return_new_switch(L, i, classnames, factory=ir["jit"]) def create_dofmap(self, L, ir): i = L.Symbol("i") classnames = ir["create_dofmap"] return generate_return_new_switch(L, i, classnames, factory=ir["jit"]) # This group of functions are repeated for each # foo_integral by add_ufc_form_integral_methods: def _max_foo_subdomain_id(self, L, ir, parameters, integral_type, declname): "Return implementation of ufc::form::%(declname)s()." # e.g. max_subdomain_id = ir["max_cell_subdomain_id"] max_subdomain_id = ir[declname] return L.Return(int(max_subdomain_id)) def _has_foo_integrals(self, L, ir, parameters, integral_type, declname): "Return implementation of ufc::form::%(declname)s()." # e.g. has_integrals = ir["has_cell_integrals"] has_integrals = ir[declname] return L.Return(bool(has_integrals)) def _create_foo_integral(self, L, ir, parameters, integral_type, declname): "Return implementation of ufc::form::%(declname)s()." # e.g. subdomain_ids, classnames = ir["create_cell_integral"] subdomain_ids, classnames = ir[declname] subdomain_id = L.Symbol("subdomain_id") return generate_return_new_switch(L, subdomain_id, classnames, subdomain_ids, factory=ir["jit"]) def _create_default_foo_integral(self, L, ir, parameters, integral_type, declname): "Return implementation of ufc::form::%(declname)s()." # e.g. classname = ir["create_default_cell_integral"] classname = ir[declname] if classname is None: return L.Return(L.Null()) else: return generate_return_new(L, classname, factory=ir["jit"]) ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/generator.py000066400000000000000000000222011346026536000224520ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . import re from ffc.log import error, warning import ffc.backends.ufc from ffc.uflacs.language.format_lines import format_indented_lines from ffc.uflacs.backends.ufc.templates import * #__all__ = (["ufc_form", "ufc_dofmap", "ufc_finite_element", "ufc_integral"] # + ["ufc_%s_integral" % integral_type for integral_type in integral_types]) # These are all the integral types directly supported in ufc from ffc.representation import ufc_integral_types # These are the method names in ufc::form that are specialized for each integral type integral_name_templates = ( "max_%s_subdomain_id", "has_%s_integrals", "create_%s_integral", "create_default_%s_integral", ) class ufc_generator(object): """Common functionality for code generators producing ufc classes. The generate function is the driver for generating code for a class. It automatically extracts each template keyword %(foo)s and calls self.foo(...) to define the code snippet for that keyword. The standard arguments to every self.foo(...) function are: - L (the language module reference) - ir (the full ir dict) - parameters (the parameters dict, can be omitted) If the second argument is not named "ir", it must be a valid key in ir, and the value of ir[keyname] is passed instead of the full ir dict. Invalid keynames result in attempts at informative errors, meaning errors will often be caught early when making changes. """ def __init__(self, basename): ufc_templates = ffc.backends.ufc.templates self._header_template = ufc_templates[basename + "_header"] self._implementation_template = ufc_templates[basename + "_implementation"] self._combined_template = ufc_templates[basename + "_combined"] self._jit_header_template = ufc_templates[basename + "_jit_header"] self._jit_implementation_template = ufc_templates[basename + "_jit_implementation"] r = re.compile(r"%\(([a-zA-Z0-9_]*)\)") self._header_keywords = set(r.findall(self._header_template)) self._implementation_keywords = set(r.findall(self._implementation_template)) self._combined_keywords = set(r.findall(self._combined_template)) self._keywords = sorted(self._header_keywords | self._implementation_keywords) # Do some ufc interface template checking, to catch bugs # early when we change the ufc interface templates if set(self._keywords) != set(self._combined_keywords): a = set(self._header_keywords) - set(self._combined_keywords) b = set(self._implementation_keywords) - set(self._combined_keywords) c = set(self._combined_keywords) - set(self._keywords) msg = "Templates do not have matching sets of keywords:" if a: msg += "\n Header template keywords '%s' are not in the combined template." % (sorted(a),) if b: msg += "\n Implementation template keywords '%s' are not in the combined template." % (sorted(b),) if c: msg += "\n Combined template keywords '%s' are not in the header or implementation templates." % (sorted(c),) error(msg) def generate_snippets(self, L, ir, parameters): "Generate code snippets for each keyword found in templates." snippets = {} for kw in self._keywords: handlerstr = "%s.%s" % (self.__class__.__name__, kw) # Check that attribute self. is available if not hasattr(self, kw): error("Missing handler %s." 
% (handlerstr,)) # Call self.(*args) to get value to insert in snippets method = getattr(self, kw) vn = method.__code__.co_varnames[:method.__code__.co_argcount] file_line = "%s:%s" % (method.__code__.co_filename, method.__code__.co_firstlineno) #if handlerstr == "ufc_dofmap.create": # import ipdb; ipdb.set_trace() # Always pass L assert vn[:2] == ("self", "L") vn = vn[2:] args = (L,) # Either pass full ir or extract ir value with keyword given by argument name if vn[0] == "ir": args += (ir,) elif vn[0] in ir: args += (ir[vn[0]],) else: error("Cannot find key '%s' in ir, argument to %s at %s." % (vn[0], handlerstr, file_line)) vn = vn[1:] # Optionally pass parameters if vn == ("parameters",): args += (parameters,) elif vn: error("Invalid argument names %s to %s at %s." % (vn, handlerstr, file_line)) # Call handler value = method(*args) if isinstance(value, list): value = L.StatementList(value) # Indent body and format to str if isinstance(value, L.CStatement): value = L.Indented(value.cs_format(precision=parameters["precision"])) value = format_indented_lines(value) elif not isinstance(value, str): error("Expecting code or string, not %s, returned from handler %s at %s." % (type(value), handlerstr, file_line)) # Store formatted code in snippets dict snippets[kw] = value # Error checking (can detect some bugs early when changing the ufc interface) # Get all attributes of subclass class (skip "_foo") attrs = set(name for name in dir(self) if not (name.startswith("_") or name.startswith("generate"))) # Get all attributes of this base class (skip "_foo" and "generate*") base_attrs = set(name for name in dir(ufc_generator) if not (name.startswith("_") or name.startswith("generate"))) # The template keywords should not contain any names not among the class attributes missing = set(self._keywords) - attrs if missing: warning("*** Missing generator functions:\n%s" % ('\n'.join(map(str, sorted(missing))),)) # The class attributes should not contain any names not among the template keywords # (this is strict, a useful check when changing ufc, but can be dropped) unused = attrs - set(self._keywords) - base_attrs if unused: warning("*** Unused generator functions:\n%s" % ('\n'.join(map(str, sorted(unused))),)) # Return snippets, a dict of code strings return snippets def generate(self, L, ir, parameters=None, snippets=None, jit=False): "Return composition of templates with generated snippets." if snippets is None: snippets = self.generate_snippets(L, ir, parameters) if jit: ht = self._jit_header_template it = self._jit_implementation_template else: ht = self._header_template it = self._implementation_template h = ht % snippets cpp = it % snippets return h, cpp def classname(self, L, ir): "Return classname." return ir["classname"] def members(self, L, ir): "Return empty string. Override in classes that need members." if ir.get("members"): error("Missing generator function.") return "" def constructor(self, L, ir): "Return empty string. Override in classes that need constructor." if ir.get("constructor"): error("Missing generator function.") return L.NoOp() def constructor_arguments(self, L, ir): "Return empty string. Override in classes that need constructor." if ir.get("constructor_arguments"): error("Missing generator function.") return "" def initializer_list(self, L, ir): "Return empty string. Override in classes that need constructor." if ir.get("initializer_list"): error("Missing generator function.") return "" def destructor(self, L, ir): "Return empty string. 
Override in classes that need destructor." if ir.get("destructor"): error("Missing generator function.") return L.NoOp() def signature(self, L, ir): "Default implementation of returning signature string fetched from ir." sig = ir["signature"] return L.Return(L.LiteralString(sig)) def create(self, L, ir): "Default implementation of creating a new object of the same type." classname = ir["classname"] return L.Return(L.New(classname)) ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/generators.py000066400000000000000000000020221346026536000226340ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . from ffc.uflacs.backends.ufc.finite_element import ufc_finite_element from ffc.uflacs.backends.ufc.dofmap import ufc_dofmap from ffc.uflacs.backends.ufc.coordinate_mapping import ufc_coordinate_mapping from ffc.uflacs.backends.ufc.integrals import * from ffc.uflacs.backends.ufc.form import ufc_form ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/integrals.py000066400000000000000000000075261346026536000224710ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . from ffc.uflacs.backends.ufc.generator import ufc_generator, ufc_integral_types from ufl.utils.formatting import dstr # (This is currently not in use, see ffc/codegeneration.py for current status) class ufc_integral(ufc_generator): "Each function maps to a keyword in the template. See documentation of ufc_generator." def __init__(self, integral_type): assert integral_type in ufc_integral_types ufc_generator.__init__(self, integral_type + "_integral") def enabled_coefficients(self, L, ir): enabled_coefficients = ir["enabled_coefficients"] initializer_list = ", ".join("true" if enabled else "false" for enabled in enabled_coefficients) code = L.StatementList([ # Cheating a bit with verbatim: L.VerbatimStatement("static const std::vector enabled({%s});" % initializer_list), L.Return(L.Symbol("enabled")), ]) return code def tabulate_tensor(self, L, ir): # FIXME: This is where the current ffc backend code generation should be injected, # however that will require using pick_representation in here? 
tt = ir["tabulate_tensor"] code = "code generated from %s" % tt return code def tabulate_tensor_comment(self, L, ir): "Generate comment for tabulate_tensor." r = ir["representation"] integrals_metadata = ir["integrals_metadata"] integral_metadata = ir["integral_metadata"] lines = [ "This function was generated using '%s' representation" % r, "with the following integrals metadata:", ""] lines += [(" " + l) for l in dstr(integrals_metadata).split("\n")] lines += [""] for i, metadata in enumerate(integral_metadata): lines += [ "", "and the following integral %d metadata:" % i, "", ] lines += [(" " + l) for l in dstr(metadata).split("\n")][:-1] lines += [""] return [L.Comment(line) for line in lines] class ufc_cell_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "cell") class ufc_exterior_facet_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "exterior_facet") class ufc_interior_facet_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "interior_facet") class ufc_vertex_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "vertex") class ufc_custom_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "custom") def num_cells(self, L, ir): value = ir["num_cells"] return L.Return(L.LiteralInt(value)) class ufc_interface_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "interface") class ufc_overlap_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "overlap") class ufc_cutcell_integral(ufc_integral): def __init__(self): ufc_integral.__init__(self, "cutcell") ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/jacobian.py000066400000000000000000000225031346026536000222370ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Anders Logg and Martin Sandve Alnæs, Chris Richardson # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . # Note: Much of the code in this file is a direct translation # from the old implementation in FFC, although some improvements # have been made to the generated code. 
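# Illustrative sketch (not part of FFC): helpers in this file, like jacobian()
# below, do not emit C++ strings directly; they return lists of statement
# objects built from a language module `L` (L.Comment, L.ArrayDecl, L.Call, ...)
# which are rendered to C++ text later by the generator machinery above.
# The tiny stand-in classes here are hypothetical and only model that pattern;
# they are not the real ffc.uflacs.language objects.
class _Comment(object):
    def __init__(self, text):
        self.text = text
    def render(self):
        return "// " + self.text

class _ArrayDecl(object):
    def __init__(self, typename, name, size):
        self.typename, self.name, self.size = typename, name, size
    def render(self):
        return "%s %s[%d];" % (self.typename, self.name, self.size)

class _Call(object):
    def __init__(self, funcname, args):
        self.funcname, self.args = funcname, args
    def render(self):
        return "%s(%s);" % (self.funcname, ", ".join(self.args))

def _toy_jacobian_snippet(gdim, tdim, cellname):
    # Mirrors the shape of jacobian() below: a comment, a declaration and a
    # call to a (hypothetical) compute_jacobian_* helper in the generated code.
    return [
        _Comment("Compute Jacobian"),
        _ArrayDecl("double", "J", gdim * tdim),
        _Call("compute_jacobian_%s_%dd" % (cellname, gdim), ["J", "coordinate_dofs"]),
    ]

if __name__ == "__main__":
    # Renders three lines of C-like output for a 2D triangle cell.
    print("\n".join(stmt.render() for stmt in _toy_jacobian_snippet(2, 2, "triangle")))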
def jacobian(L, gdim, tdim, element_cellname): J = L.Symbol("J") coordinate_dofs = L.Symbol("coordinate_dofs") code = [L.Comment("Compute Jacobian"), L.ArrayDecl("double", J, (gdim*tdim,)), L.Call("compute_jacobian_"+element_cellname+"_"+str(gdim)+"d", (J, coordinate_dofs))] return code def inverse_jacobian(L, gdim, tdim, element_cellname): K = L.Symbol("K") J = L.Symbol("J") detJ = L.Symbol("detJ") code = [L.Comment("Compute Inverse Jacobian and determinant"), L.ArrayDecl("double", K, (gdim*tdim,)), L.VariableDecl("double", detJ), L.Call("compute_jacobian_inverse_"+element_cellname+"_"+str(gdim)+"d",(K, detJ, J))] return code def orientation(L): detJ = L.Symbol("detJ") cell_orientation = L.Symbol("cell_orientation") code = [L.Comment("Check orientation"), L.If(L.EQ(cell_orientation, -1), [L.Throw("std::runtime_error", "cell orientation must be defined (not -1)")]), L.Comment("(If cell_orientation == 1 = down, multiply det(J) by -1)"), L.ElseIf(L.EQ(cell_orientation, 1), [L.AssignMul(detJ , -1)])] return code def fiat_coordinate_mapping(L, cellname, gdim, ref_coord_symbol="Y"): # Code used in evaluatebasis[|derivatives] x = L.Symbol("x") Y = L.Symbol(ref_coord_symbol) coordinate_dofs = L.Symbol("coordinate_dofs") if cellname == "interval": J = L.Symbol("J") detJ = L.Symbol("detJ") if gdim == 1: code = [L.Comment("Get coordinates and map to the reference (FIAT) element"), L.ArrayDecl("double", Y, 1, [(2*x[0] - coordinate_dofs[0] - coordinate_dofs[1])/J[0]])] elif gdim == 2: code = [L.Comment("Get coordinates and map to the reference (FIAT) element"), L.ArrayDecl("double", Y, 1, [2*(L.Sqrt(L.Call("std::pow", (x[0] - coordinate_dofs[0], 2)) + L.Call("std::pow", (x[1] - coordinate_dofs[1], 2))))/detJ - 1.0])] elif gdim == 3: code = [L.Comment("Get coordinates and map to the reference (FIAT) element"), L.ArrayDecl("double", Y, 1, [2*(L.Sqrt(L.Call("std::pow", (x[0] - coordinate_dofs[0], 2)) + L.Call("std::pow", (x[1] - coordinate_dofs[1], 2)) + L.Call("std::pow", (x[2] - coordinate_dofs[2], 2))))/ detJ - 1.0])] else: error("Cannot compute interval with gdim: %d" % gdim) elif cellname == "triangle": if gdim == 2: C0 = L.Symbol("C0") C1 = L.Symbol("C1") J = L.Symbol("J") detJ = L.Symbol("detJ") code = [L.Comment("Compute constants"), L.VariableDecl("const double", C0, coordinate_dofs[2] + coordinate_dofs[4]), L.VariableDecl("const double", C1, coordinate_dofs[3] + coordinate_dofs[5]), L.Comment("Get coordinates and map to the reference (FIAT) element"), L.ArrayDecl("double", Y, 2, [(J[1]*(C1 - 2.0*x[1]) + J[3]*(2.0*x[0] - C0)) / detJ, (J[0]*(2.0*x[1] - C1) + J[2]*(C0 - 2.0*x[0])) / detJ])] elif gdim == 3: K = L.Symbol("K") code = [L.Comment("P_FFC = J^dag (p - b), P_FIAT = 2*P_FFC - (1, 1)"), L.ArrayDecl("double", Y, 2, [2*(K[0]*(x[0] - coordinate_dofs[0]) + K[1]*(x[1] - coordinate_dofs[1]) + K[2]*(x[2] - coordinate_dofs[2])) - 1.0, 2*(K[3]*(x[0] - coordinate_dofs[0]) + K[4]*(x[1] - coordinate_dofs[1]) + K[5]*(x[2] - coordinate_dofs[2])) - 1.0])] else: error("Cannot compute triangle with gdim: %d" % gdim) elif cellname == 'tetrahedron' and gdim == 3: C0 = L.Symbol("C0") C1 = L.Symbol("C1") C2 = L.Symbol("C2") J = L.Symbol("J") detJ = L.Symbol("detJ") d = L.Symbol("d") code = [L.Comment("Compute constants"), L.VariableDecl("const double", C0, coordinate_dofs[9] + coordinate_dofs[6] + coordinate_dofs[3] - coordinate_dofs[0]), L.VariableDecl("const double", C1, coordinate_dofs[10] + coordinate_dofs[7] + coordinate_dofs[4] - coordinate_dofs[1]), L.VariableDecl("const double", C2, coordinate_dofs[11] 
+ coordinate_dofs[8] + coordinate_dofs[5] - coordinate_dofs[2]), L.Comment("Compute subdeterminants"), L.ArrayDecl("const double", d, 9, [J[4]*J[8] - J[5]*J[7], J[5]*J[6] - J[3]*J[8], J[3]*J[7] - J[4]*J[6], J[2]*J[7] - J[1]*J[8], J[0]*J[8] - J[2]*J[6], J[1]*J[6] - J[0]*J[7], J[1]*J[5] - J[2]*J[4], J[2]*J[3] - J[0]*J[5], J[0]*J[4] - J[1]*J[3]]), L.Comment("Get coordinates and map to the reference (FIAT) element"), L.ArrayDecl("double", Y, 3, [(d[0]*(2.0*x[0] - C0) + d[3]*(2.0*x[1] - C1) + d[6]*(2.0*x[2] - C2)) / detJ, (d[1]*(2.0*x[0] - C0) + d[4]*(2.0*x[1] - C1) + d[7]*(2.0*x[2] - C2)) / detJ, (d[2]*(2.0*x[0] - C0) + d[5]*(2.0*x[1] - C1) + d[8]*(2.0*x[2] - C2)) / detJ])] else: error("Cannot compute %s with gdim: %d" % (cellname, gdim)) return code def _mapping_transform(L, data, dof_data, values, offset, width=1): code = [] tdim = data["topological_dimension"] gdim = data["geometric_dimension"] num_components = dof_data["num_components"] mapping = dof_data["mapping"] # Apply transformation if applicable. if mapping == "affine": return code # Get temporary values before mapping. tmp_ref = L.Symbol("tmp_ref") code += [L.ArrayDecl("const double", tmp_ref, num_components, [values[i*width + offset] for i in range(num_components)])] # Define symbols for Jacobian, inverse and determinant J = L.Symbol("J") J = L.FlattenedArray(J, dims=(gdim, tdim)) detJ = L.Symbol("detJ") K = L.Symbol("K") K = L.FlattenedArray(K, dims=(tdim, gdim)) if mapping == "contravariant piola": code += [L.Comment("Using contravariant Piola transform to map values back to the physical element")] assert num_components == tdim for i in range(gdim): inner = sum(J[i, j]*tmp_ref[j] for j in range(tdim)) code += [L.Assign(values[i*width + offset], inner/detJ)] elif mapping == "covariant piola": code += [L.Comment("Using covariant Piola transform to map values back to the physical element")] assert num_components == tdim for i in range(gdim): inner = sum(K[j, i]*tmp_ref[j] for j in range(tdim)) code += [L.Assign(values[i*width + offset], inner)] elif mapping == "double covariant piola": code += [L.Comment("Using double covariant Piola transform to map values back to the physical element")] assert num_components == tdim**2 for p in range(num_components): # unflatten the indices i = p // tdim l = p % tdim # noqa: E741 # g_il = K_ji G_jk K_kl inner = sum(K[j, i]*tmp_ref[j*tdim + k]*K[k, l] for j in range(tdim) for k in range(tdim)) code += [L.Assign(values[p*width + offset], inner)] elif mapping == "double contravariant piola": code += [L.Comment("Using double contravariant Piola transform to map values back to the physical element")] assert num_components == tdim**2 for p in range(num_components): # unflatten the indices i = p // tdim l = p % tdim # noqa: E741 inner = sum(J[i, j]*tmp_ref[j*tdim + k]*J[l, k] for j in range(tdim) for k in range(tdim)) code += [L.Assign(values[p*width + offset], inner/(detJ*detJ))] else: error("Unknown mapping: %s" % mapping) return code ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/templates.py000066400000000000000000000016001346026536000224620ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2015-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
#
# UFLACS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with UFLACS. If not, see .

import re

from ffc.backends.ufc import *


def extract_keywords(template):
    r = re.compile(r"%\(([a-zA-Z0-9_]*)\)")
    return set(r.findall(template))
ffc-2019.1.0.post0/ffc/uflacs/backends/ufc/utils.py000066400000000000000000000101551346026536000216310ustar00rootroot00000000000000
# -*- coding: utf-8 -*-
# Copyright (C) 2015-2017 Martin Sandve Alnæs
#
# This file is part of UFLACS.
#
# UFLACS is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# UFLACS is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with UFLACS. If not, see .

# TODO: Move these to uflacs.language utils?


def generate_return_new(L, classname, factory):
    if factory:
        return L.Return(L.Call("create_" + classname))
    else:
        return L.Return(L.New(classname))


def generate_return_new_switch(L, i, classnames, args=None, factory=False):
    # TODO: UFC functions of this type could be replaced with return vector>{objects}.
    if isinstance(i, str):
        i = L.Symbol(i)

    if factory:
        def create(classname):
            return L.Call("create_" + classname)
    else:
        def create(classname):
            return L.New(classname)

    default = L.Return(L.Null())
    if classnames:
        cases = []
        if args is None:
            args = list(range(len(classnames)))
        for j, classname in zip(args, classnames):
            if classname:
                cases.append((j, L.Return(create(classname))))
        return L.Switch(i, cases, default=default)
    else:
        return default


def generate_return_literal_switch(L, i, values, default, literal_type, typename=None):
    # TODO: UFC functions of this type could be replaced with return vector{values}.
if isinstance(i, str): i = L.Symbol(i) return_default = L.Return(literal_type(default)) if values and typename is not None: # Store values in static table and return from there V = L.Symbol("return_values") decl = L.ArrayDecl("static const %s" % typename, V, len(values), [literal_type(k) for k in values]) return L.StatementList([ decl, L.If(L.GE(i, len(values)), return_default), L.Return(V[i]) ]) elif values: # Need typename to create static array, fallback to switch cases = [(j, L.Return(literal_type(k))) for j, k in enumerate(values)] return L.Switch(i, cases, default=return_default) else: # No values, just return default return return_default def generate_return_sizet_switch(L, i, values, default): return generate_return_literal_switch(L, i, values, default, L.LiteralInt, "std::size_t") def generate_return_int_switch(L, i, values, default): return generate_return_literal_switch(L, i, values, default, L.LiteralInt, "int") def generate_return_bool_switch(L, i, values, default): return generate_return_literal_switch(L, i, values, default, L.LiteralBool, "bool") # TODO: Better error handling def generate_error(L, msg, emit_warning): if emit_warning: return L.VerbatimStatement('std::cerr << "*** FFC warning: " << "%s" << std::endl;' % (msg,)) else: return L.Throw('std::runtime_error', msg) """ // TODO: UFC functions that return constant arrays // could use something like this to reduce copying, // returning pointers to static data: template class array_view { public: array_view(std::size_t size, const T * data): size(size), data(data) const std::size_t size; const T * data; const T operator[](std::size_t i) const { return data[i]; } }; array_view form::original_coefficient_positions() const { static const int data = { 0, 1, 2, 5 }; return array_view{4, data}; } array_view integral::enabled_coefficients() const { static const bool data = { true, true, true, false, false, true }; return array_view{6, data}; } """ ffc-2019.1.0.post0/ffc/uflacs/build_uflacs_ir.py000066400000000000000000001205041346026536000212700ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . 
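# Illustrative sketch (not part of FFC): at runtime, the C++ emitted by
# generate_return_new_switch in utils.py above behaves like this plain-Python
# lookup -- pick the classname whose case label matches the index (labels
# default to 0..n-1 but may be explicit subdomain ids), construct that class,
# and fall back to a null result otherwise.  The classnames used in the usage
# example are hypothetical, not names produced by a real FFC run.
def _toy_return_new_switch(i, classnames, args=None):
    if args is None:
        args = list(range(len(classnames)))
    for j, classname in zip(args, classnames):
        if classname and i == j:
            return "new %s()" % classname   # stands in for `return new classname();`
    return None                             # stands in for `return nullptr;`

if __name__ == "__main__":
    # e.g. a form with cell integrals only on subdomains 0 and 4
    classnames = ["cell_integral_main_0", "cell_integral_main_4"]
    assert _toy_return_new_switch(4, classnames, args=[0, 4]) == "new cell_integral_main_4()"
    assert _toy_return_new_switch(2, classnames, args=[0, 4]) is None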
"""Main algorithm for building the uflacs intermediate representation.""" import numpy from collections import defaultdict, namedtuple from itertools import chain import itertools from ufl import product, as_ufl from ufl.log import error, warning, debug from ufl.checks import is_cellwise_constant from ufl.classes import CellCoordinate, FacetCoordinate, QuadratureWeight from ufl.measure import custom_integral_types, point_integral_types, facet_integral_types from ufl.algorithms.analysis import has_type from ffc.uflacs.analysis.balancing import balance_modifiers from ffc.uflacs.analysis.modified_terminals import is_modified_terminal, analyse_modified_terminal from ffc.uflacs.analysis.graph import build_graph from ffc.uflacs.analysis.graph_vertices import build_scalar_graph_vertices from ffc.uflacs.analysis.graph_rebuild import rebuild_with_scalar_subexpressions from ffc.uflacs.analysis.dependencies import compute_dependencies, mark_active, mark_image from ffc.uflacs.analysis.graph_ssa import compute_dependency_count, invert_dependencies #from ffc.uflacs.analysis.graph_ssa import default_cache_score_policy, compute_cache_scores, allocate_registers from ffc.uflacs.analysis.factorization import compute_argument_factorization from ffc.uflacs.elementtables import build_optimized_tables, piecewise_ttypes, uniform_ttypes, clamp_table_small_numbers # Some quick internal structs, massive improvement to # readability and maintainability over just tuples... ma_data_t = namedtuple( "ma_data_t", ["ma_index", "tabledata"] ) common_block_data_fields = [ "block_mode", # block mode name: "safe" | "full" | "preintegrated" | "premultiplied" "ttypes", # list of table types for each block rank "factor_index", # int: index of factor in vertex array "factor_is_piecewise", # bool: factor is found in piecewise vertex array instead of quadloop specific vertex array "unames", # list of unique FE table names for each block rank "restrictions", # restriction "+" | "-" | None for each block rank "transposed", # block is the transpose of another ] common_block_data_t = namedtuple( "common_block_data_t", common_block_data_fields ) def get_common_block_data(blockdata): return common_block_data_t(*blockdata[:len(common_block_data_fields)]) preintegrated_block_data_t = namedtuple( "preintegrated_block_data_t", common_block_data_fields + ["is_uniform", "name"] ) premultiplied_block_data_t = namedtuple( "premultiplied_block_data_t", common_block_data_fields + ["is_uniform", "name"] ) partial_block_data_t = namedtuple( "partial_block_data_t", common_block_data_fields + ["ma_data", "piecewise_ma_index"] ) full_block_data_t = namedtuple( "full_block_data_t", common_block_data_fields + ["ma_data"] ) def multiply_block_interior_facets(point_index, unames, ttypes, unique_tables, unique_table_num_dofs): rank = len(unames) tables = [unique_tables.get(name) for name in unames] num_dofs = tuple(unique_table_num_dofs[name] for name in unames) num_entities = max([1] + [tbl.shape[0] for tbl in tables if tbl is not None]) ptable = numpy.zeros((num_entities,)*rank + num_dofs) for facets in itertools.product(*[range(num_entities)]*rank): vectors = [] for i, tbl in enumerate(tables): if tbl is None: assert ttypes[i] == "ones" vectors.append(numpy.ones((num_dofs[i],))) else: # Some tables are compacted along entities or points e = 0 if tbl.shape[0] == 1 else facets[i] q = 0 if tbl.shape[1] == 1 else point_index vectors.append(tbl[e, q, :]) if rank > 1: assert rank == 2 ptable[facets[0], facets[1], ...] 
= numpy.outer(*vectors) elif rank == 1: ptable[facets[0], :] = vectors[0] else: error("Nothing to multiply!") return ptable def multiply_block(point_index, unames, ttypes, unique_tables, unique_table_num_dofs): rank = len(unames) tables = [unique_tables.get(name) for name in unames] num_dofs = tuple(unique_table_num_dofs[name] for name in unames) num_entities = max([1] + [tbl.shape[0] for tbl in tables if tbl is not None]) ptable = numpy.zeros((num_entities,) + num_dofs) for entity in range(num_entities): vectors = [] for i, tbl in enumerate(tables): if tbl is None: assert ttypes[i] == "ones" vectors.append(numpy.ones((num_dofs[i],))) else: # Some tables are compacted along entities or points e = 0 if tbl.shape[0] == 1 else entity q = 0 if tbl.shape[1] == 1 else point_index vectors.append(tbl[e, q, :]) if rank > 1: ptable[entity, ...] = numpy.outer(*vectors) elif rank == 1: ptable[entity, :] = vectors[0] else: error("Nothing to multiply!") return ptable def integrate_block(weights, unames, ttypes, unique_tables, unique_table_num_dofs): rank = len(unames) tables = [unique_tables.get(name) for name in unames] num_dofs = tuple(unique_table_num_dofs[name] for name in unames) num_entities = max([1] + [tbl.shape[0] for tbl in tables if tbl is not None]) ptable = numpy.zeros((num_entities,) + num_dofs) for iq, w in enumerate(weights): ptable[...] += w * multiply_block(iq, unames, ttypes, unique_tables, unique_table_num_dofs) return ptable def integrate_block_interior_facets(weights, unames, ttypes, unique_tables, unique_table_num_dofs): rank = len(unames) tables = [unique_tables.get(name) for name in unames] num_dofs = tuple(unique_table_num_dofs[name] for name in unames) num_entities = max([1] + [tbl.shape[0] for tbl in tables if tbl is not None]) ptable = numpy.zeros((num_entities,)*rank + num_dofs) for iq, w in enumerate(weights): mtable = multiply_block_interior_facets(iq, unames, ttypes, unique_tables, unique_table_num_dofs) ptable[...] += w * mtable return ptable def empty_expr_ir(): expr_ir = {} expr_ir["V"] = [] expr_ir["V_active"] = [] expr_ir["V_targets"] = [] expr_ir["V_mts"] = [] expr_ir["mt_tabledata"] = {} expr_ir["modified_arguments"] = [] expr_ir["preintegrated_blocks"] = {} expr_ir["premultiplied_blocks"] = {} expr_ir["preintegrated_contributions"] = defaultdict(list) expr_ir["block_contributions"] = defaultdict(list) return expr_ir def uflacs_default_parameters(optimize): """Default parameters for tuning of uflacs code generation. These are considered experimental and may change without deprecation mechanism at any time. 
""" p = { # Relative precision to use when comparing finite element # table values for table reuse "table_rtol": 1e-6, # Absolute precision to use when comparing finite element # table values for table reuse and dropping of table zeros "table_atol": 1e-9, # Point chunk size for custom integrals "chunk_size": 8, # Optimization parameters used in representation building # TODO: The names of these parameters can be a bit misleading "enable_preintegration": False, "enable_premultiplication": False, "enable_sum_factorization": False, "enable_block_transpose_reuse": False, "enable_table_zero_compression": False, # Code generation parameters "vectorize": False, "alignas": 0, "padlen": 1, "use_symbol_array": True, "tensor_init_mode": "upfront", # interleaved | direct | upfront } if optimize: # Override defaults if optimization is turned on p.update({ # Optimization parameters used in representation building # TODO: The names of these parameters can be a bit misleading "enable_preintegration": True, "enable_premultiplication": False, "enable_sum_factorization": True, "enable_block_transpose_reuse": True, "enable_table_zero_compression": True, # Code generation parameters "vectorize": False, "alignas": 32, "padlen": 1, "use_symbol_array": True, "tensor_init_mode": "interleaved", # interleaved | direct | upfront }) return p def parse_uflacs_optimization_parameters(parameters, integral_type): """Following model from quadrature representation, extracting uflacs specific parameters from the global parameters dict.""" # Get default parameters p = uflacs_default_parameters(parameters["optimize"]) # Override with uflacs specific parameters if # present in given global parameters dict for key in p: if key in parameters: value = parameters[key] # Casting done here because main doesn't know about these parameters if isinstance(p[key], int): value = int(value) elif isinstance(p[key], float): value = float(value) p[key] = value # Conditionally disable some optimizations based on integral type, # i.e. 
these options are not valid for certain integral types skip_preintegrated = point_integral_types + custom_integral_types if integral_type in skip_preintegrated: p["enable_preintegration"] = False skip_premultiplied = point_integral_types + custom_integral_types if integral_type in skip_premultiplied: p["enable_premultiplication"] = False return p def build_uflacs_ir(cell, integral_type, entitytype, integrands, tensor_shape, coefficient_numbering, quadrature_rules, parameters): # The intermediate representation dict we're building and returning here ir = {} # Extract uflacs specific optimization and code generation parameters p = parse_uflacs_optimization_parameters(parameters, integral_type) # Pass on parameters for consumption in code generation ir["params"] = p # { ufl coefficient: count } ir["coefficient_numbering"] = coefficient_numbering # Shared unique tables for all quadrature loops ir["unique_tables"] = {} ir["unique_table_types"] = {} # Shared piecewise expr_ir for all quadrature loops ir["piecewise_ir"] = empty_expr_ir() # { num_points: expr_ir for one integrand } ir["varying_irs"] = {} # Temporary data structures to build shared piecewise data pe2i = {} piecewise_modified_argument_indices = {} # Whether we expect the quadrature weight to be applied or not # (in some cases it's just set to 1 in ufl integral scaling) tdim = cell.topological_dimension() expect_weight = ( integral_type not in ("expression",) + point_integral_types and (entitytype == "cell" or (entitytype == "facet" and tdim > 1) or (integral_type in custom_integral_types) ) ) if integral_type == "expression": # TODO: Figure out how to get non-integrand expressions in here, this is just a draft: # Analyse all expressions in one list assert isinstance(integrands, (tuple, list)) all_num_points = [None] cases = [(None, integrands)] else: # Analyse each num_points/integrand separately assert isinstance(integrands, dict) all_num_points = sorted(integrands.keys()) cases = [(num_points, [integrands[num_points]]) for num_points in all_num_points] ir["all_num_points"] = all_num_points for num_points, expressions in cases: # Rebalance order of nested terminal modifiers expressions = [balance_modifiers(expr) for expr in expressions] # Build initial scalar list-based graph representation V, V_deps, V_targets = build_scalar_graph(expressions) # Build terminal_data from V here before factorization. # Then we can use it to derive table properties for all modified terminals, # and then use that to rebuild the scalar graph more efficiently before # argument factorization. We can build terminal_data again after factorization # if that's necessary. 
initial_terminal_indices = [i for i, v in enumerate(V) if is_modified_terminal(v)] initial_terminal_data = [analyse_modified_terminal(V[i]) for i in initial_terminal_indices] unique_tables, unique_table_types, unique_table_num_dofs, mt_unique_table_reference = \ build_optimized_tables(num_points, quadrature_rules, cell, integral_type, entitytype, initial_terminal_data, ir["unique_tables"], p["enable_table_zero_compression"], rtol=p["table_rtol"], atol=p["table_atol"]) # Replace some scalar modified terminals before reconstructing expressions # (could possibly use replace() on target expressions instead) z = as_ufl(0.0) one = as_ufl(1.0) for i, mt in zip(initial_terminal_indices, initial_terminal_data): if isinstance(mt.terminal, QuadratureWeight): # Replace quadrature weight with 1.0, will be added back later V[i] = one else: # Set modified terminals with zero tables to zero tr = mt_unique_table_reference.get(mt) if tr is not None and tr.ttype == "zeros": V[i] = z # Propagate expression changes using dependency list for i in range(len(V)): deps = [V[j] for j in V_deps[i]] if deps: V[i] = V[i]._ufl_expr_reconstruct_(*deps) # Rebuild scalar target expressions and graph # (this may be overkill and possible to optimize # away if it turns out to be costly) expressions = [V[i] for i in V_targets] # Rebuild scalar list-based graph representation SV, SV_deps, SV_targets = build_scalar_graph(expressions) assert all(i < len(SV) for i in SV_targets) # Compute factorization of arguments (argument_factorizations, modified_arguments, FV, FV_deps, FV_targets) = \ compute_argument_factorization(SV, SV_deps, SV_targets, len(tensor_shape)) assert len(SV_targets) == len(argument_factorizations) # TODO: Still expecting one target variable in code generation assert len(argument_factorizations) == 1 argument_factorization, = argument_factorizations # Store modified arguments in analysed form for i in range(len(modified_arguments)): modified_arguments[i] = analyse_modified_terminal(modified_arguments[i]) # Build set of modified_terminal indices into factorized_vertices modified_terminal_indices = [i for i, v in enumerate(FV) if is_modified_terminal(v)] # Build set of modified terminal ufl expressions modified_terminals = [analyse_modified_terminal(FV[i]) for i in modified_terminal_indices] # Make it easy to get mt object from FV index FV_mts = [None]*len(FV) for i, mt in zip(modified_terminal_indices, modified_terminals): FV_mts[i] = mt # Mark active modified arguments #active_modified_arguments = numpy.zeros(len(modified_arguments), dtype=int) #for ma_indices in argument_factorization: # for j in ma_indices: # active_modified_arguments[j] = 1 # Dependency analysis inv_FV_deps, FV_active, FV_piecewise, FV_varying = \ analyse_dependencies(FV, FV_deps, FV_targets, modified_terminal_indices, modified_terminals, mt_unique_table_reference) # Extend piecewise V with unique new FV_piecewise vertices pir = ir["piecewise_ir"] for i, v in enumerate(FV): if FV_piecewise[i]: j = pe2i.get(v) if j is None: j = len(pe2i) pe2i[v] = j pir["V"].append(v) pir["V_active"].append(1) mt = FV_mts[i] if mt is not None: pir["mt_tabledata"][mt] = mt_unique_table_reference.get(mt) pir["V_mts"].append(mt) # Extend piecewise modified_arguments list with unique new items for mt in modified_arguments: ma = piecewise_modified_argument_indices.get(mt) if ma is None: ma = len(pir["modified_arguments"]) pir["modified_arguments"].append(mt) piecewise_modified_argument_indices[mt] = ma # Loop over factorization terms block_contributions = 
defaultdict(list) for ma_indices, fi in sorted(argument_factorization.items()): # Get a bunch of information about this term rank = len(ma_indices) trs = tuple(mt_unique_table_reference[modified_arguments[ai]] for ai in ma_indices) unames = tuple(tr.name for tr in trs) ttypes = tuple(tr.ttype for tr in trs) assert not any(tt == "zeros" for tt in ttypes) blockmap = tuple(tr.dofmap for tr in trs) block_is_uniform = all(tr.is_uniform for tr in trs) # Collect relevant restrictions to identify blocks # correctly in interior facet integrals block_restrictions = [] for i, ma in enumerate(ma_indices): if trs[i].is_uniform: r = None else: r = modified_arguments[ma].restriction block_restrictions.append(r) block_restrictions = tuple(block_restrictions) # Store piecewise status for fi and translate # index to piecewise scope if relevant factor_is_piecewise = FV_piecewise[fi] if factor_is_piecewise: factor_index = pe2i[FV[fi]] else: factor_index = fi # TODO: Add separate block modes for quadrature # Both arguments in quadrature elements """ for iq fw = f*w #for i # for j # B[i,j] = fw*U[i]*V[j] = 0 if i != iq or j != iq BQ[iq] = B[iq,iq] = fw for (iq) A[iq+offset0, iq+offset1] = BQ[iq] """ # One argument in quadrature element """ for iq fw[iq] = f*w #for i # for j # B[i,j] = fw*UQ[i]*V[j] = 0 if i != iq for j BQ[iq,j] = fw[iq]*V[iq,j] for (iq) for (j) A[iq+offset, j+offset] = BQ[iq,j] """ # Decide how to handle code generation for this block if p["enable_preintegration"] and (factor_is_piecewise and rank > 0 and "quadrature" not in ttypes): # - Piecewise factor is an absolute prerequisite # - Could work for rank 0 as well but currently doesn't # - Haven't considered how quadrature elements work out block_mode = "preintegrated" elif p["enable_premultiplication"] and (rank > 0 and all(tt in piecewise_ttypes for tt in ttypes)): # Integrate functional in quadloop, scale block after quadloop block_mode = "premultiplied" elif p["enable_sum_factorization"]: if (rank == 2 and any(tt in piecewise_ttypes for tt in ttypes)): # Partial computation in quadloop of f*u[i], # compute (f*u[i])*v[i] outside quadloop, # (or with u,v swapped) block_mode = "partial" else: # Full runtime integration of f*u[i]*v[j], # can still do partial computation in quadloop of f*u[i] # but must compute (f*u[i])*v[i] as well inside quadloop. # (or with u,v swapped) block_mode = "full" else: # Use full runtime integration with nothing fancy going on block_mode = "safe" # Carry out decision if block_mode == "preintegrated": # Add to contributions: # P = sum_q weight*u*v; preintegrated here # B[...] 
= f * P[...]; generated after quadloop # A[blockmap] += B[...]; generated after quadloop cache = ir["piecewise_ir"]["preintegrated_blocks"] block_is_transposed = False pname = cache.get(unames) # Reuse transpose to save memory if p["enable_block_transpose_reuse"] and pname is None and len(unames) == 2: pname = cache.get((unames[1], unames[0])) if pname is not None: # Cache hit on transpose block_is_transposed = True if pname is None: # Cache miss, precompute block weights = quadrature_rules[num_points][1] if integral_type == "interior_facet": ptable = integrate_block_interior_facets(weights, unames, ttypes, unique_tables, unique_table_num_dofs) else: ptable = integrate_block(weights, unames, ttypes, unique_tables, unique_table_num_dofs) ptable = clamp_table_small_numbers(ptable, rtol=p["table_rtol"], atol=p["table_atol"]) pname = "PI%d" % (len(cache,)) cache[unames] = pname unique_tables[pname] = ptable unique_table_types[pname] = "preintegrated" assert factor_is_piecewise block_unames = (pname,) blockdata = preintegrated_block_data_t(block_mode, ttypes, factor_index, factor_is_piecewise, block_unames, block_restrictions, block_is_transposed, block_is_uniform, pname) block_is_piecewise = True elif block_mode == "premultiplied": # Add to contributions: # P = u*v; computed here # FI = sum_q weight * f; generated inside quadloop # B[...] = FI * P[...]; generated after quadloop # A[blockmap] += B[...]; generated after quadloop cache = ir["piecewise_ir"]["premultiplied_blocks"] block_is_transposed = False pname = cache.get(unames) # Reuse transpose to save memory if p["enable_block_transpose_reuse"] and pname is None and len(unames) == 2: pname = cache.get((unames[1], unames[0])) if pname is not None: # Cache hit on transpose block_is_transposed = True if pname is None: # Cache miss, precompute block if integral_type == "interior_facet": ptable = multiply_block_interior_facets(0, unames, ttypes, unique_tables, unique_table_num_dofs) else: ptable = multiply_block(0, unames, ttypes, unique_tables, unique_table_num_dofs) pname = "PM%d" % (len(cache,)) cache[unames] = pname unique_tables[pname] = ptable unique_table_types[pname] = "premultiplied" block_unames = (pname,) blockdata = premultiplied_block_data_t(block_mode, ttypes, factor_index, factor_is_piecewise, block_unames, block_restrictions, block_is_transposed, block_is_uniform, pname) block_is_piecewise = False elif block_mode == "scaled": # TODO: Add mode, block is piecewise but choose not to be premultiplied # Add to contributions: # FI = sum_q weight * f; generated inside quadloop # B[...] 
= FI * u * v; generated after quadloop # A[blockmap] += B[...]; generated after quadloop raise NotImplementedError("scaled block mode not implemented.") # (probably need mostly the same data as premultiplied, except no P table name or values) block_is_piecewise = False elif block_mode in ("partial", "full", "safe"): # Translate indices to piecewise context if necessary block_is_piecewise = factor_is_piecewise and not expect_weight ma_data = [] for i, ma in enumerate(ma_indices): if trs[i].is_piecewise: ma_index = piecewise_modified_argument_indices[modified_arguments[ma]] else: block_is_piecewise = False ma_index = ma ma_data.append(ma_data_t(ma_index, trs[i])) block_is_transposed = False # FIXME: Handle transposes for these block types if block_mode == "partial": # Add to contributions: # P[i] = sum_q weight * f * u[i]; generated inside quadloop # B[i,j] = P[i] * v[j]; generated after quadloop (where v is the piecewise ma) # A[blockmap] += B[...]; generated after quadloop # Find first piecewise index TODO: Is last better? just reverse range here for i in range(rank): if trs[i].is_piecewise: piecewise_ma_index = i break assert rank == 2 not_piecewise_ma_index = 1 - piecewise_ma_index block_unames = (unames[not_piecewise_ma_index],) blockdata = partial_block_data_t(block_mode, ttypes, factor_index, factor_is_piecewise, block_unames, block_restrictions, block_is_transposed, tuple(ma_data), piecewise_ma_index) elif block_mode in ("full", "safe"): # Add to contributions: # B[i] = sum_q weight * f * u[i] * v[j]; generated inside quadloop # A[blockmap] += B[i]; generated after quadloop block_unames = unames blockdata = full_block_data_t(block_mode, ttypes, factor_index, factor_is_piecewise, block_unames, block_restrictions, block_is_transposed, tuple(ma_data)) else: error("Invalid block_mode %s" % (block_mode,)) if block_is_piecewise: # Insert in piecewise expr_ir ir["piecewise_ir"]["block_contributions"][blockmap].append(blockdata) else: # Insert in varying expr_ir for this quadrature loop block_contributions[blockmap].append(blockdata) # Figure out which table names are referenced in unstructured partition active_table_names = set() for i, mt in zip(modified_terminal_indices, modified_terminals): tr = mt_unique_table_reference.get(mt) if tr is not None and FV_active[i]: active_table_names.add(tr.name) # Figure out which table names are referenced in blocks for blockmap, contributions in chain(block_contributions.items(), ir["piecewise_ir"]["block_contributions"].items()): for blockdata in contributions: if blockdata.block_mode in ("preintegrated", "premultiplied"): active_table_names.add(blockdata.name) elif blockdata.block_mode in ("partial", "full", "safe"): for mad in blockdata.ma_data: active_table_names.add(mad.tabledata.name) # Record all table types before dropping tables ir["unique_table_types"].update(unique_table_types) # Drop tables not referenced from modified terminals # and tables of zeros and ones unused_ttypes = ("zeros", "ones", "quadrature") keep_table_names = set() for name in active_table_names: ttype = ir["unique_table_types"][name] if ttype not in unused_ttypes: if name in unique_tables: keep_table_names.add(name) unique_tables = { name: unique_tables[name] for name in keep_table_names } # Add to global set of all tables for name, table in unique_tables.items(): tbl = ir["unique_tables"].get(name) if tbl is not None and not numpy.allclose(tbl, table, rtol=p["table_rtol"], atol=p["table_atol"]): error("Table values mismatch with same name.") 
ir["unique_tables"].update(unique_tables) # Analyse active terminals to check what we'll need to generate code for active_mts = [] for i, mt in zip(modified_terminal_indices, modified_terminals): if FV_active[i]: active_mts.append(mt) # Figure out if we need to access CellCoordinate to # avoid generating quadrature point table otherwise if integral_type == "cell": need_points = any(isinstance(mt.terminal, CellCoordinate) for mt in active_mts) elif integral_type in facet_integral_types: need_points = any(isinstance(mt.terminal, FacetCoordinate) for mt in active_mts) elif integral_type in custom_integral_types: need_points = True # TODO: Always? else: need_points = False # Figure out if we need to access QuadratureWeight to # avoid generating quadrature point table otherwise #need_weights = any(isinstance(mt.terminal, QuadratureWeight) # for mt in active_mts) # Count blocks of each mode block_modes = defaultdict(int) for blockmap, contributions in block_contributions.items(): for blockdata in contributions: block_modes[blockdata.block_mode] += 1 # Debug output summary = "\n".join(" %d\t%s" % (count, mode) for mode, count in sorted(block_modes.items())) debug("Blocks of each mode: \n" + summary) # If there are any blocks other than preintegrated we need weights if expect_weight and any(mode != "preintegrated" for mode in block_modes): need_weights = True elif integral_type in custom_integral_types: need_weights = True # TODO: Always? else: need_weights = False # Build IR dict for the given expressions expr_ir = {} # (array) FV-index -> UFL subexpression expr_ir["V"] = FV # (array) V indices for each input expression component in flattened order expr_ir["V_targets"] = FV_targets ### Result of factorization: # (array) MA-index -> UFL expression of modified arguments expr_ir["modified_arguments"] = modified_arguments # (dict) tuple(MA-indices) -> FV-index of monomial factor #expr_ir["argument_factorization"] = argument_factorization expr_ir["block_contributions"] = block_contributions ### Modified terminals # (array) list of FV-indices to modified terminals #expr_ir["modified_terminal_indices"] = modified_terminal_indices # Dependency structure of graph: # (CRSArray) FV-index -> direct dependency FV-index list #expr_ir["dependencies"] = FV_deps # (CRSArray) FV-index -> direct dependee FV-index list #expr_ir["inverse_dependencies"] = inv_FV_deps # Metadata about each vertex #expr_ir["active"] = FV_active # (array) FV-index -> bool #expr_ir["V_piecewise"] = FV_piecewise # (array) FV-index -> bool expr_ir["V_varying"] = FV_varying # (array) FV-index -> bool expr_ir["V_mts"] = FV_mts # Store mapping from modified terminal object to # table data, this is used in integralgenerator expr_ir["mt_tabledata"] = mt_unique_table_reference # To emit quadrature rules only if needed expr_ir["need_points"] = need_points expr_ir["need_weights"] = need_weights # Store final ir for this num_points ir["varying_irs"][num_points] = expr_ir return ir def build_scalar_graph(expressions): """Build list representation of expression graph covering the given expressions. TODO: Renaming, refactoring and cleanup of the graph building algorithms used in here """ # Build the initial coarse computational graph of the expression G = build_graph(expressions) assert len(expressions) == 1, "FIXME: Multiple expressions in graph building needs more work from this point on." 
# Build more fine grained computational graph of scalar subexpressions # TODO: Make it so that # expressions[k] <-> NV[nvs[k][:]], # len(nvs[k]) == value_size(expressions[k]) scalar_expressions = rebuild_with_scalar_subexpressions(G) # Sanity check on number of scalar symbols/components assert len(scalar_expressions) == sum(product(expr.ufl_shape) for expr in expressions) # Build new list representation of graph where all # vertices of V represent single scalar operations e2i, V, V_targets = build_scalar_graph_vertices(scalar_expressions) # Compute sparse dependency matrix V_deps = compute_dependencies(e2i, V) return V, V_deps, V_targets def analyse_dependencies(V, V_deps, V_targets, modified_terminal_indices, modified_terminals, mt_unique_table_reference): # Count the number of dependencies every subexpr has V_depcount = compute_dependency_count(V_deps) # Build the 'inverse' of the sparse dependency matrix inv_deps = invert_dependencies(V_deps, V_depcount) # Mark subexpressions of V that are actually needed for final result active, num_active = mark_active(V_deps, V_targets) # Build piecewise/varying markers for factorized_vertices varying_ttypes = ("varying", "uniform", "quadrature") varying_indices = [] for i, mt in zip(modified_terminal_indices, modified_terminals): tr = mt_unique_table_reference.get(mt) if tr is not None: ttype = tr.ttype # Check if table computations have revealed values varying over points # Note: uniform means entity-wise uniform, varying over points if ttype in varying_ttypes: varying_indices.append(i) else: if ttype not in ("fixed", "piecewise", "ones", "zeros"): error("Invalid ttype %s" % (ttype,)) elif not is_cellwise_constant(V[i]): # Keeping this check to be on the safe side, # not sure which cases this will cover (if any) varying_indices.append(i) # Mark every subexpression that is computed # from the spatially dependent terminals varying, num_varying = mark_image(inv_deps, varying_indices) # The rest of the subexpressions are piecewise constant (1-1=0, 1-0=1) piecewise = 1 - varying # Unmark non-active subexpressions varying *= active piecewise *= active # TODO: Skip literals in both varying and piecewise # nonliteral = ... # varying *= nonliteral # piecewise *= nonliteral return inv_deps, active, piecewise, varying # TODO: Consider comments below and do it or delete them. """ Old comments: Work for later:: - Apply some suitable renumbering of vertices and corresponding arrays prior to returning - Allocate separate registers for each partition (but e.g. argument[iq][i0] may need to be accessible in other loops) - Improve register allocation algorithm - Take a list of expressions as input to compile several expressions in one joined graph (e.g. to compile a,L,M together for nonlinear problems) """ """ # Old comments: # TODO: Inspection of varying shows that factorization is # needed for effective loop invariant code motion w.r.t. quadrature loop as well. # Postphoning that until everything is working fine again. # Core ingredients for such factorization would be: # - Flatten products of products somehow # - Sorting flattened product factors by loop dependency then by canonical ordering # Or to keep binary products: # - Rebalancing product trees ((a*c)*(b*d) -> (a*b)*(c*d)) to make piecewise quantities 'float' to the top of the list # rank = max(len(ma_indices) for ma_indices in argument_factorization) # for i,a in enumerate(modified_arguments): # iarg = a.number() # ipart = a.part() # TODO: More structured MA organization? 
#modified_arguments[rank][block][entry] -> UFL expression of modified argument #dofranges[rank][block] -> (begin, end) # or #modified_arguments[rank][entry] -> UFL expression of modified argument #dofrange[rank][entry] -> (begin, end) #argument_factorization: (dict) tuple(MA-indices (only relevant ones!)) -> V-index of monomial factor # becomes #argument_factorization: (dict) tuple(entry for each(!) rank) -> V-index of monomial factor ## doesn't cover intermediate f*u in f*u*v! """ """ def old_code_useful_for_optimization(): # Use heuristics to mark the usefulness of storing every subexpr in a variable scores = compute_cache_scores(V, active, dependencies, inverse_dependencies, partitions, # TODO: Rewrite in terms of something else, this doesn't exist anymore cache_score_policy=default_cache_score_policy) # Allocate variables to store subexpressions in allocations = allocate_registers(active, partitions, target_variables, scores, int(parameters["max_registers"]), int(parameters["score_threshold"])) target_registers = [allocations[r] for r in target_variables] num_registers = sum(1 if x >= 0 else 0 for x in allocations) # TODO: If we renumber we can allocate registers separately for each partition, which is probably a good idea. expr_oir = {} expr_oir["num_registers"] = num_registers expr_oir["partitions"] = partitions expr_oir["allocations"] = allocations expr_oir["target_registers"] = target_registers return expr_oir """ ffc-2019.1.0.post0/ffc/uflacs/elementtables.py000066400000000000000000000604471346026536000207770ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . 
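# Illustrative sketch (not part of FFC): the "preintegrated" block mode in
# build_uflacs_ir.py above replaces the runtime quadrature loop for a block
# f * u[i] * v[j] (with piecewise constant f) by a table
#     P[i, j] = sum_q w_q * U[q, i] * V[q, j]
# computed at compile time, so the generated tabulate_tensor only scales P by f.
# This toy version ignores the entity axis carried by the real tables, and the
# numbers below are made up rather than real FIAT tabulations.
import numpy

def _toy_preintegrate(weights, U, V):
    # weights: (nq,), U: (nq, m), V: (nq, n)  ->  P: (m, n)
    P = numpy.zeros((U.shape[1], V.shape[1]))
    for q, w in enumerate(weights):
        P += w * numpy.outer(U[q, :], V[q, :])
    return P

if __name__ == "__main__":
    w = numpy.array([0.5, 0.5])
    U = numpy.array([[1.0, 0.0], [0.0, 1.0]])
    V = numpy.array([[0.5, 0.5], [0.25, 0.75]])
    P = _toy_preintegrate(w, U, V)
    # Equivalent single expression for the same contraction:
    assert numpy.allclose(P, numpy.einsum("q,qi,qj->ij", w, U, V))
    print(P)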
"""Tools for precomputed tables of terminal values.""" from collections import namedtuple import numpy from ufl.cell import num_cell_entities from ufl.utils.sequences import product from ufl.utils.derivativetuples import derivative_listing_to_counts from ufl.permutation import build_component_numbering from ufl.classes import FormArgument, GeometricQuantity, SpatialCoordinate, Jacobian from ufl.algorithms.analysis import unique_tuple from ufl.measure import custom_integral_types from ffc.log import error from ffc.fiatinterface import create_element from ffc.representationutils import integral_type_to_entity_dim, map_integral_points from ffc.representationutils import create_quadrature_points_and_weights from ffc.uflacs.backends.ffc.common import ufc_restriction_offset # Using same defaults as numpy.allclose default_rtol = 1e-5 default_atol = 1e-8 table_origin_t = namedtuple("table_origin", ["element", "avg", "derivatives", "flat_component", "dofrange", "dofmap"]) piecewise_ttypes = ("piecewise", "fixed", "ones", "zeros") uniform_ttypes = ("uniform", "fixed", "ones", "zeros") valid_ttypes = set(("quadrature",)) | set(piecewise_ttypes) | set(uniform_ttypes) unique_table_reference_t = namedtuple("unique_table_reference", ["name", "values", "dofrange", "dofmap", "original_dim", "ttype", "is_piecewise", "is_uniform"]) def equal_tables(a, b, rtol=default_rtol, atol=default_atol): a = numpy.asarray(a) b = numpy.asarray(b) if a.shape != b.shape: return False else: return numpy.allclose(a, b, rtol=rtol, atol=atol) def clamp_table_small_numbers(table, rtol=default_rtol, atol=default_atol, numbers=(-1.0, -0.5, 0.0, 0.5, 1.0)): "Clamp almost 0,1,-1 values to integers. Returns new table." # Get shape of table and number of columns, defined as the last axis table = numpy.asarray(table) for n in numbers: table[numpy.where(numpy.isclose(table, n, rtol=rtol, atol=atol))] = n return table def strip_table_zeros(table, compress_zeros, rtol=default_rtol, atol=default_atol): "Strip zero columns from table. Returns column range (begin, end) and the new compact table." # Get shape of table and number of columns, defined as the last axis table = numpy.asarray(table) sh = table.shape # Find nonzero columns z = numpy.zeros(sh[:-1]) # Correctly shaped zero table dofmap = tuple(i for i in range(sh[-1]) if not numpy.allclose(z, table[..., i], rtol=rtol, atol=atol)) if dofmap: # Find first nonzero column begin = dofmap[0] # Find (one beyond) last nonzero column end = dofmap[-1] + 1 else: begin = 0 end = 0 # If compression is not wanted, pretend whole range is nonzero if not compress_zeros: dofmap = tuple(range(begin, end)) # Make subtable by dropping zero columns stripped_table = table[..., dofmap] dofrange = (begin, end) return dofrange, dofmap, stripped_table def build_unique_tables(tables, rtol=default_rtol, atol=default_atol): """Given a list or dict of tables, return a list of unique tables and a dict of unique table indices for each input table key.""" unique = [] mapping = {} if isinstance(tables, list): keys = list(range(len(tables))) elif isinstance(tables, dict): keys = sorted(tables.keys()) for k in keys: t = tables[k] found = -1 for i, u in enumerate(unique): if equal_tables(u, t, rtol=rtol, atol=atol): found = i break if found == -1: i = len(unique) unique.append(t) mapping[k] = i return unique, mapping def get_ffc_table_values(points, cell, integral_type, ufl_element, avg, entitytype, derivative_counts, flat_component): """Extract values from ffc element table. 
Returns a 3D numpy array with axes (entity number, quadrature point number, dof number) """ deriv_order = sum(derivative_counts) if integral_type in custom_integral_types: # Use quadrature points on cell for analysis in custom integral types integral_type = "cell" assert not avg if avg in ("cell", "facet"): # Redefine points to compute average tables # Make sure this is not called with points, that doesn't make sense #assert points is None # Not expecting derivatives of averages assert not any(derivative_counts) assert deriv_order == 0 # Doesn't matter if it's exterior or interior facet integral, # just need a valid integral type to create quadrature rule if avg == "cell": integral_type = "cell" elif avg == "facet": integral_type = "exterior_facet" # Make quadrature rule and get points and weights points, weights = create_quadrature_points_and_weights( integral_type, cell, ufl_element.degree(), "default") # Tabulate table of basis functions and derivatives in points for each entity fiat_element = create_element(ufl_element) tdim = cell.topological_dimension() entity_dim = integral_type_to_entity_dim(integral_type, tdim) num_entities = num_cell_entities[cell.cellname()][entity_dim] entity_tables = [] for entity in range(num_entities): entity_points = map_integral_points(points, integral_type, cell, entity) tbl = fiat_element.tabulate(deriv_order, entity_points)[derivative_counts] entity_tables.append(tbl) # Extract arrays for the right scalar component component_tables = [] sh = ufl_element.value_shape() if sh == (): # Scalar valued element for entity, entity_table in enumerate(entity_tables): component_tables.append(entity_table) elif len(sh) == 2 and ufl_element.num_sub_elements() == 0: # 2-tensor-valued elements, not a tensor product # mapping flat_component back to tensor component (_, f2t) = build_component_numbering(sh, ufl_element.symmetry()) t_comp = f2t[flat_component] for entity, entity_table in enumerate(entity_tables): tbl = entity_table[:, t_comp[0], t_comp[1], :] component_tables.append(tbl) else: # Vector-valued or mixed element for entity, entity_table in enumerate(entity_tables): tbl = entity_table[:, flat_component, :] component_tables.append(tbl) if avg in ("cell", "facet"): # Compute numeric integral of the each component table wsum = sum(weights) for entity, tbl in enumerate(component_tables): num_dofs = tbl.shape[0] tbl = numpy.dot(tbl, weights) / wsum tbl = numpy.reshape(tbl, (num_dofs, 1)) component_tables[entity] = tbl # Loop over entities and fill table blockwise (each block = points x dofs) # Reorder axes as (points, dofs) instead of (dofs, points) assert len(component_tables) == num_entities num_dofs, num_points = component_tables[0].shape shape = (num_entities, num_points, num_dofs) res = numpy.zeros(shape) for entity in range(num_entities): res[entity, :, :] = numpy.transpose(component_tables[entity]) return res def generate_psi_table_name(num_points, element_counter, averaged, entitytype, derivative_counts, flat_component): """Generate a name for the psi table of the form: FE#_C#_D###[_AC|_AF|][_F|V][_Q#], where '#' will be an integer value. FE - is a simple counter to distinguish the various bases, it will be assigned in an arbitrary fashion. C - is the component number if any (this does not yet take into account tensor valued functions) D - is the number of derivatives in each spatial direction if any. If the element is defined in 3D, then D012 means d^3(*)/dydz^2. 
AC - marks that the element values are averaged over the cell AF - marks that the element values are averaged over the facet F - marks that the first array dimension enumerates facets on the cell V - marks that the first array dimension enumerates vertices on the cell Q - number of quadrature points, to distinguish between tables in a mixed quadrature degree setting """ name = "FE%d" % element_counter if flat_component is not None: name += "_C%d" % flat_component if any(derivative_counts): name += "_D" + "".join(str(d) for d in derivative_counts) name += { None: "", "cell": "_AC", "facet": "_AF" }[averaged] name += { "cell": "", "facet": "_F", "vertex": "_V" }[entitytype] if num_points is not None: name += "_Q%d" % num_points return name def get_modified_terminal_element(mt): gd = mt.global_derivatives ld = mt.local_derivatives # Extract element from FormArguments and relevant GeometricQuantities if isinstance(mt.terminal, FormArgument): if gd and mt.reference_value: error("Global derivatives of reference values not defined.") elif ld and not mt.reference_value: error("Local derivatives of global values not defined.") element = mt.terminal.ufl_element() fc = mt.flat_component elif isinstance(mt.terminal, SpatialCoordinate): if mt.reference_value: error("Not expecting reference value of x.") if gd: error("Not expecting global derivatives of x.") element = mt.terminal.ufl_domain().ufl_coordinate_element() if not ld: fc = mt.flat_component else: # Actually the Jacobian expressed as reference_grad(x) fc = mt.flat_component # x-component assert len(mt.component) == 1 assert mt.component[0] == mt.flat_component elif isinstance(mt.terminal, Jacobian): if mt.reference_value: error("Not expecting reference value of J.") if gd: error("Not expecting global derivatives of J.") element = mt.terminal.ufl_domain().ufl_coordinate_element() # Translate component J[i,d] to x element context rgrad(x[i])[d] assert len(mt.component) == 2 fc, d = mt.component # x-component, derivative ld = tuple(sorted((d,) + ld)) else: return None assert not (mt.averaged and (ld or gd)) # Change derivatives format for table lookup #gdim = mt.terminal.ufl_domain().geometric_dimension() #global_derivatives = derivative_listing_to_counts(gd, gdim) # Change derivatives format for table lookup tdim = mt.terminal.ufl_domain().topological_dimension() local_derivatives = derivative_listing_to_counts(ld, tdim) return element, mt.averaged, local_derivatives, fc def build_element_tables(num_points, quadrature_rules, cell, integral_type, entitytype, modified_terminals, rtol=default_rtol, atol=default_atol): """Build the element tables needed for a list of modified terminals. 
Input: entitytype - str modified_terminals - ordered sequence of unique modified terminals FIXME: Document Output: tables - dict(name: table) mt_table_names - dict(ModifiedTerminal: name) """ mt_table_names = {} tables = {} table_origins = {} # Add to element tables analysis = {} for mt in modified_terminals: # FIXME: Use a namedtuple for res res = get_modified_terminal_element(mt) if res: analysis[mt] = res # Build element numbering using topological # ordering so subelements get priority from ffc.analysis import extract_sub_elements, sort_elements, _compute_element_numbers all_elements = [res[0] for res in analysis.values()] unique_elements = sort_elements(extract_sub_elements(all_elements)) element_numbers = _compute_element_numbers(unique_elements) def add_table(res): element, avg, local_derivatives, flat_component = res # Build name for this particular table element_number = element_numbers[element] name = generate_psi_table_name( num_points, element_number, avg, entitytype, local_derivatives, flat_component) # Extract the values of the table from ffc table format if name not in tables: tables[name] = get_ffc_table_values( quadrature_rules[num_points][0], cell, integral_type, element, avg, entitytype, local_derivatives, flat_component) # Track table origin for custom integrals: table_origins[name] = res return name for mt in modified_terminals: res = analysis.get(mt) if not res: continue element, avg, local_derivatives, flat_component = res # Generate tables for each subelement in topological ordering, # using same avg and local_derivatives, for each component. # We want the first table to be the innermost subelement so that's # the one the optimized tables get the name from and so that's # the one the table origins point to for custom integrals. # This results in some superfluous tables but those will be # removed before code generation and it's not believed to be # a bottleneck. for subelement in sort_elements(extract_sub_elements([element])): for fc in range(product(subelement.reference_value_shape())): subres = (subelement, avg, local_derivatives, fc) name_ignored = add_table(subres) # Generate table and store table name with modified terminal name = add_table(res) mt_table_names[mt] = name return tables, mt_table_names, table_origins def optimize_element_tables(tables, table_origins, compress_zeros, rtol=default_rtol, atol=default_atol): """Optimize tables and make unique set. Steps taken: - clamp values that are very close to -1, 0, +1 to those values - remove dofs from beginning and end of tables where values are all zero - for each modified terminal, provide the dof range that a given table corresponds to Terminology: name - str, name used in input arguments here table - numpy array of float values stripped_table - numpy array of float values with zeroes removed from each end of dofrange Input: tables - { name: table } table_origins - FIXME Output: unique_tables - { unique_name: stripped_table } unique_table_origins - FIXME """ used_names = sorted(tables) compressed_tables = {} table_ranges = {} table_dofmaps = {} table_original_num_dofs = {} for name in used_names: tbl = tables[name] # Clamp to selected small numbers if close, # (-1.0, -0.5, 0.0, 0.5 and +1.0) # (i.e. 
0.999999 -> 1.0 if within rtol/atol distance) tbl = clamp_table_small_numbers(tbl, rtol=rtol, atol=atol) # Store original dof dimension before compressing num_dofs = tbl.shape[2] # Strip contiguous zero blocks at the ends of all tables dofrange, dofmap, tbl = strip_table_zeros(tbl, compress_zeros, rtol=rtol, atol=atol) compressed_tables[name] = tbl table_ranges[name] = dofrange table_dofmaps[name] = dofmap table_original_num_dofs[name] = num_dofs # Build unique table mapping unique_tables_list, name_to_unique_index = build_unique_tables( compressed_tables, rtol=rtol, atol=atol) # Build mapping of constructed table names to unique names. # Picking first constructed name preserves some information # about the table origins although some names may be dropped. unique_names = {} for name in used_names: ui = name_to_unique_index[name] if ui not in unique_names: unique_names[ui] = name table_unames = { name: unique_names[name_to_unique_index[name]] for name in name_to_unique_index } # Build mapping from unique table name to the table itself unique_tables = {} for ui, tbl in enumerate(unique_tables_list): uname = unique_names[ui] unique_tables[uname] = tbl unique_table_origins = {} #for ui in range(len(unique_tables_list)): for ui in []: # FIXME uname = unique_names[ui] # Track table origins for runtime recomputation in custom integrals: name = "name of 'smallest' element we can use to compute this table" dofrange = "FIXME" # table_ranges[name] dofmap = "FIXME" # table_dofmaps[name] # FIXME: Make sure the "smallest" element is chosen (element, avg, derivative_counts, fc) = table_origins[name] unique_table_origins[uname] = table_origin_t(element, avg, derivative_counts, fc, dofrange, dofmap) return unique_tables, unique_table_origins, table_unames, table_ranges, table_dofmaps, table_original_num_dofs def is_zeros_table(table, rtol=default_rtol, atol=default_atol): return (product(table.shape) == 0 or numpy.allclose(table, numpy.zeros(table.shape), rtol=rtol, atol=atol)) def is_ones_table(table, rtol=default_rtol, atol=default_atol): return numpy.allclose(table, numpy.ones(table.shape), rtol=rtol, atol=atol) def is_quadrature_table(table, rtol=default_rtol, atol=default_atol): num_entities, num_points, num_dofs = table.shape Id = numpy.eye(num_points) return (num_points == num_dofs and all(numpy.allclose(table[i, :, :], Id, rtol=rtol, atol=atol) for i in range(num_entities))) def is_piecewise_table(table, rtol=default_rtol, atol=default_atol): return all(numpy.allclose(table[:, 0, :], table[:, i, :], rtol=rtol, atol=atol) for i in range(1, table.shape[1])) def is_uniform_table(table, rtol=default_rtol, atol=default_atol): return all(numpy.allclose(table[0, :, :], table[i, :, :], rtol=rtol, atol=atol) for i in range(1, table.shape[0])) def analyse_table_type(table, rtol=default_rtol, atol=default_atol): num_entities, num_points, num_dofs = table.shape if is_zeros_table(table, rtol=rtol, atol=atol): # Table is empty or all values are 0.0 ttype = "zeros" elif is_ones_table(table, rtol=rtol, atol=atol): # All values are 1.0 ttype = "ones" elif is_quadrature_table(table, rtol=rtol, atol=atol): # Identity matrix mapping points to dofs (separately on each entity) ttype = "quadrature" else: # Equal for all points on a given entity piecewise = is_piecewise_table(table, rtol=rtol, atol=atol) # Equal for all entities uniform = is_uniform_table(table, rtol=rtol, atol=atol) if piecewise and uniform: # Constant for all points and all entities ttype = "fixed" elif piecewise: # Constant for all points on each 
entity separately ttype = "piecewise" elif uniform: # Equal on all entities ttype = "uniform" else: # Varying over points and entities ttype = "varying" return ttype def analyse_table_types(unique_tables, rtol=default_rtol, atol=default_atol): return { uname: analyse_table_type(table, rtol=rtol, atol=atol) for uname, table in unique_tables.items() } def build_optimized_tables(num_points, quadrature_rules, cell, integral_type, entitytype, modified_terminals, existing_tables, compress_zeros, rtol=default_rtol, atol=default_atol): # Build tables needed by all modified terminals tables, mt_table_names, table_origins = \ build_element_tables(num_points, quadrature_rules, cell, integral_type, entitytype, modified_terminals, rtol=rtol, atol=atol) # Optimize tables and get table name and dofrange for each modified terminal unique_tables, unique_table_origins, table_unames, table_ranges, table_dofmaps, table_original_num_dofs = \ optimize_element_tables(tables, table_origins, compress_zeros, rtol=rtol, atol=atol) # Get num_dofs for all tables before they can be deleted later unique_table_num_dofs = { uname: tbl.shape[2] for uname, tbl in unique_tables.items() } # Analyze tables for properties useful for optimization unique_table_ttypes = analyse_table_types(unique_tables, rtol=rtol, atol=atol) # Compress tables that are constant along num_entities or num_points for uname, tabletype in unique_table_ttypes.items(): if tabletype in piecewise_ttypes: # Reduce table to dimension 1 along num_points axis in generated code unique_tables[uname] = unique_tables[uname][:,0:1,:] if tabletype in uniform_ttypes: # Reduce table to dimension 1 along num_entities axis in generated code unique_tables[uname] = unique_tables[uname][0:1,:,:] # Delete tables not referenced by modified terminals used_unames = set(table_unames[name] for name in mt_table_names.values()) unused_unames = set(unique_tables.keys()) - used_unames for uname in unused_unames: del unique_table_ttypes[uname] del unique_tables[uname] # Change tables to point to existing optimized tables # (i.e. tables from other contexts that have been compressed to look the same) name_map = {} existing_names = sorted(existing_tables) for uname in sorted(unique_tables): utbl = unique_tables[uname] for i, ename in enumerate(existing_names): etbl = existing_tables[ename] if equal_tables(utbl, etbl, rtol=rtol, atol=atol): # Setup table name mapping name_map[uname] = ename # Don't visit this table again (just to avoid the processing) existing_names.pop(i) break # Replace unique table names for uname, ename in name_map.items(): unique_tables[ename] = existing_tables[ename] del unique_tables[uname] unique_table_ttypes[ename] = unique_table_ttypes[uname] del unique_table_ttypes[uname] # Build mapping from modified terminal to unique table with metadata # { mt: (unique name, # (table dof range begin, table dof range end), # [top parent element dof index for each local index], # ttype, original_element_dim) } mt_unique_table_reference = {} for mt, name in list(mt_table_names.items()): # Get metadata for the original table (name is not the unique name!) 
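        # Illustrative values (hypothetical element, not from the source): if the
        # original table had 6 dofs and only columns 2..4 survived zero-stripping,
        # then table_ranges[name] == (2, 5), table_dofmaps[name] == (2, 3, 4) and
        # table_original_num_dofs[name] == 6 -- the metadata read out just below.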
dofrange = table_ranges[name] dofmap = table_dofmaps[name] original_dim = table_original_num_dofs[name] # Map name -> uname uname = table_unames[name] # Map uname -> ename ename = name_map.get(uname, uname) # Some more metadata stored under the ename ttype = unique_table_ttypes[ename] # Add offset to dofmap and dofrange for restricted terminals if mt.restriction and isinstance(mt.terminal, FormArgument): # offset = 0 or number of dofs before table optimization offset = ufc_restriction_offset(mt.restriction, original_dim) (b, e) = dofrange dofrange = (b + offset, e + offset) dofmap = tuple(i + offset for i in dofmap) # Store reference to unique table for this mt mt_unique_table_reference[mt] = unique_table_reference_t( ename, unique_tables[ename], dofrange, dofmap, original_dim, ttype, ttype in piecewise_ttypes, ttype in uniform_ttypes) return unique_tables, unique_table_ttypes, unique_table_num_dofs, mt_unique_table_reference ffc-2019.1.0.post0/ffc/uflacs/integralgenerator.py000066400000000000000000001466101346026536000216640ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see """Controlling algorithm for building the tabulate_tensor source structure from factorized representation.""" from collections import defaultdict import itertools from ufl import product from ufl.classes import ConstantValue, Condition from ufl.measure import custom_integral_types, point_integral_types from ufl.utils.indexflattening import shape_to_strides from ffc.log import error, warning from ffc.uflacs.build_uflacs_ir import get_common_block_data from ffc.uflacs.elementtables import piecewise_ttypes from ffc.uflacs.language.cnodes import pad_innermost_dim, pad_dim, MemZero, MemZeroRange class IntegralGenerator(object): def __init__(self, ir, backend, precision): # Store ir self.ir = ir # Formatting precision self.precision = precision # Backend specific plugin with attributes # - language: for translating ufl operators to target language # - symbols: for translating ufl operators to target language # - definitions: for defining backend specific variables # - access: for accessing backend specific variables self.backend = backend # Set of operator names code has been generated for, # used in the end for selecting necessary includes self._ufl_names = set() # Initialize lookup tables for variable scopes self.init_scopes() # Cache of reusable blocks contributing to A self.shared_blocks = {} # Block contributions collected during generation to be added to A at the end self.finalization_blocks = defaultdict(list) # Set of counters used for assigning names to intermediate variables # TODO: Should this be part of the backend symbols? Doesn't really matter now. 
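        # (The counters below only feed new_temp_symbol, which emits running names
        # such as "BF0", "BF1", "TF0"; the concrete names here are illustrative.)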
self.symbol_counters = defaultdict(int) # Whether pre-integration tables should be inlined - or added as static arrays self._inline_tables = None # Whether to unroll the tabulate_tensor assignments self._unroll_tables = None # The maximum size of any preintegration table in order to perform unrolling self.max_unroll_table_size = 1024 def get_includes(self): "Return list of include statements needed to support generated code." includes = set() # Get std::fill used by MemZero includes.add("#include ") # For controlling floating point environment #includes.add("#include ") # For intel intrinsics and controlling floating point environment #includes.add("#include ") cmath_names = set(( "abs", "sign", "pow", "sqrt", "exp", "ln", "cos", "sin", "tan", "acos", "asin", "atan", "atan_2", "cosh", "sinh", "tanh", "acosh", "asinh", "atanh", "erf", "erfc", )) boost_math_names = set(( "bessel_j", "bessel_y", "bessel_i", "bessel_k", )) # Only return the necessary headers if cmath_names & self._ufl_names: includes.add("#include ") if boost_math_names & self._ufl_names: includes.add("#include ") return sorted(includes) def init_scopes(self): "Initialize variable scope dicts." # Reset variables, separate sets for quadrature loop self.scopes = { num_points: {} for num_points in self.ir["all_num_points"] } self.scopes[None] = {} def set_var(self, num_points, v, vaccess): """"Set a new variable in variable scope dicts. Scope is determined by num_points which identifies the quadrature loop scope or None if outside quadrature loops. v is the ufl expression and vaccess is the CNodes expression to access the value in the code. """ self.scopes[num_points][v] = vaccess def has_var(self, num_points, v): """"Check if variable exists in variable scope dicts. Return True if ufl expression v exists in the num_points scope. NB! Does not fall back to piecewise scope. """ return v in self.scopes[num_points] def get_var(self, num_points, v): """"Lookup ufl expression v in variable scope dicts. Scope is determined by num_points which identifies the quadrature loop scope or None if outside quadrature loops. If v is not found in quadrature loop scope, the piecewise scope (None) is checked. Returns the CNodes expression to access the value in the code. """ if v._ufl_is_literal_: return self.backend.ufl_to_language(v) f = self.scopes[num_points].get(v) if f is None: f = self.scopes[None][v] return f def new_temp_symbol(self, basename): "Create a new code symbol named basename + running counter." L = self.backend.language name = "%s%d" % (basename, self.symbol_counters[basename]) self.symbol_counters[basename] += 1 return L.Symbol(name) def get_temp_symbol(self, tempname, key): key = (tempname,) + key s = self.shared_blocks.get(key) defined = s is not None if not defined: s = self.new_temp_symbol(tempname) self.shared_blocks[key] = s return s, defined def generate(self): """Generate entire tabulate_tensor body. Assumes that the code returned from here will be wrapped in a context that matches a suitable version of the UFC tabulate_tensor signatures. """ L = self.backend.language # Assert that scopes are empty: expecting this to be called only once assert not any(d for d in self.scopes.values()) parts = [] # TODO: Is this needed? Find a test case to check. 
parts += [ #L.VerbatimStatement("_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);"), #L.VerbatimStatement("std::fesetenv(FE_DFL_DISABLE_SSE_DENORMS_ENV);"), ] # Generate the tables of quadrature points and weights parts += self.generate_quadrature_tables() # Generate the tables of basis function values and preintegrated blocks parts += self.generate_element_tables() # Generate code to compute piecewise constant scalar factors parts += self.generate_unstructured_piecewise_partition() # Loop generation code will produce parts to go before quadloops, # to define the quadloops, and to go after the quadloops all_preparts = [] all_quadparts = [] all_postparts = [] # Go through each relevant quadrature loop if self.ir["integral_type"] in custom_integral_types: preparts, quadparts, postparts = \ self.generate_runtime_quadrature_loop() all_preparts += preparts all_quadparts += quadparts all_postparts += postparts else: for num_points in self.ir["all_num_points"]: # Generate code to integrate reusable blocks of final element tensor preparts, quadparts, postparts = \ self.generate_quadrature_loop(num_points) all_preparts += preparts all_quadparts += quadparts all_postparts += postparts # Generate code to finish computing reusable blocks outside quadloop preparts, quadparts, postparts = \ self.generate_dofblock_partition(None) all_preparts += preparts all_quadparts += quadparts all_postparts += postparts # Generate code to fill in A all_finalizeparts = [] # Generate code to set A = 0 #all_finalizeparts += self.generate_tensor_reset() # Generate code to compute piecewise constant scalar factors # and set A at corresponding nonzero components all_finalizeparts += self.generate_preintegrated_dofblock_partition() # Generate code to add reusable blocks B* to element tensor A all_finalizeparts += self.generate_copyout_statements() # Collect parts before, during, and after quadrature loops parts += all_preparts parts += all_quadparts parts += all_postparts parts += all_finalizeparts return L.StatementList(parts) def generate_quadrature_tables(self): "Generate static tables of quadrature points and weights." 
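        # Rough sketch of the emitted declarations, with illustrative names and sizes
        # only (the real symbol names come from self.backend.symbols):
        #   static const double weights4[4] = { ... };
        #   static const double points4[8] = { ... };  // flattened num_points * gdim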
L = self.backend.language parts = [] # No quadrature tables for custom (given argument) # or point (evaluation in single vertex) skip = custom_integral_types + point_integral_types if self.ir["integral_type"] in skip: return parts alignas = self.ir["params"]["alignas"] padlen = self.ir["params"]["padlen"] # Loop over quadrature rules for num_points in self.ir["all_num_points"]: varying_ir = self.ir["varying_irs"][num_points] points, weights = self.ir["quadrature_rules"][num_points] assert num_points == len(weights) assert num_points == points.shape[0] # Generate quadrature weights array if varying_ir["need_weights"]: wsym = self.backend.symbols.weights_table(num_points) parts += [L.ArrayDecl("static const double", wsym, num_points, weights, alignas=alignas)] # Generate quadrature points array N = product(points.shape) if varying_ir["need_points"] and N: # Flatten array: (TODO: avoid flattening here, it makes padding harder) flattened_points = points.reshape(N) psym = self.backend.symbols.points_table(num_points) parts += [L.ArrayDecl("static const double", psym, N, flattened_points, alignas=alignas)] # Add leading comment if there are any tables parts = L.commented_code_list(parts, "Quadrature rules") return parts def generate_element_tables(self): """Generate static tables with precomputed element basis function values in quadrature points.""" L = self.backend.language parts = [] tables = self.ir["unique_tables"] table_types = self.ir["unique_table_types"] alignas = self.ir["params"]["alignas"] padlen = self.ir["params"]["padlen"] if self.ir["integral_type"] in custom_integral_types: # Define only piecewise tables table_names = [name for name in sorted(tables) if table_types[name] in piecewise_ttypes] else: # Define all tables table_names = sorted(tables) # Check whether it is ok to unroll all assignments involving the tables self._inline_tables = self.ir["integral_type"] == "cell" self._unroll_tables = True for name in table_names: table = tables[name] if table.size > self.max_unroll_table_size: self._inline_tables = False self._unroll_tables = False break for name in table_names: table = tables[name] # Don't pad preintegrated tables if name[0] == "P": p = 1 else: p = padlen # Skip tables that are inlined in code generation if self._inline_tables and name[:2] == "PI": continue decl = L.ArrayDecl("static const double", name, table.shape, table, alignas=alignas, padlen=p) parts += [decl] # Add leading comment if there are any tables parts = L.commented_code_list(parts, [ "Precomputed values of basis functions and precomputations", "FE* dimensions: [entities][points][dofs]", "PI* dimensions: [entities][dofs][dofs] or [entities][dofs]", "PM* dimensions: [entities][dofs][dofs]", ]) return parts def generate_tensor_reset(self): "Generate statements for resetting the element tensor to zero." L = self.backend.language A = self.backend.symbols.element_tensor() A_shape = self.ir["tensor_shape"] A_size = product(A_shape) parts = [ L.Comment("Reset element tensor"), L.MemZero(A, A_size), ] return parts def generate_quadrature_loop(self, num_points): "Generate quadrature loop with for this num_points." 
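        # Sketch of the generated structure (assuming num_points > 1; for a single
        # point the body is wrapped in a plain scope instead of a loop):
        #   for (int iq = 0; iq < num_points; ++iq)
        #   {
        #     <varying partition setup>  <block accumulation>
        #   }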
L = self.backend.language # Generate unstructured varying partition body = self.generate_unstructured_varying_partition(num_points) body = L.commented_code_list(body, "Quadrature loop body setup (num_points={0})".format(num_points)) # Generate dofblock parts, some of this # will be placed before or after quadloop preparts, quadparts, postparts = \ self.generate_dofblock_partition(num_points) body += quadparts # Wrap body in loop or scope if not body: # Could happen for integral with everything zero and optimized away quadparts = [] elif num_points == 1: # For now wrapping body in Scope to avoid thinking about scoping issues quadparts = L.commented_code_list(L.Scope(body), "Only 1 quadrature point, no loop") else: # Regular case: define quadrature loop if num_points == 1: iq = 0 else: iq = self.backend.symbols.quadrature_loop_index() quadparts = [L.ForRange(iq, 0, num_points, body=body)] return preparts, quadparts, postparts def generate_runtime_quadrature_loop(self): "Generate quadrature loop for custom integrals, with physical points given runtime." L = self.backend.language assert self.ir["integral_type"] in custom_integral_types num_points = self.ir["fake_num_points"] chunk_size = self.ir["params"]["chunk_size"] gdim = self.ir["geometric_dimension"] tdim = self.ir["topological_dimension"] alignas = self.ir["params"]["alignas"] #padlen = self.ir["params"]["padlen"] tables = self.ir["unique_tables"] table_types = self.ir["unique_table_types"] #table_origins = self.ir["unique_table_origins"] # FIXME # Generate unstructured varying partition body = self.generate_unstructured_varying_partition(num_points) body = L.commented_code_list(body, ["Run-time quadrature loop body setup", "(chunk_size={0}, analysis_num_points={1})".format( chunk_size, num_points)]) # Generate dofblock parts, some of this # will be placed before or after quadloop preparts, quadparts, postparts = \ self.generate_dofblock_partition(num_points) body += quadparts # Wrap body in loop if not body: # Could happen for integral with everything zero and optimized away quadparts = [] else: rule_parts = [] # Define two-level quadrature loop; over chunks then over points in chunk iq_chunk = L.Symbol("iq_chunk") np = self.backend.symbols.num_custom_quadrature_points() num_point_blocks = (np + chunk_size - 1) / chunk_size iq = self.backend.symbols.quadrature_loop_index() # Not assuming runtime size to be multiple by chunk size num_points_in_block = L.Symbol("num_points_in_chunk") decl = L.VariableDecl("const int", num_points_in_block, L.Call("min", (chunk_size, np - iq_chunk * chunk_size))) rule_parts.append(decl) iq_body = L.ForRange(iq, 0, num_points_in_block, body=body) ### Preparations for quadrature rules # varying_ir = self.ir["varying_irs"][num_points] # Copy quadrature weights for this chunk if varying_ir["need_weights"]: cwsym = self.backend.symbols.custom_quadrature_weights() wsym = self.backend.symbols.custom_weights_table() rule_parts += [ L.ArrayDecl("double", wsym, chunk_size, 0, alignas=alignas), L.ForRange(iq, 0, num_points_in_block, body=L.Assign(wsym[iq], cwsym[chunk_size*iq_chunk + iq])), ] # Copy quadrature points for this chunk if varying_ir["need_points"]: cpsym = self.backend.symbols.custom_quadrature_points() psym = self.backend.symbols.custom_points_table() rule_parts += [ L.ArrayDecl("double", psym, chunk_size * gdim, 0, alignas=alignas), L.ForRange(iq, 0, num_points_in_block, body=[L.Assign(psym[iq*gdim + i], cpsym[chunk_size*iq_chunk*gdim + iq*gdim + i]) for i in range(gdim)]) ] # Add leading comment if 
there are any tables rule_parts = L.commented_code_list(rule_parts, "Quadrature weights and points") ### Preparations for element tables table_parts = [] # Only declare non-piecewise tables, computed inside chunk loop non_piecewise_tables = [name for name in sorted(tables) if table_types[name] not in piecewise_ttypes] for name in non_piecewise_tables: table = tables[name] decl = L.ArrayDecl("double", name, (1, chunk_size, table.shape[2]), 0, alignas=alignas) # padlen=padlen) table_parts += [decl] table_parts += [L.Comment("FIXME: Fill element tables here")] #table_origins ### Gather all in chunk loop chunk_body = rule_parts + table_parts + [iq_body] quadparts = [L.ForRange(iq_chunk, 0, num_point_blocks, body=chunk_body)] return preparts, quadparts, postparts def generate_unstructured_piecewise_partition(self): L = self.backend.language num_points = None expr_ir = self.ir["piecewise_ir"] name = "sp" arraysymbol = L.Symbol(name) parts = self.generate_partition(arraysymbol, expr_ir["V"], expr_ir["V_active"], expr_ir["V_mts"], expr_ir["mt_tabledata"], num_points) parts = L.commented_code_list(parts, "Unstructured piecewise computations") return parts def generate_unstructured_varying_partition(self, num_points): L = self.backend.language expr_ir = self.ir["varying_irs"][num_points] name = "sv" arraysymbol = L.Symbol("%s%d" % (name, num_points)) parts = self.generate_partition(arraysymbol, expr_ir["V"], expr_ir["V_varying"], expr_ir["V_mts"], expr_ir["mt_tabledata"], num_points) parts = L.commented_code_list(parts, "Unstructured varying computations for num_points=%d" % (num_points,)) return parts def generate_partition(self, symbol, V, V_active, V_mts, mt_tabledata, num_points): L = self.backend.language definitions = [] intermediates = [] active_indices = [i for i, p in enumerate(V_active) if p] for i in active_indices: v = V[i] mt = V_mts[i] if v._ufl_is_literal_: vaccess = self.backend.ufl_to_language(v) elif mt is not None: # All finite element based terminals has table data, as well # as some but not all of the symbolic geometric terminals tabledata = mt_tabledata.get(mt) # Backend specific modified terminal translation vaccess = self.backend.access(mt.terminal, mt, tabledata, num_points) vdef = self.backend.definitions(mt.terminal, mt, tabledata, num_points, vaccess) # Store definitions of terminals in list assert isinstance(vdef, list) definitions.extend(vdef) else: # Get previously visited operands vops = [self.get_var(num_points, op) for op in v.ufl_operands] # Mapping UFL operator to target language self._ufl_names.add(v._ufl_handler_name_) vexpr = self.backend.ufl_to_language(v, *vops) # TODO: Let optimized ir provide mapping of vertex indices to # variable indices, marking which subexpressions to store in variables # and in what order: #j = variable_id[i] # Currently instead creating a new intermediate for # each subexpression except boolean conditions if isinstance(v, Condition): # Inline the conditions x < y, condition values # 'x' and 'y' may still be stored in intermediates. # This removes the need to handle boolean intermediate variables. # With tensor-valued conditionals it may not be optimal but we # let the C++ compiler take responsibility for optimizing those cases. j = None elif any(op._ufl_is_literal_ for op in v.ufl_operands): # Skip intermediates for e.g. 
-2.0*x, # resulting in lines like z = y + -2.0*x j = None else: j = len(intermediates) if j is not None: # Record assignment of vexpr to intermediate variable if self.ir["params"]["use_symbol_array"]: vaccess = symbol[j] intermediates.append(L.Assign(vaccess, vexpr)) else: vaccess = L.Symbol("%s_%d" % (symbol.name, j)) intermediates.append(L.VariableDecl("const double", vaccess, vexpr)) else: # Access the inlined expression vaccess = vexpr # Store access node for future reference self.set_var(num_points, v, vaccess) # Join terminal computation, array of intermediate expressions, # and intermediate computations parts = [] if definitions: parts += definitions if intermediates: if self.ir["params"]["use_symbol_array"]: alignas = self.ir["params"]["alignas"] parts += [L.ArrayDecl("double", symbol, len(intermediates), alignas=alignas)] parts += intermediates return parts def generate_dofblock_partition(self, num_points): L = self.backend.language if num_points is None: # NB! None meaning piecewise partition, not custom integral block_contributions = self.ir["piecewise_ir"]["block_contributions"] else: block_contributions = self.ir["varying_irs"][num_points]["block_contributions"] preparts = [] quadparts = [] postparts = [] blocks = [(blockmap, blockdata) for blockmap, contributions in sorted(block_contributions.items()) for blockdata in contributions if blockdata.block_mode != "preintegrated"] for blockmap, blockdata in blocks: # Get symbol for already defined block B if it exists common_block_data = get_common_block_data(blockdata) B = self.shared_blocks.get(common_block_data) if B is None: # Define code for block depending on mode B, block_preparts, block_quadparts, block_postparts = \ self.generate_block_parts(num_points, blockmap, blockdata) # Add definitions preparts.extend(block_preparts) # Add computations quadparts.extend(block_quadparts) # Add finalization postparts.extend(block_postparts) # Store reference for reuse self.shared_blocks[common_block_data] = B # Add A[blockmap] += B[...] to finalization self.finalization_blocks[blockmap].append(B) return preparts, quadparts, postparts def get_entities(self, blockdata): L = self.backend.language if self.ir["integral_type"] == "interior_facet": # Get the facet entities entities = [] for r in blockdata.restrictions: if r is None: entities.append(0) else: entities.append(self.backend.symbols.entity(self.ir["entitytype"], r)) if blockdata.transposed: return (entities[1], entities[0]) else: return tuple(entities) else: # Get the current cell or facet entity if blockdata.is_uniform: # uniform, i.e. constant across facets entity = L.LiteralInt(0) else: entity = self.backend.symbols.entity(self.ir["entitytype"], None) return (entity,) def get_arg_factors(self, blockdata, block_rank, num_points, iq, indices): L = self.backend.language arg_factors = [] for i in range(block_rank): mad = blockdata.ma_data[i] td = mad.tabledata if td.is_piecewise: scope = self.ir["piecewise_ir"]["modified_arguments"] else: scope = self.ir["varying_irs"][num_points]["modified_arguments"] mt = scope[mad.ma_index] # Translate modified terminal to code # TODO: Move element table access out of backend? # Not using self.backend.access.argument() here # now because it assumes too much about indices. 
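            # In the generated C++ this typically becomes a plain table lookup,
            # e.g. something like FE9_C0_Q4[0][iq][j] for argument index j
            # (table name and index symbols here are illustrative only).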
table = self.backend.symbols.element_table(td, self.ir["entitytype"], mt.restriction) assert td.ttype != "zeros" if td.ttype == "ones": arg_factor = L.LiteralFloat(1.0) elif td.ttype == "quadrature": # TODO: Revisit all quadrature ttype checks arg_factor = table[iq] else: # Assuming B sparsity follows element table sparsity arg_factor = table[indices[i]] arg_factors.append(arg_factor) return arg_factors def generate_block_parts(self, num_points, blockmap, blockdata): """Generate and return code parts for a given block. Returns parts occuring before, inside, and after the quadrature loop identified by num_points. Should be called with num_points=None for quadloop-independent blocks. """ L = self.backend.language # The parts to return preparts = [] quadparts = [] postparts = [] # TODO: Define names in backend symbols? #tempnames = self.backend.symbols.block_temp_names #blocknames = self.backend.symbols.block_names tempnames = { #"preintegrated": "TI", "premultiplied": "TM", "partial": "TP", "full": "TF", "safe": "TS", "quadrature": "TQ", } blocknames = { #"preintegrated": "BI", #"premultiplied": "BM", #"partial": "BP", "full": "BF", "safe": "BS", "quadrature": "BQ", } fwtempname = "fw" tempname = tempnames.get(blockdata.block_mode) alignas = self.ir["params"]["alignas"] padlen = self.ir["params"]["padlen"] block_rank = len(blockmap) blockdims = tuple(len(dofmap) for dofmap in blockmap) padded_blockdims = pad_innermost_dim(blockdims, padlen) ttypes = blockdata.ttypes if "zeros" in ttypes: error("Not expecting zero arguments to be left in dofblock generation.") if num_points is None: iq = None elif num_points == 1: iq = 0 else: iq = self.backend.symbols.quadrature_loop_index() # Override dof index with quadrature loop index for arguments with # quadrature element, to index B like B[iq*num_dofs + iq] arg_indices = tuple(self.backend.symbols.argument_loop_index(i) for i in range(block_rank)) B_indices = [] for i in range(block_rank): if ttypes[i] == "quadrature": B_indices.append(iq) else: B_indices.append(arg_indices[i]) B_indices = tuple(B_indices) # Define unique block symbol blockname = blocknames.get(blockdata.block_mode) if blockname: B = self.new_temp_symbol(blockname) # Add initialization of this block to parts # For all modes, block definition occurs before quadloop preparts.append(L.ArrayDecl("double", B, blockdims, 0, alignas=alignas, padlen=padlen)) # Get factor expression if blockdata.factor_is_piecewise: v = self.ir["piecewise_ir"]["V"][blockdata.factor_index] else: v = self.ir["varying_irs"][num_points]["V"][blockdata.factor_index] f = self.get_var(num_points, v) # Quadrature weight was removed in representation, add it back now if num_points is None: weight = L.LiteralFloat(1.0) elif self.ir["integral_type"] in custom_integral_types: weights = self.backend.symbols.custom_weights_table() weight = weights[iq] else: weights = self.backend.symbols.weights_table(num_points) weight = weights[iq] # Define fw = f * weight if blockdata.block_mode in ("safe", "full", "partial"): assert not blockdata.transposed, "Not handled yet" # Fetch code to access modified arguments arg_factors = self.get_arg_factors(blockdata, block_rank, num_points, iq, B_indices) fw_rhs = L.float_product([f, weight]) if not isinstance(fw_rhs, L.Product): fw = fw_rhs else: # Define and cache scalar temp variable key = (num_points, blockdata.factor_index, blockdata.factor_is_piecewise) fw, defined = self.get_temp_symbol(fwtempname, key) if not defined: quadparts.append(L.VariableDecl("const double", fw, fw_rhs)) # 
Plan for vectorization of fw computations over iq: # 1) Define fw as arrays e.g. "double fw0[nq];" outside quadloop # 2) Access as fw0[iq] of course # 3) Split quadrature loops, one for fw computation and one for blocks # 4) Pad quadrature rule with 0 weights and last point # Plan for vectorization of coefficient evaluation over iq: # 1) Define w0_c1 etc as arrays e.g. "double w0_c1[nq] = {};" outside quadloop # 2) Access as w0_c1[iq] of course # 3) Splitquadrature loops, coefficients before fw computation # 4) Possibly swap loops over iq and ic: # for(ic) for(iq) w0_c1[iq] = w[0][ic] * FE[iq][ic]; if blockdata.block_mode == "safe": # Naively accumulate integrand for this block in the innermost loop assert not blockdata.transposed B_rhs = L.float_product([fw] + arg_factors) body = L.AssignAdd(B[B_indices], B_rhs) # NB! += not = for i in reversed(range(block_rank)): body = L.ForRange(B_indices[i], 0, padded_blockdims[i], body=body) quadparts += [body] # Define rhs expression for A[blockmap[arg_indices]] += A_rhs A_rhs = B[arg_indices] elif blockdata.block_mode == "full": assert not blockdata.transposed, "Not handled yet" if block_rank < 2: # Multiply collected factors B_rhs = L.float_product([fw] + arg_factors) else: # TODO: Pick arg with smallest dimension, or pick # based on global optimization to reuse more blocks i = 0 # Index selected for precomputation j = 1 - i P_index = B_indices[i] key = (num_points, blockdata.factor_index, blockdata.factor_is_piecewise, arg_factors[i].ce_format(self.precision)) P, defined = self.get_temp_symbol(tempname, key) if not defined: # TODO: If FE table is varying and only used in contexts # where it's multiplied by weight, we can premultiply it! # Then this would become P = f * preweighted_FE_table[:]. # Define and compute intermediate value # P[:] = (weight * f) * args[i][:] # inside quadrature loop P_dim = blockdims[i] quadparts.append(L.ArrayDecl("double", P, P_dim, None, alignas=alignas, padlen=padlen)) P_rhs = L.float_product([fw, arg_factors[i]]) body = L.Assign(P[P_index], P_rhs) #if ttypes[i] != "quadrature": # FIXME: What does this mean here? vectorize = self.ir["params"]["vectorize"] body = L.ForRange(P_index, 0, P_dim, body=body, vectorize=vectorize) quadparts.append(body) B_rhs = P[P_index] * arg_factors[j] # Add result to block inside quadloop body = L.AssignAdd(B[B_indices], B_rhs) # NB! += not = for i in reversed(range(block_rank)): # Vectorize only the innermost loop vectorize = self.ir["params"]["vectorize"] and (i == block_rank - 1) if ttypes[i] != "quadrature": body = L.ForRange(B_indices[i], 0, padded_blockdims[i], body=body, vectorize=vectorize) quadparts += [body] # Define rhs expression for A[blockmap[arg_indices]] += A_rhs A_rhs = B[arg_indices] elif blockdata.block_mode == "partial": # TODO: To handle transpose here, must add back intermediate block B assert not blockdata.transposed, "Not handled yet" # Get indices and dimensions right here... 
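            # Partial mode in a nutshell (informal sketch of the code below): inside
            # the quadrature loop accumulate P[j] += w_q * f(x_q) * phi_j(x_q) for the
            # non-piecewise argument, then outside the loop the block contribution is
            # phi_piecewise_i * P[j], added into A via the finalization blocks.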
assert block_rank == 2 i = blockdata.piecewise_ma_index not_piecewise_index = 1 - i P_index = arg_indices[not_piecewise_index] key = (num_points, blockdata.factor_index, blockdata.factor_is_piecewise, arg_factors[not_piecewise_index].ce_format(self.precision)) P, defined = self.get_temp_symbol(tempname, key) if not defined: # Declare P table in preparts P_dim = blockdims[not_piecewise_index] preparts.append(L.ArrayDecl("double", P, P_dim, 0, alignas=alignas, padlen=padlen)) # Multiply collected factors P_rhs = L.float_product([fw, arg_factors[not_piecewise_index]]) # Accumulate P += weight * f * args in quadrature loop body = L.AssignAdd(P[P_index], P_rhs) body = L.ForRange(P_index, 0, pad_dim(P_dim, padlen), body=body) quadparts.append(body) # Define B = B_rhs = piecewise_argument[:] * P[:], where P[:] = sum_q weight * f * other_argument[:] B_rhs = arg_factors[i] * P[P_index] # Define rhs expression for A[blockmap[arg_indices]] += A_rhs A_rhs = B_rhs elif blockdata.block_mode in ("premultiplied", "preintegrated"): P_entity_indices = self.get_entities(blockdata) if blockdata.transposed: P_block_indices = (arg_indices[1], arg_indices[0]) else: P_block_indices = arg_indices P_ii = P_entity_indices + P_block_indices if blockdata.block_mode == "preintegrated": # Preintegrated should never get into quadloops assert num_points is None # Define B = B_rhs = f * PI where PI = sum_q weight * u * v PI = L.Symbol(blockdata.name)[P_ii] B_rhs = L.float_product([f, PI]) elif blockdata.block_mode == "premultiplied": key = (num_points, blockdata.factor_index, blockdata.factor_is_piecewise) FI, defined = self.get_temp_symbol(tempname, key) if not defined: # Declare FI = 0 before quadloop preparts += [L.VariableDecl("double", FI, 0)] # Accumulate FI += weight * f in quadparts quadparts += [L.AssignAdd(FI, L.float_product([weight, f]))] # Define B_rhs = FI * PM where FI = sum_q weight*f, and PM = u * v PM = L.Symbol(blockdata.name)[P_ii] B_rhs = L.float_product([FI, PM]) # Define rhs expression for A[blockmap[arg_indices]] += A_rhs A_rhs = B_rhs return A_rhs, preparts, quadparts, postparts def generate_preintegrated_dofblock_partition(self): # FIXME: Generalize this to unrolling all A[] += ... loops, or all loops with noncontiguous DM?? L = self.backend.language block_contributions = self.ir["piecewise_ir"]["block_contributions"] blocks = [(blockmap, blockdata) for blockmap, contributions in sorted(block_contributions.items()) for blockdata in contributions if blockdata.block_mode == "preintegrated"] # Get symbol, dimensions, and loop index symbols for A A = self.backend.symbols.element_tensor() A_shape = self.ir["tensor_shape"] A_size = product(A_shape) A_strides = shape_to_strides(A_shape) # List for unrolled assignments A_values = [0.0] * A_size # List of statements when not unrolling parts = [] for block_id, (blockmap, blockdata) in enumerate(blocks): # Accumulate A[blockmap[...]] += f*PI[...] 
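            # Two output styles are produced below; the concrete names and numbers are
            # illustrative only, not taken from any particular generated file:
            #   unrolled:   A[7] = sp[3] * 0.0833333;            (table value inlined)
            #   loop form:  A[i*3 + j] += sp[3] * PI2[0][pi][pj];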
# Get table for inlining tables = self.ir["unique_tables"] table = tables[blockdata.name] # Get factor expression v = self.ir["piecewise_ir"]["V"][blockdata.factor_index] f = self.get_var(None, v) # Define rhs expression for A[blockmap[arg_indices]] += A_rhs # A_rhs = f * PI where PI = sum_q weight * u * v PI = L.Symbol(blockdata.name) block_rank = len(blockmap) # Indices used to index A A_index_symbols = tuple(self.backend.symbols.argument_loop_index(i) for i in range(block_rank)) # Indices used to index PI P_index_symbols = tuple(self.backend.symbols.argument_loop_index(i) for i in range(block_rank, 2*block_rank)) # Define indices into preintegrated block P_entity_indices = self.get_entities(blockdata) if self._inline_tables: assert P_entity_indices == (L.LiteralInt(0),) assert table.shape[0] == 1 # Unroll loop blockshape = [len(DM) for DM in blockmap] blockrange = [range(d) for d in blockshape] if self._unroll_tables: # Generate unrolled assignments for the current block for ii in itertools.product(*blockrange): A_ii = sum(A_strides[i] * blockmap[i][ii[i]] for i in range(len(ii))) if blockdata.transposed: P_arg_indices = (ii[1], ii[0]) else: P_arg_indices = ii if self._inline_tables: # Extract float value of PI[P_ii] Pval = table[0] # always entity 0 for i in P_arg_indices: Pval = Pval[i] A_rhs = Pval * f else: # Index the static preintegrated table: P_ii = P_entity_indices + P_arg_indices A_rhs = f * PI[P_ii] A_values[A_ii] = A_values[A_ii] + A_rhs else: # Don't unroll, generate assignment loops # TODO: Allow mixed unrolled/non unrolled assignments? # Make sure that we can use PI as static array assert not self._inline_tables # Accumulate index expression 'A_idx' used as 'A[A_idx]' in the loop, e.g. A_idx = i*3 + j A_idx = sum(A_strides[i] * index_symbol for i, index_symbol in enumerate(A_index_symbols)) # Get the index tuple used to index the pre-integrated table P_arg_indices = P_index_symbols # Transpose the PI indices, if block is transposed if blockdata.transposed: P_arg_indices = tuple(reversed(P_arg_indices)) # Initialize the central assignment statement P_ii = P_entity_indices + P_arg_indices A_rhs = f * PI[P_ii] loop_code = L.AssignAdd(A[A_idx], A_rhs) # Construct nested loops, starting with inner-most dimension for sub_blockmap, A_arg_index, P_arg_index in zip(reversed(blockmap), reversed(A_index_symbols), reversed(P_index_symbols)): # Append increment of PI index to inner code loop_code = [loop_code, L.PreIncrement(P_arg_index)] # Check whether the current bock map is a contiguous range if (sub_blockmap[-1] - sub_blockmap[0] + 1) == len(sub_blockmap): # We have contiguous indices in the blockmap, a single loop is enough loop_code = [L.ForRange(A_arg_index, sub_blockmap[0], sub_blockmap[-1] + 1, loop_code)] else: # Split blockmap into contiguous sub-ranges -> multiple loops loop_block = [] # The groupby statement yields these contiguous sub-ranges for k, group in itertools.groupby(enumerate(sub_blockmap), key=lambda x: x[0] - x[1]): contiguous_range = list(group) loop_interval = contiguous_range[0][1], contiguous_range[-1][1] + 1 # Check whether we really need a loop if loop_interval[1] - loop_interval[0] > 1: loop_block += [L.ForRange(A_arg_index, loop_interval[0], loop_interval[1], loop_code)] else: # TODO: Remove scope by explicitly replacing Symbol A_arg_index with Literal loop_interval[0] in all subexpression loop_block += [L.Scope([ L.VariableDecl("int", A_arg_index, loop_interval[0]), loop_code] )] loop_code = loop_block # Reset the currently inner-most PI table index 
loop_code += [L.Assign(P_arg_index, 0)] # Define the P index variables before the loop code loop_code = [L.VariableDecl("int", P_arg_index, 0) for P_arg_index in P_arg_indices] + loop_code # Add a comment and scope to keep index variables enclosed parts += [L.Scope(loop_code)] if self._unroll_tables: parts = self.generate_tensor_value_initialization(A_values) else: # If we have assignment loops, zero all entries of A first parts = [L.MemZero(A, A_size)] + parts return parts def generate_tensor_value_initialization(self, A_values): parts = [] L = self.backend.language A = self.backend.symbols.element_tensor() A_size = len(A_values) init_mode = self.ir["params"]["tensor_init_mode"] z = L.LiteralFloat(0.0) if init_mode == "direct": # Generate A[i] = A_values[i] including zeros for i in range(A_size): parts += [L.Assign(A[i], A_values[i])] elif init_mode == "upfront": # Zero everything first parts += [L.MemZero(A, A_size)] # Generate A[i] = A_values[i] skipping zeros for i in range(A_size): if not (A_values[i] == 0.0 or A_values[i] == z): parts += [L.Assign(A[i], A_values[i])] elif init_mode == "interleaved": # Generate A[i] = A_values[i] with interleaved zero filling i = 0 zero_begin = 0 zero_end = zero_begin while i < A_size: if A_values[i] == 0.0 or A_values[i] == z: # Update range of A zeros zero_end = i + 1 else: # Set zeros of A just prior to A[i] if zero_end == zero_begin + 1: parts += [L.Assign(A[zero_begin], 0.0)] elif zero_end > zero_begin: parts += [L.MemZeroRange(A, zero_begin, zero_end)] zero_begin = i + 1 zero_end = zero_begin # Set A[i] value parts += [L.Assign(A[i], A_values[i])] i += 1 if zero_end == zero_begin + 1: parts += [L.Assign(A[zero_begin], 0.0)] elif zero_end > zero_begin: parts += [L.MemZeroRange(A, zero_begin, zero_end)] else: error("Invalid init_mode parameter %s" % (init_mode,)) return parts def generate_expr_copyout_statements(self): L = self.backend.language parts = [] # Not expecting any quadrature loop scopes here assert tuple(self.scopes.keys()) == (None,) # TODO: Get symbol from backend values = L.Symbol("values") # TODO: Allow expression compilation to compute multiple points at once! # Similarities to custom integrals in that points are given, # while different in output format: results are not accumulated # for each point but stored in output array instead. 
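        # The loop below simply writes each target scalar to the output array,
        # e.g. (illustrative) values[0] = sp[12]; values[1] = sp[13];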
# Assign computed results to output variables pir = self.ir["piecewise_ir"] V = pir["V"] V_targets = pir["V_targets"] for i, fi in enumerate(V_targets): parts.append(L.Assign(values[i], self.get_var(None, V[fi]))) return parts def generate_tensor_copyout_statements(self): L = self.backend.language parts = [] # Get symbol, dimensions, and loop index symbols for A A_shape = self.ir["tensor_shape"] A_size = product(A_shape) A_rank = len(A_shape) A_strides = [1]*A_rank # TODO: there's something like shape2strides(A_shape) somewhere for i in reversed(range(0, A_rank-1)): A_strides[i] = A_strides[i+1] * A_shape[i+1] Asym = self.backend.symbols.element_tensor() A = L.FlattenedArray(Asym, dims=A_shape) indices = [self.backend.symbols.argument_loop_index(i) for i in range(A_rank)] dofmap_parts = [] dofmaps = {} for blockmap, contributions in sorted(self.finalization_blocks.items()): # Define mapping from B indices to A indices A_indices = [] for i in range(A_rank): dofmap = blockmap[i] begin = dofmap[0] end = dofmap[-1] + 1 if len(dofmap) == end - begin: # Dense insertion, offset B index to index A j = indices[i] + begin else: # Sparse insertion, map B index through dofmap DM = dofmaps.get(dofmap) if DM is None: DM = L.Symbol("DM%d" % len(dofmaps)) dofmaps[dofmap] = DM dofmap_parts.append(L.ArrayDecl("static const int", DM, len(dofmap), dofmap)) j = DM[indices[i]] A_indices.append(j) A_indices = tuple(A_indices) # Sum up all blocks contributing to this blockmap term = L.Sum([B_rhs for B_rhs in contributions]) # TODO: need ttypes associated with this block to deal # with loop dropping for quadrature elements: ttypes = () if ttypes == ("quadrature", "quadrature"): debug("quadrature element block insertion not optimized") # Add components of all B's to A component in loop nest body = L.AssignAdd(A[A_indices], term) for i in reversed(range(A_rank)): body = L.ForRange(indices[i], 0, len(blockmap[i]), body=body) # Add this block to parts parts.append(body) # Place static dofmap tables first parts = dofmap_parts + parts return parts def generate_copyout_statements(self): """Generate statements copying results to output array.""" if self.ir["integral_type"] == "expression": return self.generate_expr_copyout_statements() #elif self.ir["unroll_copyout_loops"]: # return self.generate_unrolled_tensor_copyout_statements() else: return self.generate_tensor_copyout_statements() ffc-2019.1.0.post0/ffc/uflacs/language/000077500000000000000000000000001346026536000173515ustar00rootroot00000000000000ffc-2019.1.0.post0/ffc/uflacs/language/__init__.py000066400000000000000000000013451346026536000214650ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . 
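# A minimal usage sketch of this subpackage (the symbol names below are
# purely illustrative): build a small AST with the cnodes module and let
# str() render it through the indentation-aware formatting tools, e.g.
#
#   from ffc.uflacs.language import cnodes as L
#   A, i = L.Symbol("A"), L.Symbol("i")
#   loop = L.ForRange(i, 0, 4, L.Assign(A[i], L.LiteralFloat(0.0)))
#   print(str(loop))
#
# which prints the C++ snippet
#
#   for (int i = 0; i < 4; ++i)
#       A[i] = 0.0;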
ffc-2019.1.0.post0/ffc/uflacs/language/cnodes.py000066400000000000000000001506631346026536000212110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . import numpy import numbers from ffc.uflacs.language.format_value import format_value, format_float, format_int from ffc.uflacs.language.format_lines import format_indented_lines, Indented from ffc.uflacs.language.precedence import PRECEDENCE """CNode TODO: - Array copy statement - Extend ArrayDecl and ArrayAccess with support for flattened but conceptually multidimensional arrays, maybe even with padding (FlattenedArray possibly covers what we need) - Function declaration - TypeDef - Type - TemplateArgumentList - Class declaration - Class definition """ ############## Some helper functions def assign_loop(dst, src, ranges): """Generate a nested loop over a list of ranges, assigning src to dst in the innermost loop. Ranges is a list on the format [(index, begin, end),...]. """ code = Assign(dst, src) for i, b, e in reversed(ranges): code = ForRange(i, b, e, code) return code def accumulate_loop(dst, src, ranges): """Generate a nested loop over a list of ranges, adding src to dst in the innermost loop. Ranges is a list on the format [(index, begin, end),...]. """ code = AssignAdd(dst, src) for i, b, e in reversed(ranges): code = ForRange(i, b, e, code) return code def scale_loop(dst, factor, ranges): """Generate a nested loop over a list of ranges, multiplying dst with factor in the innermost loop. Ranges is a list on the format [(index, begin, end),...]. """ code = AssignMul(dst, factor) for i, b, e in reversed(ranges): code = ForRange(i, b, e, code) return code def is_zero_cexpr(cexpr): return ( (isinstance(cexpr, LiteralFloat) and cexpr.value == 0.0) or (isinstance(cexpr, LiteralInt) and cexpr.value == 0) ) def is_one_cexpr(cexpr): return ( (isinstance(cexpr, LiteralFloat) and cexpr.value == 1.0) or (isinstance(cexpr, LiteralInt) and cexpr.value == 1) ) def is_negative_one_cexpr(cexpr): return ( (isinstance(cexpr, LiteralFloat) and cexpr.value == -1.0) or (isinstance(cexpr, LiteralInt) and cexpr.value == -1) ) def float_product(factors): "Build product of float factors, simplifying ones and zeros and returning 1.0 if empty sequence." 
factors = [f for f in factors if not is_one_cexpr(f)] if len(factors) == 0: return LiteralFloat(1.0) elif len(factors) == 1: return factors[0] else: for f in factors: if is_zero_cexpr(f): return f return Product(factors) def MemZeroRange(name, begin, end): name = as_cexpr_or_string_symbol(name) return Call("std::fill", (name + begin, name + end, LiteralFloat(0.0))) #return Call("std::fill", (AddressOf(name[begin]), AddressOf(name[end]), LiteralFloat(0.0))) def MemZero(name, size): name = as_cexpr_or_string_symbol(name) size = as_cexpr(size) return Call("std::fill_n", (name, size, LiteralFloat(0.0))) def MemCopy(src, dst, size): src = as_cexpr_or_string_symbol(src) dst = as_cexpr_or_string_symbol(dst) size = as_cexpr(size) return Call("std::copy_n", (src, size, dst)) ############## CNode core class CNode(object): "Base class for all C AST nodes." __slots__ = () def __str__(self): name = self.__class__.__name__ raise NotImplementedError("Missing implementation of __str__ in " + name) def __eq__(self, other): name = self.__class__.__name__ raise NotImplementedError("Missing implementation of __eq__ in " + name) def __ne__(self, other): return not self.__eq__(other) CNode.debug = False ############## CExpr base classes class CExpr(CNode): """Base class for all C expressions. All subtypes should define a 'precedence' class attribute. """ __slots__ = () def ce_format(self, precision=None): raise NotImplementedError("Missing implementation of ce_format() in CExpr.") def __str__(self): try: s = self.ce_format() except Exception: if CNode.debug: print("Error in CExpr string formatting. Inspect self.") import IPython; IPython.embed() raise return s def __getitem__(self, indices): return ArrayAccess(self, indices) def __neg__(self): if isinstance(self, LiteralFloat): return LiteralFloat(-self.value) if isinstance(self, LiteralInt): return LiteralInt(-self.value) return Neg(self) def __add__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return other if is_zero_cexpr(other): return self if isinstance(other, Neg): return Sub(self, other.arg) return Add(self, other) def __radd__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return other if is_zero_cexpr(other): return self if isinstance(self, Neg): return Sub(other, self.arg) return Add(other, self) def __sub__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return -other if is_zero_cexpr(other): return self if isinstance(other, Neg): return Add(self, other.arg) return Sub(self, other) def __rsub__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return other if is_zero_cexpr(other): return -self if isinstance(self, Neg): return Add(other, self.arg) return Sub(other, self) def __mul__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return self if is_zero_cexpr(other): return other if is_one_cexpr(self): return other if is_one_cexpr(other): return self if is_negative_one_cexpr(other): return Neg(self) if is_negative_one_cexpr(self): return Neg(other) return Mul(self, other) def __rmul__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): return self if is_zero_cexpr(other): return other if is_one_cexpr(self): return other if is_one_cexpr(other): return self if is_negative_one_cexpr(other): return Neg(self) if is_negative_one_cexpr(self): return Neg(other) return Mul(other, self) def __div__(self, other): other = as_cexpr(other) if is_zero_cexpr(other): raise ValueError("Division by zero!") if is_zero_cexpr(self): return self return Div(self, other) def __rdiv__(self, 
other): other = as_cexpr(other) if is_zero_cexpr(self): raise ValueError("Division by zero!") if is_zero_cexpr(other): return other return Div(other, self) # TODO: Error check types? Can't do that exactly as symbols here have no type. __truediv__ = __div__ __rtruediv__ = __rdiv__ __floordiv__ = __div__ __rfloordiv__ = __rdiv__ def __mod__(self, other): other = as_cexpr(other) if is_zero_cexpr(other): raise ValueError("Division by zero!") if is_zero_cexpr(self): return self return Mod(self, other) def __rmod__(self, other): other = as_cexpr(other) if is_zero_cexpr(self): raise ValueError("Division by zero!") if is_zero_cexpr(other): return other return Mod(other, self) class CExprOperator(CExpr): """Base class for all C expression operator.""" __slots__ = () sideeffect = False class CExprTerminal(CExpr): """Base class for all C expression terminals.""" __slots__ = () sideeffect = False ############## CExprTerminal types class CExprLiteral(CExprTerminal): "A float or int literal value." __slots__ = () precedence = PRECEDENCE.LITERAL class Null(CExprLiteral): "A null pointer literal." __slots__ = () precedence = PRECEDENCE.LITERAL def ce_format(self, precision=None): # C or old C++ version #return "NULL" # C++11 version return "nullptr" def __eq__(self, other): return isinstance(other, Null) class LiteralFloat(CExprLiteral): "A floating point literal value." __slots__ = ("value",) precedence = PRECEDENCE.LITERAL def __init__(self, value): assert isinstance(value, (float, int, numpy.number)) self.value = value def ce_format(self, precision=None): return format_float(self.value, precision) def __eq__(self, other): return isinstance(other, LiteralFloat) and self.value == other.value def __bool__(self): return bool(self.value) __nonzero__ = __bool__ def __float__(self): return float(self.value) class LiteralInt(CExprLiteral): "An integer literal value." __slots__ = ("value",) precedence = PRECEDENCE.LITERAL def __init__(self, value): assert isinstance(value, (int, numpy.number)) self.value = value def ce_format(self, precision=None): return str(self.value) def __eq__(self, other): return isinstance(other, LiteralInt) and self.value == other.value def __bool__(self): return bool(self.value) __nonzero__ = __bool__ def __int__(self): return int(self.value) def __float__(self): return float(self.value) class LiteralBool(CExprLiteral): "A boolean literal value." __slots__ = ("value",) precedence = PRECEDENCE.LITERAL def __init__(self, value): assert isinstance(value, (bool,)) self.value = value def ce_format(self, precision=None): return "true" if self.value else "false" def __eq__(self, other): return isinstance(other, LiteralBool) and self.value == other.value def __bool__(self): return bool(self.value) __nonzero__ = __bool__ class LiteralString(CExprLiteral): "A boolean literal value." __slots__ = ("value",) precedence = PRECEDENCE.LITERAL def __init__(self, value): assert isinstance(value, (str,)) assert '"' not in value self.value = value def ce_format(self, precision=None): return '"%s"' % (self.value,) def __eq__(self, other): return isinstance(other, LiteralString) and self.value == other.value class Symbol(CExprTerminal): "A named symbol." __slots__ = ("name",) precedence = PRECEDENCE.SYMBOL def __init__(self, name): assert isinstance(name, str) self.name = name def ce_format(self, precision=None): return self.name def __eq__(self, other): return isinstance(other, Symbol) and self.name == other.name class VerbatimExpr(CExprTerminal): """A verbatim copy of an expression source string. 
Handled as having the lowest precedence which will introduce parentheses around it most of the time.""" __slots__ = ("codestring",) precedence = PRECEDENCE.LOWEST def __init__(self, codestring): assert isinstance(codestring, str) self.codestring = codestring def ce_format(self, precision=None): return self.codestring def __eq__(self, other): return isinstance(other, VerbatimExpr) and self.codestring == other.codestring class New(CExpr): __slots__ = ("typename",) def __init__(self, typename): assert isinstance(typename, str) self.typename = typename def ce_format(self, precision=None): return "new %s()" % (self.typename,) def __eq__(self, other): return isinstance(other, New) and self.typename == other.typename ############## CExprOperator base classes class UnaryOp(CExprOperator): "Base class for unary operators." __slots__ = ("arg",) def __init__(self, arg): self.arg = as_cexpr(arg) def __eq__(self, other): return isinstance(other, type(self)) and self.arg == other.arg class PrefixUnaryOp(UnaryOp): "Base class for prefix unary operators." __slots__ = () def ce_format(self, precision=None): arg = self.arg.ce_format(precision) if self.arg.precedence >= self.precedence: arg = '(' + arg + ')' return self.op + arg def __eq__(self, other): return isinstance(other, type(self)) class PostfixUnaryOp(UnaryOp): "Base class for postfix unary operators." __slots__ = () def ce_format(self, precision=None): arg = self.arg.ce_format(precision) if self.arg.precedence >= self.precedence: arg = '(' + arg + ')' return arg + self.op def __eq__(self, other): return isinstance(other, type(self)) class BinOp(CExprOperator): __slots__ = ("lhs", "rhs") def __init__(self, lhs, rhs): self.lhs = as_cexpr(lhs) self.rhs = as_cexpr(rhs) def ce_format(self, precision=None): # Format children lhs = self.lhs.ce_format(precision) rhs = self.rhs.ce_format(precision) # Apply parentheses if self.lhs.precedence > self.precedence: lhs = '(' + lhs + ')' if self.rhs.precedence >= self.precedence: rhs = '(' + rhs + ')' # Return combined string return lhs + (" " + self.op + " ") + rhs def __eq__(self, other): return (isinstance(other, type(self)) and self.lhs == other.lhs and self.rhs == other.rhs) class NaryOp(CExprOperator): "Base class for special n-ary operators." __slots__ = ("args",) def __init__(self, args): self.args = [as_cexpr(arg) for arg in args] def ce_format(self, precision=None): # Format children args = [arg.ce_format(precision) for arg in self.args] # Apply parentheses for i in range(len(args)): if self.args[i].precedence >= self.precedence: args[i] = '(' + args[i] + ')' # Return combined string op = " " + self.op + " " s = args[0] for i in range(1, len(args)): s += op + args[i] return s def __eq__(self, other): return (isinstance(other, type(self)) and len(self.args) == len(other.args) and all(a == b for a, b in zip(self.args, other.args))) ############## CExpr unary operators class Dereference(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.DEREFERENCE op = "*" class AddressOf(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.ADDRESSOF op = "&" class SizeOf(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.SIZEOF op = "sizeof" class Neg(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.NEG op = "-" class Pos(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.POS op = "+" class Not(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.NOT op = "!" 
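# A brief sketch of how the CExpr operator overloads above interact with the
# precedence table when formatting (the symbol names are illustrative only):
# Python arithmetic on CExpr nodes folds literal zeros and ones, and
# ce_format() inserts parentheses based on PRECEDENCE, e.g.
#
#   x, y = Symbol("x"), Symbol("y")
#   e = (x + LiteralFloat(0.0)) * (y + LiteralFloat(1.0))
#   str(e)  # -> "x * (y + 1.0)"
#
# x + 0.0 simplifies to just x, while y + 1.0 is kept as an Add node and
# parenthesized because Add binds more loosely than Mul.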
class BitNot(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.BIT_NOT op = "~" class PreIncrement(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.PRE_INC sideeffect = True op = "++" class PreDecrement(PrefixUnaryOp): __slots__ = () precedence = PRECEDENCE.PRE_DEC sideeffect = True op = "--" class PostIncrement(PostfixUnaryOp): __slots__ = () precedence = PRECEDENCE.POST_INC sideeffect = True op = "++" class PostDecrement(PostfixUnaryOp): __slots__ = () precedence = PRECEDENCE.POST_DEC sideeffect = True op = "--" ############## CExpr binary operators class Add(BinOp): __slots__ = () precedence = PRECEDENCE.ADD op = "+" class Sub(BinOp): __slots__ = () precedence = PRECEDENCE.SUB op = "-" class Mul(BinOp): __slots__ = () precedence = PRECEDENCE.MUL op = "*" class Div(BinOp): __slots__ = () precedence = PRECEDENCE.DIV op = "/" class Mod(BinOp): __slots__ = () precedence = PRECEDENCE.MOD op = "%" class EQ(BinOp): __slots__ = () precedence = PRECEDENCE.EQ op = "==" class NE(BinOp): __slots__ = () precedence = PRECEDENCE.NE op = "!=" class LT(BinOp): __slots__ = () precedence = PRECEDENCE.LT op = "<" class GT(BinOp): __slots__ = () precedence = PRECEDENCE.GT op = ">" class LE(BinOp): __slots__ = () precedence = PRECEDENCE.LE op = "<=" class GE(BinOp): __slots__ = () precedence = PRECEDENCE.GE op = ">=" class And(BinOp): __slots__ = () precedence = PRECEDENCE.AND op = "&&" class Or(BinOp): __slots__ = () precedence = PRECEDENCE.OR op = "||" class BitAnd(BinOp): __slots__ = () precedence = PRECEDENCE.BIT_AND op = "&" class BitXor(BinOp): __slots__ = () precedence = PRECEDENCE.BIT_XOR op = "^" class BitOr(BinOp): __slots__ = () precedence = PRECEDENCE.BIT_OR op = "|" class Sum(NaryOp): "Sum of any number of operands." __slots__ = () precedence = PRECEDENCE.ADD op = "+" class Product(NaryOp): "Product of any number of operands." __slots__ = () precedence = PRECEDENCE.MUL op = "*" class AssignOp(BinOp): "Base class for assignment operators." __slots__ = () precedence = PRECEDENCE.ASSIGN sideeffect = True def __init__(self, lhs, rhs): BinOp.__init__(self, as_cexpr_or_string_symbol(lhs), rhs) class Assign(AssignOp): __slots__ = () op = "=" class AssignAdd(AssignOp): __slots__ = () op = "+=" class AssignSub(AssignOp): __slots__ = () op = "-=" class AssignMul(AssignOp): __slots__ = () op = "*=" class AssignDiv(AssignOp): __slots__ = () op = "/=" class AssignMod(AssignOp): __slots__ = () op = "%=" class AssignLShift(AssignOp): __slots__ = () op = "<<=" class AssignRShift(AssignOp): __slots__ = () op = ">>=" class AssignAnd(AssignOp): __slots__ = () op = "&&=" class AssignOr(AssignOp): __slots__ = () op = "||=" class AssignBitAnd(AssignOp): __slots__ = () op = "&=" class AssignBitXor(AssignOp): __slots__ = () op = "^=" class AssignBitOr(AssignOp): __slots__ = () op = "|=" ############## CExpr operators class FlattenedArray(object): """Syntax carrying object only, will get translated on __getitem__ to ArrayAccess.""" __slots__ = ("array", "strides", "offset", "dims") def __init__(self, array, dummy=None, dims=None, strides=None, offset=None): assert dummy is None, "Please use keyword arguments for strides or dims." # Typecheck array argument if isinstance(array, ArrayDecl): self.array = array.symbol elif isinstance(array, Symbol): self.array = array else: assert isinstance(array, str) self.array = Symbol(array) # Allow expressions or literals as strides or dims and offset if strides is None: assert dims is not None, "Please provide either strides or dims." 
assert isinstance(dims, (list, tuple)) dims = tuple(as_cexpr(i) for i in dims) self.dims = dims n = len(dims) literal_one = LiteralInt(1) strides = [literal_one]*n for i in range(n-2, -1, -1): s = strides[i+1] d = dims[i+1] if d == literal_one: strides[i] = s elif s == literal_one: strides[i] = d else: strides[i] = d * s else: self.dims = None assert isinstance(strides, (list, tuple)) strides = tuple(as_cexpr(i) for i in strides) self.strides = strides self.offset = None if offset is None else as_cexpr(offset) def __getitem__(self, indices): if not isinstance(indices, (list,tuple)): indices = (indices,) n = len(indices) if n == 0: # Handle scalar case, allowing dims=() and indices=() for A[0] if len(self.strides) != 0: raise ValueError("Empty indices for nonscalar array.") flat = LiteralInt(0) else: i, s = (indices[0], self.strides[0]) literal_one = LiteralInt(1) flat = (i if s == literal_one else s * i) if self.offset is not None: flat = self.offset + flat for i, s in zip(indices[1:n], self.strides[1:n]): flat = flat + (i if s == literal_one else s * i) # Delay applying ArrayAccess until we have all indices if n == len(self.strides): return ArrayAccess(self.array, flat) else: return FlattenedArray(self.array, strides=self.strides[n:], offset=flat) class ArrayAccess(CExprOperator): __slots__ = ("array", "indices") precedence = PRECEDENCE.SUBSCRIPT def __init__(self, array, indices): # Typecheck array argument if isinstance(array, str): array = Symbol(array) if isinstance(array, Symbol): self.array = array elif isinstance(array, ArrayDecl): self.array = array.symbol else: raise ValueError("Unexpected array type %s." % (type(array).__name__,)) # Allow expressions or literals as indices if not isinstance(indices, (list, tuple)): indices = (indices,) self.indices = tuple(as_cexpr_or_string_symbol(i) for i in indices) # Early error checking for negative array dimensions if any(isinstance(i, int) and i < 0 for i in self.indices): raise ValueError("Index value < 0.") # Additional dimension checks possible if we get an ArrayDecl instead of just a name if isinstance(array, ArrayDecl): if len(self.indices) != len(array.sizes): raise ValueError("Invalid number of indices.") ints = (int, LiteralInt) if any((isinstance(i, ints) and isinstance(d, ints) and int(i) >= int(d)) for i, d in zip(self.indices, array.sizes)): raise ValueError("Index value >= array dimension.") def __getitem__(self, indices): "Handling nested expr[i][j]." 
if isinstance(indices, list): indices = tuple(indices) elif not isinstance(indices, tuple): indices = (indices,) return ArrayAccess(self.array, self.indices + indices) def ce_format(self, precision=None): s = self.array.ce_format(precision) for index in self.indices: s += "[" + index.ce_format(precision) + "]" return s def __eq__(self, other): return (isinstance(other, type(self)) and self.array == other.array and self.indices == other.indices) class Conditional(CExprOperator): __slots__ = ("condition", "true", "false") precedence = PRECEDENCE.CONDITIONAL def __init__(self, condition, true, false): self.condition = as_cexpr(condition) self.true = as_cexpr(true) self.false = as_cexpr(false) def ce_format(self, precision=None): # Format children c = self.condition.ce_format(precision) t = self.true.ce_format(precision) f = self.false.ce_format(precision) # Apply parentheses if self.condition.precedence >= self.precedence: c = '(' + c + ')' if self.true.precedence >= self.precedence: t = '(' + t + ')' if self.false.precedence >= self.precedence: f = '(' + f + ')' # Return combined string return c + " ? " + t + " : " + f def __eq__(self, other): return (isinstance(other, type(self)) and self.condition == other.condition and self.true == other.true and self.false == other.false) class Call(CExprOperator): __slots__ = ("function", "arguments") precedence = PRECEDENCE.CALL sideeffect = True def __init__(self, function, arguments=None): self.function = as_cexpr_or_string_symbol(function) # Accept None, single, or multple arguments; literals or CExprs if arguments is None: arguments = () elif not isinstance(arguments, (tuple, list)): arguments = (arguments,) self.arguments = [as_cexpr(arg) for arg in arguments] def ce_format(self, precision=None): args = ", ".join(arg.ce_format(precision) for arg in self.arguments) return self.function.ce_format(precision) + "(" + args + ")" def __eq__(self, other): return (isinstance(other, type(self)) and self.function == other.function and self.arguments == other.arguments) def Sqrt(x): return Call("std::sqrt", x) ############## Convertion function to expression nodes def _is_zero_valued(values): if isinstance(values, (numbers.Integral, LiteralInt)): return int(values) == 0 elif isinstance(values, (numbers.Number, LiteralFloat)): return float(values) == 0.0 else: return numpy.count_nonzero(values) == 0 def as_cexpr(node): """Typechecks and wraps an object as a valid CExpr. Accepts CExpr nodes, treats int and float as literals, and treats a string as a symbol. """ if isinstance(node, CExpr): return node elif isinstance(node, bool): return LiteralBool(node) elif isinstance(node, numbers.Integral): return LiteralInt(node) elif isinstance(node, numbers.Real): return LiteralFloat(node) elif isinstance(node, str): raise RuntimeError("Got string for CExpr, this is ambiguous: %s" % (node,)) else: raise RuntimeError("Unexpected CExpr type %s:\n%s" % (type(node), str(node))) def as_cexpr_or_string_symbol(node): if isinstance(node, str): return Symbol(node) return as_cexpr(node) def as_cexpr_or_verbatim(node): if isinstance(node, str): return VerbatimExpr(node) return as_cexpr(node) def as_cexpr_or_literal(node): if isinstance(node, str): return LiteralString(node) return as_cexpr(node) def as_symbol(symbol): if isinstance(symbol, str): symbol = Symbol(symbol) assert isinstance(symbol, Symbol) return symbol def flattened_indices(indices, shape): """Given a tuple of indices and a shape tuple, return CNode expression for flattened indexing into multidimensional array. 
Indices and shape entries can be int values, str symbol names, or CNode expressions. """ n = len(shape) if n == 0: # Scalar return as_cexpr(0) elif n == 1: # Simple vector return as_cexpr(indices[0]) else: # 2d or higher strides = [None]*(n-2) + [shape[-1], 1] for i in range(n-3, -1, -1): strides[i] = Mul(shape[i+1], strides[i+1]) result = indices[-1] for i in range(n-2, -1, -1): result = Add(Mul(strides[i], indices[i]), result) return result ############## Base class for all statements class CStatement(CNode): """Base class for all C statements. Subtypes do _not_ define a 'precedence' class attribute. """ __slots__ = () # True if statement contains its own scope, false by default to be on the safe side is_scoped = False def cs_format(self, precision=None): "Return S: string | list(S) | Indented(S)." raise NotImplementedError("Missing implementation of cs_format() in CStatement.") def __str__(self): try: s = self.cs_format() except Exception: if CNode.debug: print("Error in CStatement string formatting. Inspect self.") import IPython; IPython.embed() raise return format_indented_lines(s) ############## Statements class VerbatimStatement(CStatement): "Wraps a source code string to be pasted verbatim into the source code." __slots__ = ("codestring",) is_scoped = False def __init__(self, codestring): assert isinstance(codestring, str) self.codestring = codestring def cs_format(self, precision=None): return self.codestring def __eq__(self, other): return (isinstance(other, type(self)) and self.codestring == other.codestring) class Statement(CStatement): "Make an expression into a statement." __slots__ = ("expr",) is_scoped = False def __init__(self, expr): self.expr = as_cexpr(expr) def cs_format(self, precision=None): return self.expr.ce_format(precision) + ";" def __eq__(self, other): return (isinstance(other, type(self)) and self.expr == other.expr) class StatementList(CStatement): "A simple sequence of statements. No new scopes are introduced." 
__slots__ = ("statements",) def __init__(self, statements): self.statements = [as_cstatement(st) for st in statements] @property def is_scoped(self): return all(st.is_scoped for st in self.statements) def cs_format(self, precision=None): return [st.cs_format(precision) for st in self.statements] def __eq__(self, other): return (isinstance(other, type(self)) and self.statements == other.statements) ############## Simple statements class Using(CStatement): __slots__ = ("name",) is_scoped = True def __init__(self, name): assert isinstance(name, str) self.name = name def cs_format(self, precision=None): return "using " + self.name + ";" def __eq__(self, other): return (isinstance(other, type(self)) and self.name == other.name) class Break(CStatement): __slots__ = () is_scoped = True def cs_format(self, precision=None): return "break;" def __eq__(self, other): return isinstance(other, type(self)) class Continue(CStatement): __slots__ = () is_scoped = True def cs_format(self, precision=None): return "continue;" def __eq__(self, other): return isinstance(other, type(self)) class Return(CStatement): __slots__ = ("value",) is_scoped = True def __init__(self, value=None): if value is None: self.value = None else: self.value = as_cexpr(value) def cs_format(self, precision=None): if self.value is None: return "return;" else: return "return %s;" % (self.value.ce_format(precision),) def __eq__(self, other): return (isinstance(other, type(self)) and self.value == other.value) class Case(CStatement): __slots__ = ("value",) is_scoped = False def __init__(self, value): # NB! This is too permissive and will allow invalid case arguments. self.value = as_cexpr(value) def cs_format(self, precision=None): return "case " + self.value.ce_format(precision) + ":" def __eq__(self, other): return (isinstance(other, type(self)) and self.value == other.value) class Default(CStatement): __slots__ = () is_scoped = False def cs_format(self, precision=None): return "default:" def __eq__(self, other): return isinstance(other, type(self)) class Throw(CStatement): __slots__ = ("exception", "message") is_scoped = True def __init__(self, exception, message): assert isinstance(exception, str) assert isinstance(message, str) self.exception = exception self.message = message def cs_format(self, precision=None): assert '"' not in self.message return "throw " + self.exception + '("' + self.message + '");' def __eq__(self, other): return (isinstance(other, type(self)) and self.message == other.message and self.exception == other.exception) class Comment(CStatement): "Line comment(s) used for annotating the generated code with human readable remarks." __slots__ = ("comment",) is_scoped = True def __init__(self, comment): assert isinstance(comment, str) self.comment = comment def cs_format(self, precision=None): lines = self.comment.strip().split("\n") return ["// " + line.strip() for line in lines] def __eq__(self, other): return (isinstance(other, type(self)) and self.comment == other.comment) def NoOp(): return Comment("Do nothing") def commented_code_list(code, comments): "Convenience wrapper for adding comment to code list if the list is not empty." if isinstance(code, CNode): code = [code] assert isinstance(code, list) if code: if not isinstance(comments, (list, tuple)): comments = [comments] comments = [Comment(c) for c in comments] code = comments + code return code class Pragma(CStatement): "Pragma comments used for compiler-specific annotations." 
__slots__ = ("comment",) is_scoped = True def __init__(self, comment): assert isinstance(comment, str) self.comment = comment def cs_format(self, precision=None): assert "\n" not in self.comment return "#pragma " + self.comment def __eq__(self, other): return (isinstance(other, type(self)) and self.comment == other.comment) ############## Type and variable declarations class VariableDecl(CStatement): "Declare a variable, optionally define initial value." __slots__ = ("typename", "symbol", "value") is_scoped = False def __init__(self, typename, symbol, value=None): # No type system yet, just using strings assert isinstance(typename, str) self.typename = typename # Allow Symbol or just a string self.symbol = as_symbol(symbol) if value is not None: value = as_cexpr(value) self.value = value def cs_format(self, precision=None): code = self.typename + " " + self.symbol.name if self.value is not None: code += " = " + self.value.ce_format(precision) return code + ";" def __eq__(self, other): return (isinstance(other, type(self)) and self.typename == other.typename and self.symbol == other.symbol and self.value == other.value) def leftover(size, padlen): "Return minimum integer to add to size to make it divisible by padlen." return (padlen - (size % padlen)) % padlen def pad_dim(dim, padlen): "Make dim divisible by padlen." return ((dim + padlen - 1) // padlen) * padlen def pad_innermost_dim(shape, padlen): "Make the last dimension in shape divisible by padlen." if not shape: return () shape = list(shape) if padlen: shape[-1] = pad_dim(shape[-1], padlen) return tuple(shape) def build_1d_initializer_list(values, formatter, padlen=0, precision=None): '''Return a list containing a single line formatted like "{ 0.0, 1.0, 2.0 }"''' if formatter == str: formatter = lambda x, p: str(x) tokens = ["{ "] if numpy.product(values.shape) > 0: sep = ", " fvalues = [formatter(v, precision) for v in values] for v in fvalues[:-1]: tokens.append(v) tokens.append(sep) tokens.append(fvalues[-1]) if padlen: # Add padding zero = formatter(values.dtype.type(0), precision) for i in range(leftover(len(values), padlen)): tokens.append(sep) tokens.append(zero) tokens += " }" return "".join(tokens) def build_initializer_lists(values, sizes, level, formatter, padlen=0, precision=None): """Return a list of lines with initializer lists for a multidimensional array. Example output:: { { 0.0, 0.1 }, { 1.0, 1.1 } } """ if formatter == str: formatter = lambda x, p: str(x) values = numpy.asarray(values) assert numpy.product(values.shape) == numpy.product(sizes) assert len(sizes) > 0 assert len(values.shape) > 0 assert len(sizes) == len(values.shape) assert numpy.all(values.shape == sizes) r = len(sizes) assert r > 0 if r == 1: return [build_1d_initializer_list(values, formatter, padlen=padlen, precision=precision)] else: # Render all sublists parts = [] for val in values: sublist = build_initializer_lists(val, sizes[1:], level+1, formatter, padlen=padlen, precision=precision) parts.append(sublist) # Add comma after last line in each part except the last one for part in parts[:-1]: part[-1] += "," # Collect all lines in flat list lines = [] for part in parts: lines.extend(part) # Enclose lines in '{ ' and ' }' and indent lines in between lines[0] = "{ " + lines[0] for i in range(1,len(lines)): lines[i] = " " + lines[i] lines[-1] += " }" return lines class ArrayDecl(CStatement): """A declaration or definition of an array. Note that just setting values=0 is sufficient to initialize the entire array to zero. 
Otherwise use nested lists of lists to represent multidimensional array values to initialize to. """ __slots__ = ("typename", "symbol", "sizes", "alignas", "padlen", "values") is_scoped = False def __init__(self, typename, symbol, sizes=None, values=None, alignas=None, padlen=0): assert isinstance(typename, str) self.typename = typename if isinstance(symbol, FlattenedArray): if sizes is None: assert symbol.dims is not None sizes = symbol.dims elif symbol.dims is not None: assert symbol.dims == sizes self.symbol = symbol.array else: self.symbol = as_symbol(symbol) if isinstance(sizes, int): sizes = (sizes,) self.sizes = tuple(sizes) # NB! No type checking, assuming nested lists of literal values. Not applying as_cexpr. if isinstance(values, (list, tuple)): self.values = numpy.asarray(values) else: self.values = values self.alignas = alignas self.padlen = padlen def __getitem__(self, indices): """Allow using array declaration object as the array when indexed. A = ArrayDecl("int", "A", (2,3)) code = [A, Assign(A[0,0], 1.0)] """ return ArrayAccess(self, indices) def cs_format(self, precision=None): # Pad innermost array dimension sizes = pad_innermost_dim(self.sizes, self.padlen) # Add brackets brackets = ''.join("[%d]" % n for n in sizes) # Join declaration decl = self.typename + " " + self.symbol.name + brackets # NB! C++11 style alignas prefix syntax. # If trying other target languages, must use other syntax. if self.alignas: align = "alignas(%d)" % int(self.alignas) decl = align + " " + decl if self.values is None: # Undefined initial values return decl + ";" elif _is_zero_valued(self.values): # Zero initial values # (NB! C++ style zero initialization, not sure about other target languages) return decl + " = {};" else: # Construct initializer lists for arbitrary multidimensional array values if self.values.dtype.kind == "f": formatter = format_float elif self.values.dtype.kind == "i": formatter = format_int else: formatter = format_value initializer_lists = build_initializer_lists(self.values, self.sizes, 0, formatter, padlen=self.padlen, precision=precision) if len(initializer_lists) == 1: return decl + " = " + initializer_lists[0] + ";" else: initializer_lists[-1] += ";" # Close statement on final line return (decl + " =", Indented(initializer_lists)) def __eq__(self, other): attributes = ("typename", "symbol", "sizes", "alignas", "padlen", "values") return (isinstance(other, type(self)) and all(getattr(self, name) == getattr(self, name) for name in attributes)) ############## Scoped statements class Scope(CStatement): __slots__ = ("body",) is_scoped = True def __init__(self, body): self.body = as_cstatement(body) def cs_format(self, precision=None): return ("{", Indented(self.body.cs_format(precision)), "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.body == other.body) class Namespace(CStatement): __slots__ = ("name", "body") is_scoped = True def __init__(self, name, body): assert isinstance(name, str) self.name = name self.body = as_cstatement(body) def cs_format(self, precision=None): return ("namespace " + self.name, "{", Indented(self.body.cs_format(precision)), "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.name == other.name and self.body == other.body) def _is_scoped_statement(body): return def _is_simple_if_body(body): if isinstance(body, StatementList): if len(body.statements) > 1: return False body, = body.statements return isinstance(body, (Return, AssignOp, Break, Continue)) class If(CStatement): __slots__ = 
("condition", "body") is_scoped = True def __init__(self, condition, body): self.condition = as_cexpr(condition) self.body = as_cstatement(body) def cs_format(self, precision=None): statement = "if (" + self.condition.ce_format(precision) + ")" body_fmt = Indented(self.body.cs_format(precision)) if _is_simple_if_body(self.body): return (statement, body_fmt) else: return (statement, "{", body_fmt, "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.condition == other.condition and self.body == other.body) class ElseIf(CStatement): __slots__ = ("condition", "body") is_scoped = True def __init__(self, condition, body): self.condition = as_cexpr(condition) self.body = as_cstatement(body) def cs_format(self, precision=None): statement = "else if (" + self.condition.ce_format(precision) + ")" body_fmt = Indented(self.body.cs_format(precision)) if _is_simple_if_body(self.body): return (statement, body_fmt) else: return (statement, "{", body_fmt, "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.condition == other.condition and self.body == other.body) class Else(CStatement): __slots__ = ("body",) is_scoped = True def __init__(self, body): self.body = as_cstatement(body) def cs_format(self, precision=None): statement = "else" body_fmt = Indented(self.body.cs_format(precision)) if _is_simple_if_body(self.body): return (statement, body_fmt) else: return (statement, "{", body_fmt, "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.body == other.body) class While(CStatement): __slots__ = ("condition", "body") is_scoped = True def __init__(self, condition, body): self.condition = as_cexpr(condition) self.body = as_cstatement(body) def cs_format(self, precision=None): return ("while (" + self.condition.ce_format(precision) + ")", "{", Indented(self.body.cs_format(precision)), "}") def __eq__(self, other): return (isinstance(other, type(self)) and self.condition == other.condition and self.body == other.body) class Do(CStatement): __slots__ = ("condition", "body") is_scoped = True def __init__(self, condition, body): self.condition = as_cexpr(condition) self.body = as_cstatement(body) def cs_format(self, precision=None): return ("do", "{", Indented(self.body.cs_format(precision)), "} while (" + self.condition.ce_format(precision) + ");") def __eq__(self, other): return (isinstance(other, type(self)) and self.condition == other.condition and self.body == other.body) def as_pragma(pragma): if isinstance(pragma, str): return Pragma(pragma) elif isinstance(pragma, Pragma): return pragma return None def is_simple_inner_loop(code): if isinstance(code, (ForRange, For)) and code.pragma is None and is_simple_inner_loop(code.body): return True if isinstance(code, Statement) and isinstance(code.expr, AssignOp): return True return False class For(CStatement): __slots__ = ("init", "check", "update", "body", "pragma") is_scoped = True def __init__(self, init, check, update, body, pragma=None): self.init = as_cstatement(init) self.check = as_cexpr_or_verbatim(check) self.update = as_cexpr_or_verbatim(update) self.body = as_cstatement(body) self.pragma = as_pragma(pragma) def cs_format(self, precision=None): # The C model here is a bit crude and this causes trouble # in the init statement/expression here: init = self.init.cs_format(precision) assert isinstance(init, str) init = init.rstrip(" ;") check = self.check.ce_format(precision) update = self.update.ce_format(precision) prelude = "for (" + init + "; " + check + "; " + update + ")" body = 
Indented(self.body.cs_format(precision)) # Reduce size of code with lots of simple loops by dropping {} in obviously safe cases if is_simple_inner_loop(self.body): code = (prelude, body) else: code = (prelude, "{", body, "}") # Add pragma prefix if requested if self.pragma is not None: code = (self.pragma.cs_format(),) + code return code def __eq__(self, other): attributes = ("init", "check", "update", "body") return (isinstance(other, type(self)) and all(getattr(self, name) == getattr(self, name) for name in attributes)) class Switch(CStatement): __slots__ = ("arg", "cases", "default", "autobreak", "autoscope") is_scoped = True def __init__(self, arg, cases, default=None, autobreak=True, autoscope=True): self.arg = as_cexpr_or_string_symbol(arg) self.cases = [(as_cexpr(value), as_cstatement(body)) for value, body in cases] if default is not None: default = as_cstatement(default) defcase = [(None, default)] else: defcase = [] self.default = default # If this is a switch where every case returns, scopes or breaks are never needed if all(isinstance(case[1], Return) for case in self.cases + defcase): autobreak = False autoscope = False if all(case[1].is_scoped for case in self.cases + defcase): autoscope = False assert autobreak in (True, False) assert autoscope in (True, False) self.autobreak = autobreak self.autoscope = autoscope def cs_format(self, precision=None): cases = [] for case in self.cases: caseheader = "case " + case[0].ce_format(precision) + ":" casebody = case[1].cs_format(precision) if self.autoscope: casebody = ("{", Indented(casebody), "}") if self.autobreak: casebody = (casebody, "break;") cases.extend([caseheader, Indented(casebody)]) if self.default is not None: caseheader = "default:" casebody = self.default.cs_format(precision) if self.autoscope: casebody = ("{", Indented(casebody), "}") cases.extend([caseheader, Indented(casebody)]) return ("switch (" + self.arg.ce_format(precision) + ")", "{", cases, "}") def __eq__(self, other): attributes = ("arg", "cases", "default", "autobreak", "autoscope") return (isinstance(other, type(self)) and all(getattr(self, name) == getattr(self, name) for name in attributes)) class ForRange(CStatement): "Slightly higher-level for loop assuming incrementing an index over a range." 
__slots__ = ("index", "begin", "end", "body", "pragma", "index_type") is_scoped = True def __init__(self, index, begin, end, body, index_type="int", vectorize=None): self.index = as_cexpr_or_string_symbol(index) self.begin = as_cexpr(begin) self.end = as_cexpr(end) self.body = as_cstatement(body) if vectorize: pragma = Pragma("omp simd") else: pragma = None self.pragma = pragma self.index_type = index_type def cs_format(self, precision=None): indextype = self.index_type index = self.index.ce_format(precision) begin = self.begin.ce_format(precision) end = self.end.ce_format(precision) init = indextype + " " + index + " = " + begin check = index + " < " + end update = "++" + index prelude = "for (" + init + "; " + check + "; " + update + ")" body = Indented(self.body.cs_format(precision)) # Reduce size of code with lots of simple loops by dropping {} in obviously safe cases if is_simple_inner_loop(self.body): code = (prelude, body) else: code = (prelude, "{", body, "}") # Add vectorization hint if requested if self.pragma is not None: code = (self.pragma.cs_format(),) + code return code def __eq__(self, other): attributes = ("index", "begin", "end", "body", "pragma", "index_type") return (isinstance(other, type(self)) and all(getattr(self, name) == getattr(self, name) for name in attributes)) def ForRanges(*ranges, **kwargs): ranges = list(reversed(ranges)) code = kwargs["body"] for r in ranges: kwargs["body"] = code code = ForRange(*r, **kwargs) return code ############## Convertion function to statement nodes def as_cstatement(node): "Perform type checking on node and wrap in a suitable statement type if necessary." if isinstance(node, StatementList) and len(node.statements) == 1: # Cleans up the expression tree a bit return node.statements[0] elif isinstance(node, CStatement): # No-op return node elif isinstance(node, CExprOperator): if node.sideeffect: # Special case for using assignment expressions as statements return Statement(node) else: raise RuntimeError( "Trying to create a statement of CExprOperator type %s:\n%s" % (type(node), str(node))) elif isinstance(node, list): # Convenience case for list of statements if len(node) == 1: # Cleans up the expression tree a bit return as_cstatement(node[0]) else: return StatementList(node) elif isinstance(node, str): # Backdoor for flexibility in code generation to allow verbatim pasted statements return VerbatimStatement(node) else: raise RuntimeError("Unexpected CStatement type %s:\n%s" % (type(node), str(node))) ffc-2019.1.0.post0/ffc/uflacs/language/format_lines.py000066400000000000000000000050471346026536000224130ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Tools for indentation-aware code string stitching. 
When formatting an AST into a string, it's better to collect lists of snippets and then join them than adding the pieces continually, which gives O(n^2) behaviour w.r.t. AST size n. """ class Indented(object): """Class to mark a collection of snippets for indentation. This way nested indentations can be handled by adding the prefix spaces only once to each line instead of splitting and indenting substrings repeatedly. """ # Try to keep memory overhead low: __slots__ = ("body",) def __init__(self, body): # Body can be any valid snippet format self.body = body def iter_indented_lines(snippets, level=0): """Iterate over indented string lines from a snippets data structure. The snippets object can be built recursively using the following types: - str: Split and yield as one line at a time indented to the appropriate level. - Indented: Yield the lines within this object indented by one level. - tuple,list: Yield lines from recursive application of this function to list items. """ tabsize = 4 indentation = ' ' * (tabsize * level) if isinstance(snippets, str): for line in snippets.split("\n"): yield indentation + line elif isinstance(snippets, Indented): for line in iter_indented_lines(snippets.body, level+1): yield line elif isinstance(snippets, (tuple, list)): for part in snippets: for line in iter_indented_lines(part, level): yield line else: raise RuntimeError("Unexpected type %s:\n%s" % (type(snippets), str(snippets))) def format_indented_lines(snippets, level=0): "Format recursive sequences of indented lines as one string." return "\n".join(iter_indented_lines(snippets, level)) ffc-2019.1.0.post0/ffc/uflacs/language/format_value.py000066400000000000000000000040451346026536000224120ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . import numbers import re _subs = ( # Remove 0s after e+ or e- (re.compile(r"e[\+]0*(.)"), r"e\1"), (re.compile(r"e[\-]0*(.)"), r"e-\1"), ) def format_float(x, precision=None): "Format a float value according to given precision." global _subs if precision: s = "{:.{prec}}".format(float(x), prec=precision) else: # Using "{}".format(float(x)) apparently results # in lesser precision in python 2 than python 3 s = repr(float(x)) for r, v in _subs: s = r.sub(v, s) return s def format_int(x, precision=None): return str(x) def format_value(value, precision=None): """Format a literal value as s tring. - float: Formatted according to current precision configuration. - int: Formatted as regular base 10 int literal. - str: Wrapped in "quotes". """ if isinstance(value, numbers.Real): return format_float(float(value), precision=precision) elif isinstance(value, numbers.Integral): return format_int(int(value)) elif isinstance(value, str): # FIXME: Is this ever used? 
assert '"' not in value return '"' + value + '"' elif hasattr(value, "ce_format"): return value.ce_format() else: raise RuntimeError("Unexpected type %s:\n%s" % (type(value), str(value))) ffc-2019.1.0.post0/ffc/uflacs/language/precedence.py000066400000000000000000000025661346026536000220310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . class PRECEDENCE: "An enum-like class for C operator precedence levels." HIGHEST = 0 LITERAL = 0 SYMBOL = 0 #SCOPE = 1 POST_INC = 2 POST_DEC = 2 CALL = 2 SUBSCRIPT = 2 #MEMBER = 2 #PTR_MEMBER = 2 PRE_INC = 3 PRE_DEC = 3 NOT = 3 BIT_NOT = 3 POS = 3 NEG = 3 DEREFERENCE = 3 ADDRESSOF = 3 SIZEOF = 3 MUL = 4 DIV = 4 MOD = 4 ADD = 5 SUB = 5 LT = 6 LE = 6 GT = 6 GE = 6 EQ = 7 NE = 7 BIT_AND = 8 BIT_XOR = 9 BIT_OR = 10 AND = 11 OR = 12 CONDITIONAL = 13 ASSIGN = 13 #COMMA = 14 LOWEST = 15 ffc-2019.1.0.post0/ffc/uflacs/language/ufl_to_cnodes.py000066400000000000000000000145701346026536000225550ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Tools for C/C++ expression formatting.""" from ffc.log import error from ufl.corealg.multifunction import MultiFunction #from ufl.corealg.map_dag import map_expr_dag class UFL2CNodesMixin(object): """Rules collection mixin for a UFL to CNodes translator class.""" def __init__(self, language): self.L = language self.force_floats = False self.enable_strength_reduction = False # === Error handlers for missing formatting rules === def expr(self, o): "Generic fallback with error message for missing rules." 
error("Missing C++ formatting rule for expr type {0}.".format(o._ufl_class_)) # === Formatting rules for scalar literals === def zero(self, o): return self.L.LiteralFloat(0.0) #def complex_value(self, o): # return self.L.ComplexValue(complex(o)) def float_value(self, o): return self.L.LiteralFloat(float(o)) def int_value(self, o): if self.force_floats: return self.float_value(o) return self.L.LiteralInt(int(o)) # === Formatting rules for arithmetic operators === def sum(self, o, a, b): return self.L.Add(a, b) #def sub(self, o, a, b): # Not in UFL # return self.L.Sub(a, b) def product(self, o, a, b): return self.L.Mul(a, b) def division(self, o, a, b): if self.enable_strength_reduction: return self.L.Mul(a, self.L.Div(1.0, b)) else: return self.L.Div(a, b) # === Formatting rules for conditional expressions === def conditional(self, o, c, t, f): return self.L.Conditional(c, t, f) def eq(self, o, a, b): return self.L.EQ(a, b) def ne(self, o, a, b): return self.L.NE(a, b) def le(self, o, a, b): return self.L.LE(a, b) def ge(self, o, a, b): return self.L.GE(a, b) def lt(self, o, a, b): return self.L.LT(a, b) def gt(self, o, a, b): return self.L.GT(a, b) def and_condition(self, o, a, b): return self.L.And(a, b) def or_condition(self, o, a, b): return self.L.Or(a, b) def not_condition(self, o, a): return self.L.Not(a) # === Formatting rules for cmath functions === def math_function(self, o, op): # Fallback for unhandled MathFunction subclass: attempting to just call it. # TODO: Introduce a UserFunction to UFL to keep it separate from MathFunction? return self.L.Call(o._name, op) def sqrt(self, o, op): return self._cmath("sqrt", op) #def cbrt(self, o, op): # Not in UFL # return self._cmath("cbrt", op) # cmath also has log10 etc def ln(self, o, op): return self._cmath("log", op) # cmath also has exp2 etc def exp(self, o, op): return self._cmath("exp", op) def cos(self, o, op): return self._cmath("cos", op) def sin(self, o, op): return self._cmath("sin", op) def tan(self, o, op): return self._cmath("tan", op) def cosh(self, o, op): return self._cmath("cosh", op) def sinh(self, o, op): return self._cmath("sinh", op) def tanh(self, o, op): return self._cmath("tanh", op) def atan_2(self, o, y, x): return self._cmath("atan2", (y, x)) def acos(self, o, op): return self._cmath("acos", op) def asin(self, o, op): return self._cmath("asin", op) def atan(self, o, op): return self._cmath("atan", op) #def acosh(self, o, op): # Not in UFL # return self._cmath("acosh", op) #def asinh(self, o, op): # Not in UFL # return self._cmath("asinh", op) #def atanh(self, o, op): # Not in UFL # return self._cmath("atanh", op) def erf(self, o, op): return self._cmath("erf", op) #def erfc(self, o, op): # Not in UFL # # C++11 stl has this function # return self._cmath("erfc", op) class RulesForC(object): def _cmath(self, name, op): return self.L.Call(name, op) def power(self, o, a, b): return self.L.Call("pow", (a, b)) def abs(self, o, op): return self.L.Call("fabs", op) def min_value(self, o, a, b): return self.L.Call("fmin", (a, b)) def max_value(self, o, a, b): return self.L.Call("fmax", (a, b)) # ignoring bessel functions class RulesForCpp(object): def _cmath(self, name, op): return self.L.Call("std::" + name, op) def power(self, o, a, b): return self.L.Call("std::pow", (a, b)) def abs(self, o, op): return self.L.Call("std::abs", op) def min_value(self, o, a, b): return self.L.Call("std::min", (a, b)) def max_value(self, o, a, b): return self.L.Call("std::max", (a, b)) # === Formatting rules for bessel functions === def 
_bessel(self, o, n, v, name): return self.L.Call("boost::math::" + name, (n, v)) def bessel_i(self, o, n, v): return self._bessel(o, n, v, "cyl_bessel_i") def bessel_j(self, o, n, v): return self._bessel(o, n, v, "cyl_bessel_j") def bessel_k(self, o, n, v): return self._bessel(o, n, v, "cyl_bessel_k") def bessel_y(self, o, n, v): return self._bessel(o, n, v, "cyl_neumann") class UFL2CNodesTranslatorC(MultiFunction, UFL2CNodesMixin, RulesForC): """UFL to CNodes translator class.""" def __init__(self, language): MultiFunction.__init__(self) UFL2CNodesMixin.__init__(self, language) class UFL2CNodesTranslatorCpp(MultiFunction, UFL2CNodesMixin, RulesForCpp): """UFL to CNodes translator class.""" def __init__(self, language): MultiFunction.__init__(self) UFL2CNodesMixin.__init__(self, language) ffc-2019.1.0.post0/ffc/uflacs/params.py000066400000000000000000000015421346026536000174250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2011-2017 Martin Sandve Alnæs # # This file is part of UFLACS. # # UFLACS is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # UFLACS is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with UFLACS. If not, see . """Collection of exposed parameters available to tune form compiler algorithms.""" def default_parameters(): return {} ffc-2019.1.0.post0/ffc/uflacs/tools.py000066400000000000000000000077741346026536000173170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2009-2017 Kristian B. Oelgaard and Martin Sandve Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import numbers import collections import numpy from ufl.sorting import sorted_expr_sum from ufl import custom_integral_types from ufl.classes import Integral from ffc.representationutils import create_quadrature_points_and_weights def collect_quadrature_rules(integrals, default_scheme, default_degree): "Collect quadrature rules found in list of integrals." rules = set() for integral in integrals: md = integral.metadata() or {} scheme = md.get("quadrature_rule", default_scheme) degree = md.get("quadrature_degree", default_degree) rule = (scheme, degree) rules.add(rule) return rules def compute_quadrature_rules(rules, integral_type, cell): "Compute points and weights for a set of quadrature rules." 
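As a rough sketch of how this function and collect_quadrature_rules above fit together for an ordinary cell integral (here integrals and cell are placeholders for itg_data.integrals and the UFL cell of the integration domain; the actual number of points depends on the scheme FIAT selects):

    rules = collect_quadrature_rules(integrals, "default", 2)    # e.g. {("default", 2)}
    qr, sizes = compute_quadrature_rules(rules, "cell", cell)
    # qr    : {num_points: (points, weights)}   -- keyed by number of points
    # sizes : {("default", 2): num_points}      -- same sizes, keyed by rule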
quadrature_rules = {} quadrature_rule_sizes = {} for rule in rules: scheme, degree = rule # Compute quadrature points and weights (points, weights) = create_quadrature_points_and_weights( integral_type, cell, degree, scheme) if points is not None: points = numpy.asarray(points) if weights is None: # For custom integrals, there are no points num_points = None else: num_points = len(weights) # Assuming all rules with the same number of points are equal if num_points in quadrature_rules: assert quadrature_rules[num_points][0] == points assert quadrature_rules[num_points][1] == weights error("This number of points is already present in the weight table:\n %s" % (quadrature_rules,)) quadrature_rules[num_points] = (points, weights) quadrature_rule_sizes[rule] = num_points return quadrature_rules, quadrature_rule_sizes def accumulate_integrals(itg_data, quadrature_rule_sizes): """Group and accumulate integrals according to the number of quadrature points in their rules. """ if not itg_data.integrals: return {} # Group integrands by quadrature rule sorted_integrands = collections.defaultdict(list) if itg_data.integral_type in custom_integral_types: # Should only be one size here, ignoring irrelevant metadata and parameters num_points, = quadrature_rule_sizes.values() for integral in itg_data.integrals: sorted_integrands[num_points].append(integral.integrand()) else: default_scheme = itg_data.metadata["quadrature_rule"] default_degree = itg_data.metadata["quadrature_degree"] for integral in itg_data.integrals: md = integral.metadata() or {} scheme = md.get("quadrature_rule", default_scheme) degree = md.get("quadrature_degree", default_degree) rule = (scheme, degree) num_points = quadrature_rule_sizes[rule] sorted_integrands[num_points].append(integral.integrand()) # Accumulate integrands in a canonical ordering defined by UFL sorted_integrals = { num_points: Integral( sorted_expr_sum(integrands), itg_data.integral_type, itg_data.domain, itg_data.subdomain_id, {}, None) for num_points, integrands in list(sorted_integrands.items()) } return sorted_integrals ffc-2019.1.0.post0/ffc/uflacs/uflacsgenerator.py000066400000000000000000000044361346026536000213310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013-2017 Martin Sandve Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see <http://www.gnu.org/licenses/>. """Controlling algorithm for building the tabulate_tensor source structure from factorized representation.""" from ffc.log import info from ffc.representationutils import initialize_integral_code from ffc.uflacs.backends.ffc.backend import FFCBackend from ffc.uflacs.integralgenerator import IntegralGenerator from ffc.uflacs.language.format_lines import format_indented_lines def generate_integral_code(ir, prefix, parameters): "Generate code for integral from intermediate representation." info("Generating code from ffc.uflacs representation") # FIXME: Is this the right precision value to use? Make it default to None or 0.
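For orientation, what this function ultimately hands back is a plain dict of code snippets that FFC's ufc formatting stage pastes into the generated integral class; schematically (string contents abbreviated and purely illustrative, only the dict keys are taken from the assignments below):

    code = initialize_integral_code(ir, prefix, parameters)                  # generic ufc snippets
    code["tabulate_tensor"] = "...C++ loop nest built by IntegralGenerator..."
    code["additional_includes_set"] = {"#include <cmath>"}                   # e.g. when std::pow / std::sin appear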
precision = ir["integrals_metadata"]["precision"] # Create FFC C++ backend backend = FFCBackend(ir, parameters) # Configure kernel generator ig = IntegralGenerator(ir, backend, precision) # Generate code ast for the tabulate_tensor body parts = ig.generate() # Format code as string body = format_indented_lines(parts.cs_format(precision), 1) # Generate generic ffc code snippets and add uflacs specific parts code = initialize_integral_code(ir, prefix, parameters) code["tabulate_tensor"] = body code["additional_includes_set"] = set(ig.get_includes()) code["additional_includes_set"].update(ir.get("additional_includes_set", ())) # TODO: Move to initialize_integral_code, this is not representation specific if ir.get("num_cells") is not None: ret = backend.language.Return(ir["num_cells"]) code["num_cells"] = format_indented_lines(ret.cs_format(), 1) return code ffc-2019.1.0.post0/ffc/uflacs/uflacsoptimization.py000066400000000000000000000020031346026536000220570ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013-2017 Martin Sandve Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . from ffc.log import info def optimize_integral_ir(ir, parameters): "Compute optimized intermediate representation of integral." info("Optimizing uflacs representation") # TODO: Implement optimization of ssa representation prior to code generation here. oir = ir return oir ffc-2019.1.0.post0/ffc/uflacs/uflacsrepresentation.py000066400000000000000000000135421346026536000224050ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2013-2017 Martin Sandve Alnæs # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import numpy from ufl.algorithms import replace from ufl.utils.sorting import sorted_by_count from ufl import custom_integral_types from ffc.log import info from ffc.representationutils import initialize_integral_ir from ffc.fiatinterface import create_element from ffc.uflacs.tools import collect_quadrature_rules, compute_quadrature_rules, accumulate_integrals from ffc.uflacs.build_uflacs_ir import build_uflacs_ir def compute_integral_ir(itg_data, form_data, form_id, element_numbers, classnames, parameters): "Compute intermediate represention of integral." 
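For a scalar P1 bilinear form on triangles, the pieces assembled below end up looking roughly like this (a sketch with illustrative values, not literal output of this function; p1_element, num_points, points and weights are placeholders):

    ir["element_dimensions"] = {p1_element: 3}                 # 3 dofs per triangle for P1
    ir["tensor_shape"]       = [3, 3]                          # test dofs x trial dofs (doubled for interior facets)
    ir["quadrature_rules"]   = {num_points: (points, weights)} # one entry per distinct rule size
    ir["classnames"]         = classnames                      # ufc classnames passed in by the caller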
info("Computing uflacs representation") # Initialise representation ir = initialize_integral_ir("uflacs", itg_data, form_data, form_id) # Store element classnames ir["classnames"] = classnames # Get element space dimensions unique_elements = element_numbers.keys() ir["element_dimensions"] = { ufl_element: create_element(ufl_element).space_dimension() for ufl_element in unique_elements } # Create dimensions of primary indices, needed to reset the argument 'A' # given to tabulate_tensor() by the assembler. argument_dimensions = [ir["element_dimensions"][ufl_element] for ufl_element in form_data.argument_elements] # Compute shape of element tensor if ir["integral_type"] == "interior_facet": ir["tensor_shape"] = [2 * dim for dim in argument_dimensions] else: ir["tensor_shape"] = argument_dimensions integral_type = itg_data.integral_type cell = itg_data.domain.ufl_cell() if integral_type in custom_integral_types: # Set quadrature degree to twice the highest element degree, to get # enough points to identify basis functions via table computations max_element_degree = max([1] + [ufl_element.degree() for ufl_element in unique_elements]) rules = [("default", 2*max_element_degree)] quadrature_integral_type = "cell" else: # Collect which quadrature rules occur in integrals default_scheme = itg_data.metadata["quadrature_rule"] default_degree = itg_data.metadata["quadrature_degree"] rules = collect_quadrature_rules( itg_data.integrals, default_scheme, default_degree) quadrature_integral_type = integral_type # Compute actual points and weights quadrature_rules, quadrature_rule_sizes = compute_quadrature_rules( rules, quadrature_integral_type, cell) # Store quadrature rules in format { num_points: (points, weights) } ir["quadrature_rules"] = quadrature_rules # Store the fake num_points for analysis in custom integrals if integral_type in custom_integral_types: ir["fake_num_points"], = quadrature_rules.keys() # Group and accumulate integrals on the format { num_points: integral data } sorted_integrals = accumulate_integrals(itg_data, quadrature_rule_sizes) # Build coefficient numbering for UFC interface here, to avoid # renumbering in UFL and application of replace mapping if True: # Using the mapped coefficients, numbered by UFL coefficient_numbering = {} sorted_coefficients = sorted_by_count(form_data.function_replace_map.keys()) for i, f in enumerate(sorted_coefficients): g = form_data.function_replace_map[f] coefficient_numbering[g] = i assert i == g.count() # Replace coefficients so they all have proper element and domain for what's to come # TODO: We can avoid the replace call when proper Expression support is in place # and element/domain assignment is removed from compute_form_data.
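The net effect of the numbering loop above, for a form with two original coefficients f and g (placeholder names), is schematically:

    f_mapped = form_data.function_replace_map[f]
    g_mapped = form_data.function_replace_map[g]
    assert coefficient_numbering == {f_mapped: 0, g_mapped: 1}   # indices follow UFL's count ordering

The replace() call just below then rewrites every integrand in terms of these mapped coefficients, which carry fully resolved elements and domains.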
integrands = { num_points: replace(sorted_integrals[num_points].integrand(), form_data.function_replace_map) for num_points in sorted(sorted_integrals) } else: pass #coefficient_numbering = {} #coefficient_element = {} #coefficient_domain = {} #sorted_coefficients = sorted_by_count(form_data.function_replace_map.keys()) #for i, f in enumerate(sorted_coefficients): # g = form_data.function_replace_map[f] # coefficient_numbering[f] = i # coefficient_element[f] = g.ufl_element() # coefficient_domain[f] = g.ufl_domain() #integrands = { # num_points: sorted_integrals[num_points].integrand() # for num_points in sorted(sorted_integrals) # } # then pass coefficient_element and coefficient_domain to the uflacs ir as well # Build the more uflacs-specific intermediate representation uflacs_ir = build_uflacs_ir(itg_data.domain.ufl_cell(), itg_data.integral_type, ir["entitytype"], integrands, ir["tensor_shape"], coefficient_numbering, quadrature_rules, parameters) ir.update(uflacs_ir) return ir ffc-2019.1.0.post0/ffc/utils.py000066400000000000000000000044401346026536000160250ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2005-2017 Anders Logg # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Kristian B. Oelgaard, 2009 # Modified by Martin Sandve Alnæs 2014 # Python modules. import operator import functools import itertools # FFC modules. from .log import error from ufl.utils.sequences import product def all_equal(sequence): "Check that all items in list are equal." return sequence[:-1] == sequence[1:] def pick_first(sequence): "Check that all values are equal and return the value." if not all_equal(sequence): error("Values differ: " + str(sequence)) return sequence[0] def listcopy(sequence): """Create a copy of the list, calling the copy constructor on each object in the list (problems when using copy.deepcopy).""" if not sequence: return [] else: return [object.__class__(object) for object in sequence] def compute_permutations(k, n, skip=[]): """Compute all permutations of k elements from (0, n) in rising order. Any elements that are contained in the list skip are not included.""" if k == 1: return [(i,) for i in range(n) if i not in skip] pp = compute_permutations(k - 1, n, skip) permutations = [] for i in range(n): if i in skip: continue for p in pp: if i < p[0]: permutations += [(i, ) + p] return permutations def insert_nested_dict(root, keys, value): "Set root[keys[0]][...][keys[-1]] = value, creating subdicts on the way if missing." for k in keys[:-1]: d = root.get(k) if d is None: d = {} root[k] = d root = d root[keys[-1]] = value ffc-2019.1.0.post0/ffc/wrappers.py000066400000000000000000000102551346026536000165310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010-2017 Anders Logg # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # Python modules from itertools import chain # FFC modules from ffc.log import begin, end, info, error from ffc.utils import all_equal from ffc.classname import make_classname from ffc.backends.dolfin.wrappers import generate_dolfin_code from ffc.backends.dolfin.capsules import UFCElementNames, UFCFormNames __all__ = ["generate_wrapper_code"] # FIXME: More clean-ups needed here. def generate_wrapper_code(analysis, prefix, object_names, parameters): "Generate code for additional wrappers." # Skip if wrappers not requested if not parameters["format"] == "dolfin": return None # Return dolfin wrapper return _generate_dolfin_wrapper(analysis, prefix, object_names, parameters) def _generate_dolfin_wrapper(analysis, prefix, object_names, parameters): begin("Compiler stage 4.1: Generating additional wrapper code") # Encapsulate data (capsules, common_space) = _encapsulate(prefix, object_names, analysis, parameters) # Generate code info("Generating wrapper code for DOLFIN") code = generate_dolfin_code(prefix, "", capsules, common_space, error_control=parameters["error_control"]) code += "\n\n" end() return code def _encapsulate(prefix, object_names, analysis, parameters): # Extract data from analysis form_datas, elements, element_map, domains = analysis # FIXME: Encapsulate domains? num_form_datas = len(form_datas) common_space = False # Special case: single element if num_form_datas == 0: capsules = _encapsule_element(prefix, elements) # Special case: with error control elif parameters["error_control"] and num_form_datas == 11: capsules = [_encapsule_form(prefix, object_names, form_data, i, element_map) for (i, form_data) in enumerate(form_datas[:num_form_datas - 1])] capsules += [_encapsule_form(prefix, object_names, form_datas[-1], num_form_datas - 1, element_map, "GoalFunctional")] # Otherwise: generate standard capsules for each form else: capsules = [_encapsule_form(prefix, object_names, form_data, i, element_map) for (i, form_data) in enumerate(form_datas)] # Check if all argument elements are equal elements = [] for form_data in form_datas: elements += form_data.argument_elements common_space = all_equal(elements) return (capsules, common_space) def _encapsule_form(prefix, object_names, form_data, i, element_map, superclassname=None): element_numbers = [element_map[e] for e in form_data.argument_elements + form_data.coefficient_elements] if superclassname is None: superclassname = "Form" form_names = UFCFormNames( object_names.get(id(form_data.original_form), "%d" % i), [object_names.get(id(obj), "w%d" % j) for j, obj in enumerate(form_data.reduced_coefficients)], make_classname(prefix, "form", i), [make_classname(prefix, "finite_element", j) for j in element_numbers], [make_classname(prefix, "dofmap", j) for j in element_numbers], superclassname) return form_names def _encapsule_element(prefix, elements): element_number = len(elements) - 1 # eh? 
this doesn't make any sense args = ("0", [make_classname(prefix, "finite_element", element_number)], [make_classname(prefix, "dofmap", element_number)]) return UFCElementNames(*args) ffc-2019.1.0.post0/libs/000077500000000000000000000000001346026536000145045ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/ffc-factory/000077500000000000000000000000001346026536000167075ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/ffc-factory/README.rst000066400000000000000000000005131346026536000203750ustar00rootroot00000000000000UFC Python wrappers =================== This module provides UFC Python wrappers in the module `ufc_wrappers`. It uses pybind11 to generate the wrapper code. Installation ------------ FFC must be installed to build this module. To build and install: pip install . --user Adding functionality -------------------- TODO. ffc-2019.1.0.post0/libs/ffc-factory/dofmap.cpp000066400000000000000000000021511346026536000206600ustar00rootroot00000000000000// Copyright (C) 2017 Garth N. Wells // // This file is part of FFC. // // FFC is free software: you can redistribute it and/or modify // it under the terms of the GNU Lesser General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // FFC is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public License // along with FFC. If not, see . #include #include #include #include #include namespace py = pybind11; namespace ufc_wrappers { void dofmap(py::module& m) { // Wrap ufc::dofmap classe py::class_> dofmap(m, "dofmap"); dofmap.def("topological_dimension", &ufc::dofmap::topological_dimension); } } ffc-2019.1.0.post0/libs/ffc-factory/finite_element.cpp000066400000000000000000000173461346026536000224150ustar00rootroot00000000000000// Copyright (C) 2017 Garth N. Wells // // This file is part of FFC. // // FFC is free software: you can redistribute it and/or modify // it under the terms of the GNU Lesser General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // FFC is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public License // along with FFC. If not, see . 
#include #include #include #include #include #include #include namespace py = pybind11; namespace ufc_wrappers { // Interface function to // ufc::finite_element::evaluate_reference_basis py::array_t evaluate_reference_basis(ufc::finite_element &instance, py::array_t x); py::array_t evaluate_reference_basis_derivatives(const ufc::finite_element &instance, std::size_t order, py::array_t x); py::array_t transform_reference_basis_derivatives(ufc::finite_element &instance, std::size_t order, py::array_t reference_values, py::array_t X, py::array_t J, py::array_t detJ, py::array_t K, int orientation); // Remove from UFC py::array_t evaluate_basis(ufc::finite_element &instance, std::size_t i, py::array_t x, py::array_t coordinate_dofs, int cell_orientation); // Remove from UFC py::array_t evaluate_basis_all(ufc::finite_element &instance, py::array_t x, py::array_t coordinate_dofs, int cell_orientation); // Remove from UFC py::array_t evaluate_basis_derivatives(ufc::finite_element &instance, std::size_t i, std::size_t n, py::array_t x, py::array_t coordinate_dofs, int cell_orientation); // Remove from UFC py::array_t evaluate_basis_derivatives_all(ufc::finite_element &instance, std::size_t n, py::array_t x, py::array_t coordinate_dofs, int cell_orientation); double evaluate_dof(ufc::finite_element &instance, std::size_t i, const ufc::function& f, py::array_t coordinate_dofs, int cell_orientation, const ufc::cell& cell); py::array_t evaluate_dofs(ufc::finite_element &instance, const ufc::function& f, py::array_t coordinate_dofs, int cell_orientation, const ufc::cell& cell); py::array_t interpolate_vertex_values(ufc::finite_element &instance, py::array_t dof_values, py::array_t coordinate_dofs, int cell_orientation); py::array_t tabulate_dof_coordinates(ufc::finite_element &instance, py::array_t coordinate_dofs); py::array_t tabulate_reference_dof_coordinates(ufc::finite_element &instance); // Add the Python submodule 'finite_element' void finite_element(py::module& m) { // Wrap ufc classes py::class_> element(m, "finite_element"); // Wrap ufc::shape enum py::enum_ shape(m, "shape"); shape.value("interval", ufc::shape::interval); shape.value("triangle", ufc::shape::triangle); shape.value("tetrahedon", ufc::shape::tetrahedron); shape.value("quadrilateral", ufc::shape::quadrilateral); shape.value("hexahedron", ufc::shape::hexahedron); shape.value("vertex", ufc::shape::vertex); // Expose ufc::finite_element functions element.def("signature", &ufc::finite_element::signature); element.def("cell_shape", &ufc::finite_element::cell_shape); element.def("topological_dimension", &ufc::finite_element::topological_dimension); element.def("geometric_dimension", &ufc::finite_element::geometric_dimension); element.def("space_dimension", &ufc::finite_element::space_dimension); element.def("value_rank", &ufc::finite_element::value_rank); element.def("value_dimension", &ufc::finite_element::value_dimension); element.def("value_size", &ufc::finite_element::value_size); element.def("reference_value_rank", &ufc::finite_element::reference_value_rank); element.def("reference_value_dimension", &ufc::finite_element::reference_value_dimension); element.def("reference_value_size", &ufc::finite_element::reference_value_size); element.def("degree", &ufc::finite_element::degree); element.def("family", &ufc::finite_element::family); element.def("evaluate_reference_basis", &ufc_wrappers::evaluate_reference_basis, "evaluate reference basis"); element.def("evaluate_reference_basis_derivatives", 
&ufc_wrappers::evaluate_reference_basis_derivatives, "evaluate reference basis"); element.def("transform_reference_basis_derivatives", &ufc_wrappers::transform_reference_basis_derivatives, "transform reference basis derivatives"); element.def("evaluate_basis", &ufc_wrappers::evaluate_basis, "evaluate basis"); element.def("evaluate_basis_all", &ufc_wrappers::evaluate_basis_all, "evaluate basis all"); element.def("evaluate_basis_derivatives", &ufc_wrappers::evaluate_basis_derivatives, "evaluate basis derivatives"); element.def("evaluate_basis_derivatives_all", &ufc_wrappers::evaluate_basis_derivatives_all, "evaluate basis derivatives all"); element.def("evaluate_dof", &ufc_wrappers::evaluate_dof, "evaluate dof"); element.def("evaluate_dofs", &ufc_wrappers::evaluate_dofs, "evaluate dofs"); element.def("interpolate_vertex_values", &ufc_wrappers::interpolate_vertex_values, "interpolate vertex values"); element.def("tabulate_dof_coordinates", &ufc_wrappers::tabulate_dof_coordinates, "tabulate dof coordinates"); element.def("tabulate_reference_dof_coordinates", &ufc_wrappers::tabulate_reference_dof_coordinates, "tabulate reference dof coordinates"); element.def("num_sub_elements", &ufc::finite_element::num_sub_elements); element.def("create_sub_element", &ufc::finite_element::create_sub_element, py::return_value_policy::take_ownership); element.def("create", &ufc::finite_element::create, py::return_value_policy::take_ownership); } } ffc-2019.1.0.post0/libs/ffc-factory/finite_element_bindings.cpp000066400000000000000000000362561346026536000242730ustar00rootroot00000000000000// Copyright (C) 2017 Garth N. Wells // // This file is part of FFC. // // FFC is free software: you can redistribute it and/or modify // it under the terms of the GNU Lesser General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // FFC is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public License // along with FFC. If not, see . 
#include #include #include #include #include #include #include namespace py = pybind11; namespace ufc_wrappers { void check_array_shape(py::array_t x, const std::vector shape) { const py::buffer_info x_info = x.request(); if ((std::size_t) x_info.ndim != shape.size()) throw std::runtime_error("NumPy array has wrong number of dimensions"); const auto& x_shape = x_info.shape; for (std::size_t i =0; i < shape.size();++i) { if ((std::size_t) x_shape[i] != shape[i]) throw std::runtime_error("NumPy array has wrong size"); } } // Interface function to // ufc::finite_element::evaluate_reference_basis py::array_t evaluate_reference_basis(ufc::finite_element &instance, py::array_t x) { // Dimensions const std::size_t num_dofs = instance.space_dimension(); const std::size_t ref_size = instance.reference_value_size(); const std::size_t tdim = instance.topological_dimension(); // Extract point data const py::buffer_info x_info = x.request(); const auto& x_shape = x_info.shape; const std::size_t num_points = x_shape[0]; check_array_shape(x, {num_points, tdim}); // Create object to hold data to be computed and returned py::array_t f({num_points, num_dofs, ref_size}); // Evaluate basis instance.evaluate_reference_basis(f.mutable_data(), num_points, x.data()); return f; } py::array_t evaluate_reference_basis_derivatives(const ufc::finite_element &instance, std::size_t order, py::array_t x) { // Dimensions const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); const std::size_t ref_size = instance.reference_value_size(); // Extract point data py::buffer_info x_info = x.request(); const auto& x_shape = x_info.shape; const std::size_t num_points = x_shape[0]; check_array_shape(x, {num_points, tdim}); // Shape of data to be computed and returned, and create const std::size_t num_derivs = pow(tdim, order); py::array_t f({num_points, num_dofs, num_derivs, ref_size}); // Evaluate basis derivatives instance.evaluate_reference_basis_derivatives(f.mutable_data(), order, num_points, x.data()); return f; } py::array_t transform_reference_basis_derivatives(ufc::finite_element &instance, std::size_t order, py::array_t reference_values, py::array_t X, py::array_t J, py::array_t detJ, py::array_t K, int orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); const std::size_t ref_size = instance.reference_value_size(); const std::size_t num_derivs = std::pow(tdim, order); // Extract reference value data (reference_values) py::buffer_info info_reference_values = reference_values.request(); const auto& shape_reference_values = info_reference_values.shape; const std::size_t num_points = shape_reference_values[0]; check_array_shape(reference_values, {num_points, num_dofs, num_derivs, ref_size}); // Extract point data (X) py::buffer_info x_info = X.request(); check_array_shape(X, {num_points, tdim}); // Extract Jacobian data (J) py::buffer_info info_J = J.request(); check_array_shape(J, {num_points, gdim, tdim}); // Extract determinant of Jacobian data (detJ) py::buffer_info info_detJ = detJ.request(); check_array_shape(detJ, {num_points}); // Extract inverse of Jacobian data (K) py::buffer_info info_K = K.request(); check_array_shape(K, {num_points, tdim, gdim}); // Shape of data to be computed and returned, and create py::array_t f({num_points, num_dofs, num_derivs, ref_size}); // Evaluate basis derivatives 
instance.transform_reference_basis_derivatives(f.mutable_data(), order, num_points, reference_values.data(), X.data(), J.data(), detJ.data(), K.data(), orientation); return f; } // Remove from UFC py::array_t evaluate_basis(ufc::finite_element &instance, std::size_t i, py::array_t x, py::array_t coordinate_dofs, int cell_orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t ref_size = instance.reference_value_size(); // Extract point data (x) py::buffer_info info_x = x.request(); check_array_shape(x, {gdim}); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t values(ref_size); instance.evaluate_basis(i, values.mutable_data(), x.data(), coordinate_dofs.data(), cell_orientation, nullptr); return values; } // Remove from UFC py::array_t evaluate_basis_all(ufc::finite_element &instance, py::array_t x, py::array_t coordinate_dofs, int cell_orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); const std::size_t ref_size = instance.reference_value_size(); // Extract point data (x) py::buffer_info info_x = x.request(); check_array_shape(x, {gdim}); // Extract reference value data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t f({num_dofs, ref_size}); // Call UFC function instance.evaluate_basis_all(f.mutable_data(), x.data(), coordinate_dofs.data(), cell_orientation, nullptr); return f; } // Remove from UFC py::array_t evaluate_basis_derivatives(ufc::finite_element &instance, std::size_t i, std::size_t n, py::array_t x, py::array_t coordinate_dofs, int cell_orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); // Extract point data (x) py::buffer_info info_x = x.request(); check_array_shape(x, {gdim}); // Extract reference value data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t f(gdim); // Call UFC function instance.evaluate_basis_derivatives(i, n, f.mutable_data(), x.data(), coordinate_dofs.data(), cell_orientation, nullptr); return f; } // Remove from UFC py::array_t evaluate_basis_derivatives_all(ufc::finite_element &instance, std::size_t n, py::array_t x, py::array_t coordinate_dofs, int cell_orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); // Extract point data (x) py::buffer_info info_x = x.request(); check_array_shape(x, {gdim}); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t 
f({num_dofs, gdim}); // Call UFC function instance.evaluate_basis_derivatives_all(n, f.mutable_data(), x.data(), coordinate_dofs.data(), cell_orientation, nullptr); return f; } double evaluate_dof(ufc::finite_element &instance, std::size_t i, const ufc::function& f, py::array_t coordinate_dofs, int cell_orientation, const ufc::cell& cell) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Call UFC function return instance.evaluate_dof(i, f , coordinate_dofs.data(), cell_orientation, cell, nullptr); } py::array_t evaluate_dofs(ufc::finite_element &instance, const ufc::function& f, py::array_t coordinate_dofs, int cell_orientation, const ufc::cell& cell) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t values(num_dofs); // Call UFC function instance.evaluate_dofs(values.mutable_data(), f , coordinate_dofs.data(), cell_orientation, cell, nullptr); return values; } py::array_t interpolate_vertex_values(ufc::finite_element &instance, py::array_t dof_values, py::array_t coordinate_dofs, int cell_orientation) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); const std::size_t num_components = instance.value_size(); // Extract dof values (dof_values) py::buffer_info info_dof_values = dof_values.request(); check_array_shape(dof_values, {num_dofs}); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create std::size_t num_vertices = tdim + 1; py::array_t vertex_values({num_vertices, num_components}); // Call UFC function instance.interpolate_vertex_values(vertex_values.mutable_data(), dof_values.data(), coordinate_dofs.data(), cell_orientation, nullptr); return vertex_values; } py::array_t tabulate_dof_coordinates(ufc::finite_element &instance, py::array_t coordinate_dofs) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); const std::size_t tdim = instance.topological_dimension(); const std::size_t num_dofs = instance.space_dimension(); //const std::size_t num_components = instance.value_size(); // Extract coordinate data (coordinate_dofs) py::buffer_info info_coordinate_dofs = coordinate_dofs.request(); // FIXME: this assumes an affine map check_array_shape(coordinate_dofs, {tdim + 1, gdim}); // Shape of data to be computed and returned, and create py::array_t coords({num_dofs, gdim}); // Call UFC function instance.tabulate_dof_coordinates(coords.mutable_data(), coordinate_dofs.data(), nullptr); return coords; } py::array_t tabulate_reference_dof_coordinates(ufc::finite_element &instance) { // Dimensions const std::size_t gdim = instance.geometric_dimension(); 
const std::size_t num_dofs = instance.space_dimension(); // Shape of data to be computed and returned, and create py::array_t coords({num_dofs, gdim}); // Call UFC function instance.tabulate_reference_dof_coordinates(coords.mutable_data()); return coords; } } ffc-2019.1.0.post0/libs/ffc-factory/interface.cpp000066400000000000000000000035141346026536000213560ustar00rootroot00000000000000// Copyright (C) 2017 Garth N. Wells // // This file is part of FFC. // // FFC is free software: you can redistribute it and/or modify // it under the terms of the GNU Lesser General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // FFC is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public License // along with FFC. If not, see . #include #include #include namespace py = pybind11; namespace ufc_wrappers { void finite_element(py::module& m); void dofmap(py::module& m); } //PYBIND11_MODULE(ffc_factory, m) PYBIND11_PLUGIN(ffc_factory) { pybind11::module m("ffc_factory", "Factory for wrapping FFC JIT-compiled objects"); // Create finite_element submodule py::module finite_element = m.def_submodule("finite_element", "UFC finite_element"); ufc_wrappers::finite_element(finite_element); // Create dofmap submodule py::module dofmap = m.def_submodule("dofmap", "UFC dofmap"); ufc_wrappers::dofmap(dofmap); // Factory functions m.def("make_ufc_finite_element", [](std::uintptr_t e) { ufc::finite_element * p = reinterpret_cast(e); return std::shared_ptr(p); }); m.def("make_ufc_dofmap", [](std::uintptr_t e) { ufc::dofmap * p = reinterpret_cast(e); return std::shared_ptr(p); }); return m.ptr(); } ffc-2019.1.0.post0/libs/ffc-factory/setup.py000066400000000000000000000064511346026536000204270ustar00rootroot00000000000000from setuptools import setup, Extension, find_packages from setuptools.command.build_ext import build_ext import sys import setuptools import ffc __version__ = '0.0.1' class get_pybind_include(object): """Helper class to determine the pybind11 include path The purpose of this class is to postpone importing pybind11 until it is actually installed, so that the ``get_include()`` method can be invoked. """ def __init__(self, user=False): self.user = user def __str__(self): import pybind11 return pybind11.get_include(self.user) ext_modules = [ Extension('ffc_factory', ['interface.cpp', 'dofmap.cpp', 'finite_element.cpp', 'finite_element_bindings.cpp'], include_dirs=[ # Path to pybind11 headers ffc.get_include_path(), get_pybind_include(), get_pybind_include(user=True) ], language='c++'), ] # As of Python 3.6, CCompiler has a `has_flag` method. # cf http://bugs.python.org/issue26689 def has_flag(compiler, flagname): """Return a boolean indicating whether a flag name is supported on the specified compiler. """ import tempfile with tempfile.NamedTemporaryFile('w', suffix='.cpp') as f: f.write('int main (int argc, char **argv) { return 0; }') try: compiler.compile([f.name], extra_postargs=[flagname]) except setuptools.distutils.errors.CompileError: return False return True def cpp_flag(compiler): """Return the -std=c++[11/14] compiler flag. The c++14 is prefered over c++11 (when it is available). 
""" if has_flag(compiler, '-std=c++14'): return '-std=c++14' elif has_flag(compiler, '-std=c++11'): return '-std=c++11' else: raise RuntimeError('Unsupported compiler -- at least C++11 support ' 'is needed!') class BuildExt(build_ext): """A custom build extension for adding compiler-specific options.""" c_opts = { 'msvc': ['/EHsc'], 'unix': [], } if sys.platform == 'darwin': c_opts['unix'] += ['-stdlib=libc++', '-mmacosx-version-min=10.7'] def build_extensions(self): ct = self.compiler.compiler_type opts = self.c_opts.get(ct, []) if ct == 'unix': opts.append('-DVERSION_INFO="%s"' % self.distribution.get_version()) opts.append(cpp_flag(self.compiler)) if has_flag(self.compiler, '-fvisibility=default'): opts.append('-fvisibility=default') elif ct == 'msvc': opts.append('/DVERSION_INFO=\\"%s\\"' % self.distribution.get_version()) for ext in self.extensions: ext.extra_compile_args = opts build_ext.build_extensions(self) setup(name='fenics-ffc-factory', version=__version__, author='FEniCS Project', author_email='fenics-dev@googlegroups.com', url='https://fenicsproject.org', description='Python wrappers for UFC using pybind11', long_description='', packages=find_packages(), ext_modules=ext_modules, setup_requires=['pybind11>=1.7', 'fenics-ffc'], cmdclass={'build_ext': BuildExt}, zip_safe=False,) ffc-2019.1.0.post0/libs/gtest-1.7.0/000077500000000000000000000000001346026536000162735ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/CHANGES000066400000000000000000000147651346026536000173030ustar00rootroot00000000000000Changes for 1.7.0: * New feature: death tests are supported on OpenBSD and in iOS simulator now. * New feature: Google Test now implements a protocol to allow a test runner to detect that a test program has exited prematurely and report it as a failure (before it would be falsely reported as a success if the exit code is 0). * New feature: Test::RecordProperty() can now be used outside of the lifespan of a test method, in which case it will be attributed to the current test case or the test program in the XML report. * New feature (potentially breaking): --gtest_list_tests now prints the type parameters and value parameters for each test. * Improvement: char pointers and char arrays are now escaped properly in failure messages. * Improvement: failure summary in XML reports now includes file and line information. * Improvement: the XML element now has a timestamp attribute. * Improvement: When --gtest_filter is specified, XML report now doesn't contain information about tests that are filtered out. * Fixed the bug where long --gtest_filter flag values are truncated in death tests. * Potentially breaking change: RUN_ALL_TESTS() is now implemented as a function instead of a macro in order to work better with Clang. * Compatibility fixes with C++ 11 and various platforms. * Bug/warning fixes. Changes for 1.6.0: * New feature: ADD_FAILURE_AT() for reporting a test failure at the given source location -- useful for writing testing utilities. * New feature: the universal value printer is moved from Google Mock to Google Test. * New feature: type parameters and value parameters are reported in the XML report now. * A gtest_disable_pthreads CMake option. * Colored output works in GNU Screen sessions now. * Parameters of value-parameterized tests are now printed in the textual output. * Failures from ad hoc test assertions run before RUN_ALL_TESTS() are now correctly reported. * Arguments of ASSERT_XY and EXPECT_XY no longer need to support << to ostream. 
* More complete handling of exceptions. * GTEST_ASSERT_XY can be used instead of ASSERT_XY in case the latter name is already used by another library. * --gtest_catch_exceptions is now true by default, allowing a test program to continue after an exception is thrown. * Value-parameterized test fixtures can now derive from Test and WithParamInterface separately, easing conversion of legacy tests. * Death test messages are clearly marked to make them more distinguishable from other messages. * Compatibility fixes for Android, Google Native Client, MinGW, HP UX, PowerPC, Lucid autotools, libCStd, Sun C++, Borland C++ Builder (Code Gear), IBM XL C++ (Visual Age C++), and C++0x. * Bug fixes and implementation clean-ups. * Potentially incompatible changes: disables the harmful 'make install' command in autotools. Changes for 1.5.0: * New feature: assertions can be safely called in multiple threads where the pthreads library is available. * New feature: predicates used inside EXPECT_TRUE() and friends can now generate custom failure messages. * New feature: Google Test can now be compiled as a DLL. * New feature: fused source files are included. * New feature: prints help when encountering unrecognized Google Test flags. * Experimental feature: CMake build script (requires CMake 2.6.4+). * Experimental feature: the Pump script for meta programming. * double values streamed to an assertion are printed with enough precision to differentiate any two different values. * Google Test now works on Solaris and AIX. * Build and test script improvements. * Bug fixes and implementation clean-ups. Potentially breaking changes: * Stopped supporting VC++ 7.1 with exceptions disabled. * Dropped support for 'make install'. Changes for 1.4.0: * New feature: the event listener API * New feature: test shuffling * New feature: the XML report format is closer to junitreport and can be parsed by Hudson now. * New feature: when a test runs under Visual Studio, its failures are integrated in the IDE. * New feature: /MD(d) versions of VC++ projects. * New feature: elapsed time for the tests is printed by default. * New feature: comes with a TR1 tuple implementation such that Boost is no longer needed for Combine(). * New feature: EXPECT_DEATH_IF_SUPPORTED macro and friends. * New feature: the Xcode project can now produce static gtest libraries in addition to a framework. * Compatibility fixes for Solaris, Cygwin, minGW, Windows Mobile, Symbian, gcc, and C++Builder. * Bug fixes and implementation clean-ups. Changes for 1.3.0: * New feature: death tests on Windows, Cygwin, and Mac. * New feature: ability to use Google Test assertions in other testing frameworks. * New feature: ability to run disabled test via --gtest_also_run_disabled_tests. * New feature: the --help flag for printing the usage. * New feature: access to Google Test flag values in user code. * New feature: a script that packs Google Test into one .h and one .cc file for easy deployment. * New feature: support for distributing test functions to multiple machines (requires support from the test runner). * Bug fixes and implementation clean-ups. Changes for 1.2.1: * Compatibility fixes for Linux IA-64 and IBM z/OS. * Added support for using Boost and other TR1 implementations. * Changes to the build scripts to support upcoming release of Google C++ Mocking Framework. * Added Makefile to the distribution package. * Improved build instructions in README. Changes for 1.2.0: * New feature: value-parameterized tests. 
* New feature: the ASSERT/EXPECT_(NON)FATAL_FAILURE(_ON_ALL_THREADS) macros. * Changed the XML report format to match JUnit/Ant's. * Added tests to the Xcode project. * Added scons/SConscript for building with SCons. * Added src/gtest-all.cc for building Google Test from a single file. * Fixed compatibility with Solaris and z/OS. * Enabled running Python tests on systems with python 2.3 installed, e.g. Mac OS X 10.4. * Bug fixes. Changes for 1.1.0: * New feature: type-parameterized tests. * New feature: exception assertions. * New feature: printing elapsed time of tests. * Improved the robustness of death tests. * Added an Xcode project and samples. * Adjusted the output format on Windows to be understandable by Visual Studio. * Minor bug fixes. Changes for 1.0.1: * Added project files for Visual Studio 7.1. * Fixed issues with compiling on Mac OS X. * Fixed issues with compiling on Cygwin. Changes for 1.0.0: * Initial Open Source release of Google Test ffc-2019.1.0.post0/libs/gtest-1.7.0/CMakeLists.txt000066400000000000000000000216401346026536000210360ustar00rootroot00000000000000######################################################################## # CMake build script for Google Test. # # To run the tests for Google Test itself on Linux, use 'make test' or # ctest. You can select which tests to run using 'ctest -R regex'. # For more options, run 'ctest --help'. # BUILD_SHARED_LIBS is a standard CMake variable, but we declare it here to # make it prominent in the GUI. option(BUILD_SHARED_LIBS "Build shared libraries (DLLs)." OFF) # When other libraries are using a shared version of runtime libraries, # Google Test also has to use one. option( gtest_force_shared_crt "Use shared (DLL) run-time lib even when Google Test is built as static lib." OFF) option(gtest_build_tests "Build all of gtest's own tests." OFF) option(gtest_build_samples "Build gtest's sample programs." OFF) option(gtest_disable_pthreads "Disable uses of pthreads in gtest." OFF) # Defines pre_project_set_up_hermetic_build() and set_up_hermetic_build(). include(cmake/hermetic_build.cmake OPTIONAL) if (COMMAND pre_project_set_up_hermetic_build) pre_project_set_up_hermetic_build() endif() ######################################################################## # # Project-wide settings # Name of the project. # # CMake files in this project can refer to the root source directory # as ${gtest_SOURCE_DIR} and to the root binary directory as # ${gtest_BINARY_DIR}. # Language "C" is required for find_package(Threads). project(gtest CXX C) cmake_minimum_required(VERSION 2.6.2) if (COMMAND set_up_hermetic_build) set_up_hermetic_build() endif() # Define helper functions and macros used by Google Test. include(cmake/internal_utils.cmake) config_compiler_and_linker() # Defined in internal_utils.cmake. # Where Google Test's .h files can be found. include_directories( ${gtest_SOURCE_DIR}/include ${gtest_SOURCE_DIR}) # Where Google Test's libraries can be found. link_directories(${gtest_BINARY_DIR}/src) ######################################################################## # # Defines the gtest & gtest_main libraries. User tests should link # with one of them. # Google Test libraries. We build them using more strict warnings than what # are used for other targets, to ensure that gtest can be compiled by a user # aggressive about warnings. 
cxx_library(gtest "${cxx_strict}" src/gtest-all.cc) cxx_library(gtest_main "${cxx_strict}" src/gtest_main.cc) target_link_libraries(gtest_main gtest) ######################################################################## # # Samples on how to link user tests with gtest or gtest_main. # # They are not built by default. To build them, set the # gtest_build_samples option to ON. You can do it by running ccmake # or specifying the -Dgtest_build_samples=ON flag when running cmake. if (gtest_build_samples) cxx_executable(sample1_unittest samples gtest_main samples/sample1.cc) cxx_executable(sample2_unittest samples gtest_main samples/sample2.cc) cxx_executable(sample3_unittest samples gtest_main) cxx_executable(sample4_unittest samples gtest_main samples/sample4.cc) cxx_executable(sample5_unittest samples gtest_main samples/sample1.cc) cxx_executable(sample6_unittest samples gtest_main) cxx_executable(sample7_unittest samples gtest_main) cxx_executable(sample8_unittest samples gtest_main) cxx_executable(sample9_unittest samples gtest) cxx_executable(sample10_unittest samples gtest) endif() ######################################################################## # # Google Test's own tests. # # You can skip this section if you aren't interested in testing # Google Test itself. # # The tests are not built by default. To build them, set the # gtest_build_tests option to ON. You can do it by running ccmake # or specifying the -Dgtest_build_tests=ON flag when running cmake. if (gtest_build_tests) # This must be set in the root directory for the tests to be run by # 'make test' or ctest. enable_testing() ############################################################ # C++ tests built with standard compiler flags. cxx_test(gtest-death-test_test gtest_main) cxx_test(gtest_environment_test gtest) cxx_test(gtest-filepath_test gtest_main) cxx_test(gtest-linked_ptr_test gtest_main) cxx_test(gtest-listener_test gtest_main) cxx_test(gtest_main_unittest gtest_main) cxx_test(gtest-message_test gtest_main) cxx_test(gtest_no_test_unittest gtest) cxx_test(gtest-options_test gtest_main) cxx_test(gtest-param-test_test gtest test/gtest-param-test2_test.cc) cxx_test(gtest-port_test gtest_main) cxx_test(gtest_pred_impl_unittest gtest_main) cxx_test(gtest_premature_exit_test gtest test/gtest_premature_exit_test.cc) cxx_test(gtest-printers_test gtest_main) cxx_test(gtest_prod_test gtest_main test/production.cc) cxx_test(gtest_repeat_test gtest) cxx_test(gtest_sole_header_test gtest_main) cxx_test(gtest_stress_test gtest) cxx_test(gtest-test-part_test gtest_main) cxx_test(gtest_throw_on_failure_ex_test gtest) cxx_test(gtest-typed-test_test gtest_main test/gtest-typed-test2_test.cc) cxx_test(gtest_unittest gtest_main) cxx_test(gtest-unittest-api_test gtest) ############################################################ # C++ tests built with non-standard compiler flags. # MSVC 7.1 does not support STL with exceptions disabled. 
if (NOT MSVC OR MSVC_VERSION GREATER 1310) cxx_library(gtest_no_exception "${cxx_no_exception}" src/gtest-all.cc) cxx_library(gtest_main_no_exception "${cxx_no_exception}" src/gtest-all.cc src/gtest_main.cc) endif() cxx_library(gtest_main_no_rtti "${cxx_no_rtti}" src/gtest-all.cc src/gtest_main.cc) cxx_test_with_flags(gtest-death-test_ex_nocatch_test "${cxx_exception} -DGTEST_ENABLE_CATCH_EXCEPTIONS_=0" gtest test/gtest-death-test_ex_test.cc) cxx_test_with_flags(gtest-death-test_ex_catch_test "${cxx_exception} -DGTEST_ENABLE_CATCH_EXCEPTIONS_=1" gtest test/gtest-death-test_ex_test.cc) cxx_test_with_flags(gtest_no_rtti_unittest "${cxx_no_rtti}" gtest_main_no_rtti test/gtest_unittest.cc) cxx_shared_library(gtest_dll "${cxx_default}" src/gtest-all.cc src/gtest_main.cc) cxx_executable_with_flags(gtest_dll_test_ "${cxx_default}" gtest_dll test/gtest_all_test.cc) set_target_properties(gtest_dll_test_ PROPERTIES COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1") if (NOT MSVC OR NOT MSVC_VERSION EQUAL 1600) # The C++ Standard specifies tuple_element. # Yet MSVC 10's declares tuple_element. # That declaration conflicts with our own standard-conforming # tuple implementation. Therefore using our own tuple with # MSVC 10 doesn't compile. cxx_library(gtest_main_use_own_tuple "${cxx_use_own_tuple}" src/gtest-all.cc src/gtest_main.cc) cxx_test_with_flags(gtest-tuple_test "${cxx_use_own_tuple}" gtest_main_use_own_tuple test/gtest-tuple_test.cc) cxx_test_with_flags(gtest_use_own_tuple_test "${cxx_use_own_tuple}" gtest_main_use_own_tuple test/gtest-param-test_test.cc test/gtest-param-test2_test.cc) endif() ############################################################ # Python tests. cxx_executable(gtest_break_on_failure_unittest_ test gtest) py_test(gtest_break_on_failure_unittest) # MSVC 7.1 does not support STL with exceptions disabled. if (NOT MSVC OR MSVC_VERSION GREATER 1310) cxx_executable_with_flags( gtest_catch_exceptions_no_ex_test_ "${cxx_no_exception}" gtest_main_no_exception test/gtest_catch_exceptions_test_.cc) endif() cxx_executable_with_flags( gtest_catch_exceptions_ex_test_ "${cxx_exception}" gtest_main test/gtest_catch_exceptions_test_.cc) py_test(gtest_catch_exceptions_test) cxx_executable(gtest_color_test_ test gtest) py_test(gtest_color_test) cxx_executable(gtest_env_var_test_ test gtest) py_test(gtest_env_var_test) cxx_executable(gtest_filter_unittest_ test gtest) py_test(gtest_filter_unittest) cxx_executable(gtest_help_test_ test gtest_main) py_test(gtest_help_test) cxx_executable(gtest_list_tests_unittest_ test gtest) py_test(gtest_list_tests_unittest) cxx_executable(gtest_output_test_ test gtest) py_test(gtest_output_test) cxx_executable(gtest_shuffle_test_ test gtest) py_test(gtest_shuffle_test) # MSVC 7.1 does not support STL with exceptions disabled. 
if (NOT MSVC OR MSVC_VERSION GREATER 1310) cxx_executable(gtest_throw_on_failure_test_ test gtest_no_exception) set_target_properties(gtest_throw_on_failure_test_ PROPERTIES COMPILE_FLAGS "${cxx_no_exception}") py_test(gtest_throw_on_failure_test) endif() cxx_executable(gtest_uninitialized_test_ test gtest) py_test(gtest_uninitialized_test) cxx_executable(gtest_xml_outfile1_test_ test gtest_main) cxx_executable(gtest_xml_outfile2_test_ test gtest_main) py_test(gtest_xml_outfiles_test) cxx_executable(gtest_xml_output_unittest_ test gtest) py_test(gtest_xml_output_unittest) endif() ffc-2019.1.0.post0/libs/gtest-1.7.0/CONTRIBUTORS000066400000000000000000000025161346026536000201570ustar00rootroot00000000000000# This file contains a list of people who've made non-trivial # contribution to the Google C++ Testing Framework project. People # who commit code to the project are encouraged to add their names # here. Please keep the list sorted by first names. Ajay Joshi Balázs Dán Bharat Mediratta Chandler Carruth Chris Prince Chris Taylor Dan Egnor Eric Roman Hady Zalek Jeffrey Yasskin Jói Sigurðsson Keir Mierle Keith Ray Kenton Varda Manuel Klimek Markus Heule Mika Raento Miklós Fazekas Pasi Valminen Patrick Hanna Patrick Riley Peter Kaminski Preston Jackson Rainer Klaffenboeck Russ Cox Russ Rufer Sean Mcafee Sigurður Ásgeirsson Tracy Bialik Vadim Berman Vlad Losev Zhanyong Wan ffc-2019.1.0.post0/libs/gtest-1.7.0/LICENSE000066400000000000000000000027031346026536000173020ustar00rootroot00000000000000Copyright 2008, Google Inc. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of Google Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
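The CMakeLists.txt above only compiles the gtest samples and Google Test's own test suite when the gtest_build_samples and gtest_build_tests cache options are switched on; a plain configure builds just the gtest and gtest_main libraries. A minimal sketch of how those options are typically passed on the command line (the build directory name and the source path are placeholders for illustration, not part of the shipped scripts):

  mkdir build && cd build
  cmake -Dgtest_build_samples=ON -Dgtest_build_tests=ON /path/to/gtest-1.7.0
  make
  ctest    # or 'make test'; runs the tests registered via enable_testing() and the cxx_test macros

The same switches can also be toggled interactively through ccmake, as the comments in the file note.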
ffc-2019.1.0.post0/libs/gtest-1.7.0/Makefile.am000066400000000000000000000230131346026536000203260ustar00rootroot00000000000000# Automake file ACLOCAL_AMFLAGS = -I m4 # Nonstandard package files for distribution EXTRA_DIST = \ CHANGES \ CONTRIBUTORS \ LICENSE \ include/gtest/gtest-param-test.h.pump \ include/gtest/internal/gtest-param-util-generated.h.pump \ include/gtest/internal/gtest-tuple.h.pump \ include/gtest/internal/gtest-type-util.h.pump \ make/Makefile \ scripts/fuse_gtest_files.py \ scripts/gen_gtest_pred_impl.py \ scripts/pump.py \ scripts/test/Makefile # gtest source files that we don't compile directly. They are # #included by gtest-all.cc. GTEST_SRC = \ src/gtest-death-test.cc \ src/gtest-filepath.cc \ src/gtest-internal-inl.h \ src/gtest-port.cc \ src/gtest-printers.cc \ src/gtest-test-part.cc \ src/gtest-typed-test.cc \ src/gtest.cc EXTRA_DIST += $(GTEST_SRC) # Sample files that we don't compile. EXTRA_DIST += \ samples/prime_tables.h \ samples/sample2_unittest.cc \ samples/sample3_unittest.cc \ samples/sample4_unittest.cc \ samples/sample5_unittest.cc \ samples/sample6_unittest.cc \ samples/sample7_unittest.cc \ samples/sample8_unittest.cc \ samples/sample9_unittest.cc # C++ test files that we don't compile directly. EXTRA_DIST += \ test/gtest-death-test_ex_test.cc \ test/gtest-death-test_test.cc \ test/gtest-filepath_test.cc \ test/gtest-linked_ptr_test.cc \ test/gtest-listener_test.cc \ test/gtest-message_test.cc \ test/gtest-options_test.cc \ test/gtest-param-test2_test.cc \ test/gtest-param-test2_test.cc \ test/gtest-param-test_test.cc \ test/gtest-param-test_test.cc \ test/gtest-param-test_test.h \ test/gtest-port_test.cc \ test/gtest_premature_exit_test.cc \ test/gtest-printers_test.cc \ test/gtest-test-part_test.cc \ test/gtest-tuple_test.cc \ test/gtest-typed-test2_test.cc \ test/gtest-typed-test_test.cc \ test/gtest-typed-test_test.h \ test/gtest-unittest-api_test.cc \ test/gtest_break_on_failure_unittest_.cc \ test/gtest_catch_exceptions_test_.cc \ test/gtest_color_test_.cc \ test/gtest_env_var_test_.cc \ test/gtest_environment_test.cc \ test/gtest_filter_unittest_.cc \ test/gtest_help_test_.cc \ test/gtest_list_tests_unittest_.cc \ test/gtest_main_unittest.cc \ test/gtest_no_test_unittest.cc \ test/gtest_output_test_.cc \ test/gtest_pred_impl_unittest.cc \ test/gtest_prod_test.cc \ test/gtest_repeat_test.cc \ test/gtest_shuffle_test_.cc \ test/gtest_sole_header_test.cc \ test/gtest_stress_test.cc \ test/gtest_throw_on_failure_ex_test.cc \ test/gtest_throw_on_failure_test_.cc \ test/gtest_uninitialized_test_.cc \ test/gtest_unittest.cc \ test/gtest_unittest.cc \ test/gtest_xml_outfile1_test_.cc \ test/gtest_xml_outfile2_test_.cc \ test/gtest_xml_output_unittest_.cc \ test/production.cc \ test/production.h # Python tests that we don't run. 
EXTRA_DIST += \ test/gtest_break_on_failure_unittest.py \ test/gtest_catch_exceptions_test.py \ test/gtest_color_test.py \ test/gtest_env_var_test.py \ test/gtest_filter_unittest.py \ test/gtest_help_test.py \ test/gtest_list_tests_unittest.py \ test/gtest_output_test.py \ test/gtest_output_test_golden_lin.txt \ test/gtest_shuffle_test.py \ test/gtest_test_utils.py \ test/gtest_throw_on_failure_test.py \ test/gtest_uninitialized_test.py \ test/gtest_xml_outfiles_test.py \ test/gtest_xml_output_unittest.py \ test/gtest_xml_test_utils.py # CMake script EXTRA_DIST += \ CMakeLists.txt \ cmake/internal_utils.cmake # MSVC project files EXTRA_DIST += \ msvc/gtest-md.sln \ msvc/gtest-md.vcproj \ msvc/gtest.sln \ msvc/gtest.vcproj \ msvc/gtest_main-md.vcproj \ msvc/gtest_main.vcproj \ msvc/gtest_prod_test-md.vcproj \ msvc/gtest_prod_test.vcproj \ msvc/gtest_unittest-md.vcproj \ msvc/gtest_unittest.vcproj # xcode project files EXTRA_DIST += \ xcode/Config/DebugProject.xcconfig \ xcode/Config/FrameworkTarget.xcconfig \ xcode/Config/General.xcconfig \ xcode/Config/ReleaseProject.xcconfig \ xcode/Config/StaticLibraryTarget.xcconfig \ xcode/Config/TestTarget.xcconfig \ xcode/Resources/Info.plist \ xcode/Scripts/runtests.sh \ xcode/Scripts/versiongenerate.py \ xcode/gtest.xcodeproj/project.pbxproj # xcode sample files EXTRA_DIST += \ xcode/Samples/FrameworkSample/Info.plist \ xcode/Samples/FrameworkSample/WidgetFramework.xcodeproj/project.pbxproj \ xcode/Samples/FrameworkSample/runtests.sh \ xcode/Samples/FrameworkSample/widget.cc \ xcode/Samples/FrameworkSample/widget.h \ xcode/Samples/FrameworkSample/widget_test.cc # C++Builder project files EXTRA_DIST += \ codegear/gtest.cbproj \ codegear/gtest.groupproj \ codegear/gtest_all.cc \ codegear/gtest_link.cc \ codegear/gtest_main.cbproj \ codegear/gtest_unittest.cbproj # Distribute and install M4 macro m4datadir = $(datadir)/aclocal m4data_DATA = m4/gtest.m4 EXTRA_DIST += $(m4data_DATA) # We define the global AM_CPPFLAGS as everything we compile includes from these # directories. AM_CPPFLAGS = -I$(srcdir) -I$(srcdir)/include # Modifies compiler and linker flags for pthreads compatibility. if HAVE_PTHREADS AM_CXXFLAGS = @PTHREAD_CFLAGS@ -DGTEST_HAS_PTHREAD=1 AM_LIBS = @PTHREAD_LIBS@ else AM_CXXFLAGS = -DGTEST_HAS_PTHREAD=0 endif # Build rules for libraries. lib_LTLIBRARIES = lib/libgtest.la lib/libgtest_main.la lib_libgtest_la_SOURCES = src/gtest-all.cc pkginclude_HEADERS = \ include/gtest/gtest-death-test.h \ include/gtest/gtest-message.h \ include/gtest/gtest-param-test.h \ include/gtest/gtest-printers.h \ include/gtest/gtest-spi.h \ include/gtest/gtest-test-part.h \ include/gtest/gtest-typed-test.h \ include/gtest/gtest.h \ include/gtest/gtest_pred_impl.h \ include/gtest/gtest_prod.h pkginclude_internaldir = $(pkgincludedir)/internal pkginclude_internal_HEADERS = \ include/gtest/internal/gtest-death-test-internal.h \ include/gtest/internal/gtest-filepath.h \ include/gtest/internal/gtest-internal.h \ include/gtest/internal/gtest-linked_ptr.h \ include/gtest/internal/gtest-param-util-generated.h \ include/gtest/internal/gtest-param-util.h \ include/gtest/internal/gtest-port.h \ include/gtest/internal/gtest-string.h \ include/gtest/internal/gtest-tuple.h \ include/gtest/internal/gtest-type-util.h lib_libgtest_main_la_SOURCES = src/gtest_main.cc lib_libgtest_main_la_LIBADD = lib/libgtest.la # Build rules for samples and tests. 
Automake's naming for some of # these variables isn't terribly obvious, so this is a brief # reference: # # TESTS -- Programs run automatically by "make check" # check_PROGRAMS -- Programs built by "make check" but not necessarily run noinst_LTLIBRARIES = samples/libsamples.la samples_libsamples_la_SOURCES = \ samples/sample1.cc \ samples/sample1.h \ samples/sample2.cc \ samples/sample2.h \ samples/sample3-inl.h \ samples/sample4.cc \ samples/sample4.h TESTS= TESTS_ENVIRONMENT = GTEST_SOURCE_DIR="$(srcdir)/test" \ GTEST_BUILD_DIR="$(top_builddir)/test" check_PROGRAMS= # A simple sample on using gtest. TESTS += samples/sample1_unittest check_PROGRAMS += samples/sample1_unittest samples_sample1_unittest_SOURCES = samples/sample1_unittest.cc samples_sample1_unittest_LDADD = lib/libgtest_main.la \ lib/libgtest.la \ samples/libsamples.la # Another sample. It also verifies that libgtest works. TESTS += samples/sample10_unittest check_PROGRAMS += samples/sample10_unittest samples_sample10_unittest_SOURCES = samples/sample10_unittest.cc samples_sample10_unittest_LDADD = lib/libgtest.la # This tests most constructs of gtest and verifies that libgtest_main # and libgtest work. TESTS += test/gtest_all_test check_PROGRAMS += test/gtest_all_test test_gtest_all_test_SOURCES = test/gtest_all_test.cc test_gtest_all_test_LDADD = lib/libgtest_main.la \ lib/libgtest.la # Tests that fused gtest files compile and work. FUSED_GTEST_SRC = \ fused-src/gtest/gtest-all.cc \ fused-src/gtest/gtest.h \ fused-src/gtest/gtest_main.cc if HAVE_PYTHON TESTS += test/fused_gtest_test check_PROGRAMS += test/fused_gtest_test test_fused_gtest_test_SOURCES = $(FUSED_GTEST_SRC) \ samples/sample1.cc samples/sample1_unittest.cc test_fused_gtest_test_CPPFLAGS = -I"$(srcdir)/fused-src" # Build rules for putting fused Google Test files into the distribution # package. The user can also create those files by manually running # scripts/fuse_gtest_files.py. $(test_fused_gtest_test_SOURCES): fused-gtest fused-gtest: $(pkginclude_HEADERS) $(pkginclude_internal_HEADERS) \ $(GTEST_SRC) src/gtest-all.cc src/gtest_main.cc \ scripts/fuse_gtest_files.py mkdir -p "$(srcdir)/fused-src" chmod -R u+w "$(srcdir)/fused-src" rm -f "$(srcdir)/fused-src/gtest/gtest-all.cc" rm -f "$(srcdir)/fused-src/gtest/gtest.h" "$(srcdir)/scripts/fuse_gtest_files.py" "$(srcdir)/fused-src" cp -f "$(srcdir)/src/gtest_main.cc" "$(srcdir)/fused-src/gtest/" maintainer-clean-local: rm -rf "$(srcdir)/fused-src" endif # Death tests may produce core dumps in the build directory. In case # this happens, clean them to keep distcleancheck happy. CLEANFILES = core # Disables 'make install' as installing a compiled version of Google # Test can lead to undefined behavior due to violation of the # One-Definition Rule. install-exec-local: echo "'make install' is dangerous and not supported. Instead, see README for how to integrate Google Test into your build system." false install-data-local: echo "'make install' is dangerous and not supported. Instead, see README for how to integrate Google Test into your build system." false ffc-2019.1.0.post0/libs/gtest-1.7.0/Makefile.in000066400000000000000000001632441346026536000203520ustar00rootroot00000000000000# Makefile.in generated by automake 1.11.3 from Makefile.am. # @configure_input@ # Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, # 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. 
# This Makefile.in is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. @SET_MAKE@ # Automake file VPATH = @srcdir@ pkgdatadir = $(datadir)/@PACKAGE@ pkgincludedir = $(includedir)/@PACKAGE@ pkglibdir = $(libdir)/@PACKAGE@ pkglibexecdir = $(libexecdir)/@PACKAGE@ am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd install_sh_DATA = $(install_sh) -c -m 644 install_sh_PROGRAM = $(install_sh) -c install_sh_SCRIPT = $(install_sh) -c INSTALL_HEADER = $(INSTALL_DATA) transform = $(program_transform_name) NORMAL_INSTALL = : PRE_INSTALL = : POST_INSTALL = : NORMAL_UNINSTALL = : PRE_UNINSTALL = : POST_UNINSTALL = : build_triplet = @build@ host_triplet = @host@ TESTS = samples/sample1_unittest$(EXEEXT) \ samples/sample10_unittest$(EXEEXT) \ test/gtest_all_test$(EXEEXT) $(am__EXEEXT_1) check_PROGRAMS = samples/sample1_unittest$(EXEEXT) \ samples/sample10_unittest$(EXEEXT) \ test/gtest_all_test$(EXEEXT) $(am__EXEEXT_1) @HAVE_PYTHON_TRUE@am__append_1 = test/fused_gtest_test @HAVE_PYTHON_TRUE@am__append_2 = test/fused_gtest_test subdir = . DIST_COMMON = README $(am__configure_deps) $(pkginclude_HEADERS) \ $(pkginclude_internal_HEADERS) $(srcdir)/Makefile.am \ $(srcdir)/Makefile.in $(top_srcdir)/build-aux/config.h.in \ $(top_srcdir)/configure $(top_srcdir)/scripts/gtest-config.in \ build-aux/config.guess build-aux/config.sub build-aux/depcomp \ build-aux/install-sh build-aux/ltmain.sh build-aux/missing ACLOCAL_M4 = $(top_srcdir)/aclocal.m4 am__aclocal_m4_deps = $(top_srcdir)/m4/libtool.m4 \ $(top_srcdir)/m4/ltoptions.m4 $(top_srcdir)/m4/ltsugar.m4 \ $(top_srcdir)/m4/ltversion.m4 $(top_srcdir)/m4/lt~obsolete.m4 \ $(top_srcdir)/m4/acx_pthread.m4 $(top_srcdir)/configure.ac am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \ $(ACLOCAL_M4) am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \ configure.lineno config.status.lineno mkinstalldirs = $(install_sh) -d CONFIG_HEADER = $(top_builddir)/build-aux/config.h CONFIG_CLEAN_FILES = scripts/gtest-config CONFIG_CLEAN_VPATH_FILES = am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; am__vpath_adj = case $$p in \ $(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \ *) f=$$p;; \ esac; am__strip_dir = f=`echo $$p | sed -e 's|^.*/||'`; am__install_max = 40 am__nobase_strip_setup = \ srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'` am__nobase_strip = \ for p in $$list; do echo "$$p"; done | sed -e "s|$$srcdirstrip/||" am__nobase_list = $(am__nobase_strip_setup); \ for p in $$list; do echo "$$p $$p"; done | \ sed "s| $$srcdirstrip/| |;"' / .*\//!s/ .*/ ./; s,\( .*\)/[^/]*$$,\1,' | \ $(AWK) 'BEGIN { files["."] = "" } { files[$$2] = files[$$2] " " $$1; \ if (++n[$$2] == $(am__install_max)) \ { print $$2, files[$$2]; n[$$2] = 0; files[$$2] = "" } } \ END { for (dir in files) print dir, files[dir] }' am__base_list = \ sed '$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;$$!N;s/\n/ /g' | \ sed '$$!N;$$!N;$$!N;$$!N;s/\n/ /g' am__uninstall_files_from_dir = { \ test -z "$$files" \ || { test ! -d "$$dir" && test ! -f "$$dir" && test ! 
-r "$$dir"; } \ || { echo " ( cd '$$dir' && rm -f" $$files ")"; \ $(am__cd) "$$dir" && rm -f $$files; }; \ } am__installdirs = "$(DESTDIR)$(libdir)" "$(DESTDIR)$(m4datadir)" \ "$(DESTDIR)$(pkgincludedir)" \ "$(DESTDIR)$(pkginclude_internaldir)" LTLIBRARIES = $(lib_LTLIBRARIES) $(noinst_LTLIBRARIES) lib_libgtest_la_LIBADD = am__dirstamp = $(am__leading_dot)dirstamp am_lib_libgtest_la_OBJECTS = src/gtest-all.lo lib_libgtest_la_OBJECTS = $(am_lib_libgtest_la_OBJECTS) lib_libgtest_main_la_DEPENDENCIES = lib/libgtest.la am_lib_libgtest_main_la_OBJECTS = src/gtest_main.lo lib_libgtest_main_la_OBJECTS = $(am_lib_libgtest_main_la_OBJECTS) samples_libsamples_la_LIBADD = am_samples_libsamples_la_OBJECTS = samples/sample1.lo \ samples/sample2.lo samples/sample4.lo samples_libsamples_la_OBJECTS = $(am_samples_libsamples_la_OBJECTS) @HAVE_PYTHON_TRUE@am__EXEEXT_1 = test/fused_gtest_test$(EXEEXT) am_samples_sample10_unittest_OBJECTS = \ samples/sample10_unittest.$(OBJEXT) samples_sample10_unittest_OBJECTS = \ $(am_samples_sample10_unittest_OBJECTS) samples_sample10_unittest_DEPENDENCIES = lib/libgtest.la am_samples_sample1_unittest_OBJECTS = \ samples/sample1_unittest.$(OBJEXT) samples_sample1_unittest_OBJECTS = \ $(am_samples_sample1_unittest_OBJECTS) samples_sample1_unittest_DEPENDENCIES = lib/libgtest_main.la \ lib/libgtest.la samples/libsamples.la am__test_fused_gtest_test_SOURCES_DIST = fused-src/gtest/gtest-all.cc \ fused-src/gtest/gtest.h fused-src/gtest/gtest_main.cc \ samples/sample1.cc samples/sample1_unittest.cc am__objects_1 = \ fused-src/gtest/test_fused_gtest_test-gtest-all.$(OBJEXT) \ fused-src/gtest/test_fused_gtest_test-gtest_main.$(OBJEXT) @HAVE_PYTHON_TRUE@am_test_fused_gtest_test_OBJECTS = $(am__objects_1) \ @HAVE_PYTHON_TRUE@ samples/test_fused_gtest_test-sample1.$(OBJEXT) \ @HAVE_PYTHON_TRUE@ samples/test_fused_gtest_test-sample1_unittest.$(OBJEXT) test_fused_gtest_test_OBJECTS = $(am_test_fused_gtest_test_OBJECTS) test_fused_gtest_test_LDADD = $(LDADD) am_test_gtest_all_test_OBJECTS = test/gtest_all_test.$(OBJEXT) test_gtest_all_test_OBJECTS = $(am_test_gtest_all_test_OBJECTS) test_gtest_all_test_DEPENDENCIES = lib/libgtest_main.la \ lib/libgtest.la DEFAULT_INCLUDES = -I.@am__isrc@ -I$(top_builddir)/build-aux depcomp = $(SHELL) $(top_srcdir)/build-aux/depcomp am__depfiles_maybe = depfiles am__mv = mv -f CXXCOMPILE = $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \ $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) LTCXXCOMPILE = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \ --mode=compile $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \ $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) CXXLD = $(CXX) CXXLINK = $(LIBTOOL) --tag=CXX $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \ --mode=link $(CXXLD) $(AM_CXXFLAGS) $(CXXFLAGS) $(AM_LDFLAGS) \ $(LDFLAGS) -o $@ COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \ $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) LTCOMPILE = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \ --mode=compile $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) \ $(AM_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) CCLD = $(CC) LINK = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) \ --mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) \ $(LDFLAGS) -o $@ SOURCES = $(lib_libgtest_la_SOURCES) $(lib_libgtest_main_la_SOURCES) \ $(samples_libsamples_la_SOURCES) \ $(samples_sample10_unittest_SOURCES) \ $(samples_sample1_unittest_SOURCES) \ $(test_fused_gtest_test_SOURCES) \ $(test_gtest_all_test_SOURCES) DIST_SOURCES = 
$(lib_libgtest_la_SOURCES) \ $(lib_libgtest_main_la_SOURCES) \ $(samples_libsamples_la_SOURCES) \ $(samples_sample10_unittest_SOURCES) \ $(samples_sample1_unittest_SOURCES) \ $(am__test_fused_gtest_test_SOURCES_DIST) \ $(test_gtest_all_test_SOURCES) DATA = $(m4data_DATA) HEADERS = $(pkginclude_HEADERS) $(pkginclude_internal_HEADERS) ETAGS = etags CTAGS = ctags am__tty_colors = \ red=; grn=; lgn=; blu=; std= DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST) distdir = $(PACKAGE)-$(VERSION) top_distdir = $(distdir) am__remove_distdir = \ if test -d "$(distdir)"; then \ find "$(distdir)" -type d ! -perm -200 -exec chmod u+w {} ';' \ && rm -rf "$(distdir)" \ || { sleep 5 && rm -rf "$(distdir)"; }; \ else :; fi DIST_ARCHIVES = $(distdir).tar.gz $(distdir).tar.bz2 $(distdir).zip GZIP_ENV = --best distuninstallcheck_listfiles = find . -type f -print am__distuninstallcheck_listfiles = $(distuninstallcheck_listfiles) \ | sed 's|^\./|$(prefix)/|' | grep -v '$(infodir)/dir$$' distcleancheck_listfiles = find . -type f -print ACLOCAL = @ACLOCAL@ AMTAR = @AMTAR@ AR = @AR@ AUTOCONF = @AUTOCONF@ AUTOHEADER = @AUTOHEADER@ AUTOMAKE = @AUTOMAKE@ AWK = @AWK@ CC = @CC@ CCDEPMODE = @CCDEPMODE@ CFLAGS = @CFLAGS@ CPP = @CPP@ CPPFLAGS = @CPPFLAGS@ CXX = @CXX@ CXXCPP = @CXXCPP@ CXXDEPMODE = @CXXDEPMODE@ CXXFLAGS = @CXXFLAGS@ CYGPATH_W = @CYGPATH_W@ DEFS = @DEFS@ DEPDIR = @DEPDIR@ DLLTOOL = @DLLTOOL@ DSYMUTIL = @DSYMUTIL@ DUMPBIN = @DUMPBIN@ ECHO_C = @ECHO_C@ ECHO_N = @ECHO_N@ ECHO_T = @ECHO_T@ EGREP = @EGREP@ EXEEXT = @EXEEXT@ FGREP = @FGREP@ GREP = @GREP@ INSTALL = @INSTALL@ INSTALL_DATA = @INSTALL_DATA@ INSTALL_PROGRAM = @INSTALL_PROGRAM@ INSTALL_SCRIPT = @INSTALL_SCRIPT@ INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@ LD = @LD@ LDFLAGS = @LDFLAGS@ LIBOBJS = @LIBOBJS@ LIBS = @LIBS@ LIBTOOL = @LIBTOOL@ LIPO = @LIPO@ LN_S = @LN_S@ LTLIBOBJS = @LTLIBOBJS@ MAKEINFO = @MAKEINFO@ MANIFEST_TOOL = @MANIFEST_TOOL@ MKDIR_P = @MKDIR_P@ NM = @NM@ NMEDIT = @NMEDIT@ OBJDUMP = @OBJDUMP@ OBJEXT = @OBJEXT@ OTOOL = @OTOOL@ OTOOL64 = @OTOOL64@ PACKAGE = @PACKAGE@ PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_STRING = @PACKAGE_STRING@ PACKAGE_TARNAME = @PACKAGE_TARNAME@ PACKAGE_URL = @PACKAGE_URL@ PACKAGE_VERSION = @PACKAGE_VERSION@ PATH_SEPARATOR = @PATH_SEPARATOR@ PTHREAD_CC = @PTHREAD_CC@ PTHREAD_CFLAGS = @PTHREAD_CFLAGS@ PTHREAD_LIBS = @PTHREAD_LIBS@ PYTHON = @PYTHON@ RANLIB = @RANLIB@ SED = @SED@ SET_MAKE = @SET_MAKE@ SHELL = @SHELL@ STRIP = @STRIP@ VERSION = @VERSION@ abs_builddir = @abs_builddir@ abs_srcdir = @abs_srcdir@ abs_top_builddir = @abs_top_builddir@ abs_top_srcdir = @abs_top_srcdir@ ac_ct_AR = @ac_ct_AR@ ac_ct_CC = @ac_ct_CC@ ac_ct_CXX = @ac_ct_CXX@ ac_ct_DUMPBIN = @ac_ct_DUMPBIN@ acx_pthread_config = @acx_pthread_config@ am__include = @am__include@ am__leading_dot = @am__leading_dot@ am__quote = @am__quote@ am__tar = @am__tar@ am__untar = @am__untar@ bindir = @bindir@ build = @build@ build_alias = @build_alias@ build_cpu = @build_cpu@ build_os = @build_os@ build_vendor = @build_vendor@ builddir = @builddir@ datadir = @datadir@ datarootdir = @datarootdir@ docdir = @docdir@ dvidir = @dvidir@ exec_prefix = @exec_prefix@ host = @host@ host_alias = @host_alias@ host_cpu = @host_cpu@ host_os = @host_os@ host_vendor = @host_vendor@ htmldir = @htmldir@ includedir = @includedir@ infodir = @infodir@ install_sh = @install_sh@ libdir = @libdir@ libexecdir = @libexecdir@ localedir = @localedir@ localstatedir = @localstatedir@ mandir = @mandir@ mkdir_p = @mkdir_p@ oldincludedir = 
@oldincludedir@ pdfdir = @pdfdir@ prefix = @prefix@ program_transform_name = @program_transform_name@ psdir = @psdir@ sbindir = @sbindir@ sharedstatedir = @sharedstatedir@ srcdir = @srcdir@ sysconfdir = @sysconfdir@ target_alias = @target_alias@ top_build_prefix = @top_build_prefix@ top_builddir = @top_builddir@ top_srcdir = @top_srcdir@ ACLOCAL_AMFLAGS = -I m4 # Nonstandard package files for distribution # Sample files that we don't compile. # C++ test files that we don't compile directly. # Python tests that we don't run. # CMake script # MSVC project files # xcode project files # xcode sample files # C++Builder project files EXTRA_DIST = CHANGES CONTRIBUTORS LICENSE \ include/gtest/gtest-param-test.h.pump \ include/gtest/internal/gtest-param-util-generated.h.pump \ include/gtest/internal/gtest-tuple.h.pump \ include/gtest/internal/gtest-type-util.h.pump make/Makefile \ scripts/fuse_gtest_files.py scripts/gen_gtest_pred_impl.py \ scripts/pump.py scripts/test/Makefile $(GTEST_SRC) \ samples/prime_tables.h samples/sample2_unittest.cc \ samples/sample3_unittest.cc samples/sample4_unittest.cc \ samples/sample5_unittest.cc samples/sample6_unittest.cc \ samples/sample7_unittest.cc samples/sample8_unittest.cc \ samples/sample9_unittest.cc test/gtest-death-test_ex_test.cc \ test/gtest-death-test_test.cc test/gtest-filepath_test.cc \ test/gtest-linked_ptr_test.cc test/gtest-listener_test.cc \ test/gtest-message_test.cc test/gtest-options_test.cc \ test/gtest-param-test2_test.cc test/gtest-param-test2_test.cc \ test/gtest-param-test_test.cc test/gtest-param-test_test.cc \ test/gtest-param-test_test.h test/gtest-port_test.cc \ test/gtest_premature_exit_test.cc test/gtest-printers_test.cc \ test/gtest-test-part_test.cc test/gtest-tuple_test.cc \ test/gtest-typed-test2_test.cc test/gtest-typed-test_test.cc \ test/gtest-typed-test_test.h test/gtest-unittest-api_test.cc \ test/gtest_break_on_failure_unittest_.cc \ test/gtest_catch_exceptions_test_.cc test/gtest_color_test_.cc \ test/gtest_env_var_test_.cc test/gtest_environment_test.cc \ test/gtest_filter_unittest_.cc test/gtest_help_test_.cc \ test/gtest_list_tests_unittest_.cc test/gtest_main_unittest.cc \ test/gtest_no_test_unittest.cc test/gtest_output_test_.cc \ test/gtest_pred_impl_unittest.cc test/gtest_prod_test.cc \ test/gtest_repeat_test.cc test/gtest_shuffle_test_.cc \ test/gtest_sole_header_test.cc test/gtest_stress_test.cc \ test/gtest_throw_on_failure_ex_test.cc \ test/gtest_throw_on_failure_test_.cc \ test/gtest_uninitialized_test_.cc test/gtest_unittest.cc \ test/gtest_unittest.cc test/gtest_xml_outfile1_test_.cc \ test/gtest_xml_outfile2_test_.cc \ test/gtest_xml_output_unittest_.cc test/production.cc \ test/production.h test/gtest_break_on_failure_unittest.py \ test/gtest_catch_exceptions_test.py test/gtest_color_test.py \ test/gtest_env_var_test.py test/gtest_filter_unittest.py \ test/gtest_help_test.py test/gtest_list_tests_unittest.py \ test/gtest_output_test.py \ test/gtest_output_test_golden_lin.txt \ test/gtest_shuffle_test.py test/gtest_test_utils.py \ test/gtest_throw_on_failure_test.py \ test/gtest_uninitialized_test.py \ test/gtest_xml_outfiles_test.py \ test/gtest_xml_output_unittest.py test/gtest_xml_test_utils.py \ CMakeLists.txt cmake/internal_utils.cmake msvc/gtest-md.sln \ msvc/gtest-md.vcproj msvc/gtest.sln msvc/gtest.vcproj \ msvc/gtest_main-md.vcproj msvc/gtest_main.vcproj \ msvc/gtest_prod_test-md.vcproj msvc/gtest_prod_test.vcproj \ msvc/gtest_unittest-md.vcproj msvc/gtest_unittest.vcproj \ 
xcode/Config/DebugProject.xcconfig \ xcode/Config/FrameworkTarget.xcconfig \ xcode/Config/General.xcconfig \ xcode/Config/ReleaseProject.xcconfig \ xcode/Config/StaticLibraryTarget.xcconfig \ xcode/Config/TestTarget.xcconfig xcode/Resources/Info.plist \ xcode/Scripts/runtests.sh xcode/Scripts/versiongenerate.py \ xcode/gtest.xcodeproj/project.pbxproj \ xcode/Samples/FrameworkSample/Info.plist \ xcode/Samples/FrameworkSample/WidgetFramework.xcodeproj/project.pbxproj \ xcode/Samples/FrameworkSample/runtests.sh \ xcode/Samples/FrameworkSample/widget.cc \ xcode/Samples/FrameworkSample/widget.h \ xcode/Samples/FrameworkSample/widget_test.cc \ codegear/gtest.cbproj codegear/gtest.groupproj \ codegear/gtest_all.cc codegear/gtest_link.cc \ codegear/gtest_main.cbproj codegear/gtest_unittest.cbproj \ $(m4data_DATA) # gtest source files that we don't compile directly. They are # #included by gtest-all.cc. GTEST_SRC = \ src/gtest-death-test.cc \ src/gtest-filepath.cc \ src/gtest-internal-inl.h \ src/gtest-port.cc \ src/gtest-printers.cc \ src/gtest-test-part.cc \ src/gtest-typed-test.cc \ src/gtest.cc # Distribute and install M4 macro m4datadir = $(datadir)/aclocal m4data_DATA = m4/gtest.m4 # We define the global AM_CPPFLAGS as everything we compile includes from these # directories. AM_CPPFLAGS = -I$(srcdir) -I$(srcdir)/include @HAVE_PTHREADS_FALSE@AM_CXXFLAGS = -DGTEST_HAS_PTHREAD=0 # Modifies compiler and linker flags for pthreads compatibility. @HAVE_PTHREADS_TRUE@AM_CXXFLAGS = @PTHREAD_CFLAGS@ -DGTEST_HAS_PTHREAD=1 @HAVE_PTHREADS_TRUE@AM_LIBS = @PTHREAD_LIBS@ # Build rules for libraries. lib_LTLIBRARIES = lib/libgtest.la lib/libgtest_main.la lib_libgtest_la_SOURCES = src/gtest-all.cc pkginclude_HEADERS = \ include/gtest/gtest-death-test.h \ include/gtest/gtest-message.h \ include/gtest/gtest-param-test.h \ include/gtest/gtest-printers.h \ include/gtest/gtest-spi.h \ include/gtest/gtest-test-part.h \ include/gtest/gtest-typed-test.h \ include/gtest/gtest.h \ include/gtest/gtest_pred_impl.h \ include/gtest/gtest_prod.h pkginclude_internaldir = $(pkgincludedir)/internal pkginclude_internal_HEADERS = \ include/gtest/internal/gtest-death-test-internal.h \ include/gtest/internal/gtest-filepath.h \ include/gtest/internal/gtest-internal.h \ include/gtest/internal/gtest-linked_ptr.h \ include/gtest/internal/gtest-param-util-generated.h \ include/gtest/internal/gtest-param-util.h \ include/gtest/internal/gtest-port.h \ include/gtest/internal/gtest-string.h \ include/gtest/internal/gtest-tuple.h \ include/gtest/internal/gtest-type-util.h lib_libgtest_main_la_SOURCES = src/gtest_main.cc lib_libgtest_main_la_LIBADD = lib/libgtest.la # Bulid rules for samples and tests. 
Automake's naming for some of # these variables isn't terribly obvious, so this is a brief # reference: # # TESTS -- Programs run automatically by "make check" # check_PROGRAMS -- Programs built by "make check" but not necessarily run noinst_LTLIBRARIES = samples/libsamples.la samples_libsamples_la_SOURCES = \ samples/sample1.cc \ samples/sample1.h \ samples/sample2.cc \ samples/sample2.h \ samples/sample3-inl.h \ samples/sample4.cc \ samples/sample4.h TESTS_ENVIRONMENT = GTEST_SOURCE_DIR="$(srcdir)/test" \ GTEST_BUILD_DIR="$(top_builddir)/test" samples_sample1_unittest_SOURCES = samples/sample1_unittest.cc samples_sample1_unittest_LDADD = lib/libgtest_main.la \ lib/libgtest.la \ samples/libsamples.la samples_sample10_unittest_SOURCES = samples/sample10_unittest.cc samples_sample10_unittest_LDADD = lib/libgtest.la test_gtest_all_test_SOURCES = test/gtest_all_test.cc test_gtest_all_test_LDADD = lib/libgtest_main.la \ lib/libgtest.la # Tests that fused gtest files compile and work. FUSED_GTEST_SRC = \ fused-src/gtest/gtest-all.cc \ fused-src/gtest/gtest.h \ fused-src/gtest/gtest_main.cc @HAVE_PYTHON_TRUE@test_fused_gtest_test_SOURCES = $(FUSED_GTEST_SRC) \ @HAVE_PYTHON_TRUE@ samples/sample1.cc samples/sample1_unittest.cc @HAVE_PYTHON_TRUE@test_fused_gtest_test_CPPFLAGS = -I"$(srcdir)/fused-src" # Death tests may produce core dumps in the build directory. In case # this happens, clean them to keep distcleancheck happy. CLEANFILES = core all: all-am .SUFFIXES: .SUFFIXES: .cc .lo .o .obj am--refresh: Makefile @: $(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps) @for dep in $?; do \ case '$(am__configure_deps)' in \ *$$dep*) \ echo ' cd $(srcdir) && $(AUTOMAKE) --foreign'; \ $(am__cd) $(srcdir) && $(AUTOMAKE) --foreign \ && exit 0; \ exit 1;; \ esac; \ done; \ echo ' cd $(top_srcdir) && $(AUTOMAKE) --foreign Makefile'; \ $(am__cd) $(top_srcdir) && \ $(AUTOMAKE) --foreign Makefile .PRECIOUS: Makefile Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status @case '$?' in \ *config.status*) \ echo ' $(SHELL) ./config.status'; \ $(SHELL) ./config.status;; \ *) \ echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \ cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \ esac; $(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES) $(SHELL) ./config.status --recheck $(top_srcdir)/configure: $(am__configure_deps) $(am__cd) $(srcdir) && $(AUTOCONF) $(ACLOCAL_M4): $(am__aclocal_m4_deps) $(am__cd) $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS) $(am__aclocal_m4_deps): build-aux/config.h: build-aux/stamp-h1 @if test ! -f $@; then rm -f build-aux/stamp-h1; else :; fi @if test ! 
-f $@; then $(MAKE) $(AM_MAKEFLAGS) build-aux/stamp-h1; else :; fi build-aux/stamp-h1: $(top_srcdir)/build-aux/config.h.in $(top_builddir)/config.status @rm -f build-aux/stamp-h1 cd $(top_builddir) && $(SHELL) ./config.status build-aux/config.h $(top_srcdir)/build-aux/config.h.in: $(am__configure_deps) ($(am__cd) $(top_srcdir) && $(AUTOHEADER)) rm -f build-aux/stamp-h1 touch $@ distclean-hdr: -rm -f build-aux/config.h build-aux/stamp-h1 scripts/gtest-config: $(top_builddir)/config.status $(top_srcdir)/scripts/gtest-config.in cd $(top_builddir) && $(SHELL) ./config.status $@ install-libLTLIBRARIES: $(lib_LTLIBRARIES) @$(NORMAL_INSTALL) test -z "$(libdir)" || $(MKDIR_P) "$(DESTDIR)$(libdir)" @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ list2=; for p in $$list; do \ if test -f $$p; then \ list2="$$list2 $$p"; \ else :; fi; \ done; \ test -z "$$list2" || { \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 '$(DESTDIR)$(libdir)'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=install $(INSTALL) $(INSTALL_STRIP_FLAG) $$list2 "$(DESTDIR)$(libdir)"; \ } uninstall-libLTLIBRARIES: @$(NORMAL_UNINSTALL) @list='$(lib_LTLIBRARIES)'; test -n "$(libdir)" || list=; \ for p in $$list; do \ $(am__strip_dir) \ echo " $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f '$(DESTDIR)$(libdir)/$$f'"; \ $(LIBTOOL) $(AM_LIBTOOLFLAGS) $(LIBTOOLFLAGS) --mode=uninstall rm -f "$(DESTDIR)$(libdir)/$$f"; \ done clean-libLTLIBRARIES: -test -z "$(lib_LTLIBRARIES)" || rm -f $(lib_LTLIBRARIES) @list='$(lib_LTLIBRARIES)'; for p in $$list; do \ dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ test "$$dir" != "$$p" || dir=.; \ echo "rm -f \"$${dir}/so_locations\""; \ rm -f "$${dir}/so_locations"; \ done clean-noinstLTLIBRARIES: -test -z "$(noinst_LTLIBRARIES)" || rm -f $(noinst_LTLIBRARIES) @list='$(noinst_LTLIBRARIES)'; for p in $$list; do \ dir="`echo $$p | sed -e 's|/[^/]*$$||'`"; \ test "$$dir" != "$$p" || dir=.; \ echo "rm -f \"$${dir}/so_locations\""; \ rm -f "$${dir}/so_locations"; \ done src/$(am__dirstamp): @$(MKDIR_P) src @: > src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) src/$(DEPDIR) @: > src/$(DEPDIR)/$(am__dirstamp) src/gtest-all.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp) lib/$(am__dirstamp): @$(MKDIR_P) lib @: > lib/$(am__dirstamp) lib/libgtest.la: $(lib_libgtest_la_OBJECTS) $(lib_libgtest_la_DEPENDENCIES) $(EXTRA_lib_libgtest_la_DEPENDENCIES) lib/$(am__dirstamp) $(CXXLINK) -rpath $(libdir) $(lib_libgtest_la_OBJECTS) $(lib_libgtest_la_LIBADD) $(LIBS) src/gtest_main.lo: src/$(am__dirstamp) src/$(DEPDIR)/$(am__dirstamp) lib/libgtest_main.la: $(lib_libgtest_main_la_OBJECTS) $(lib_libgtest_main_la_DEPENDENCIES) $(EXTRA_lib_libgtest_main_la_DEPENDENCIES) lib/$(am__dirstamp) $(CXXLINK) -rpath $(libdir) $(lib_libgtest_main_la_OBJECTS) $(lib_libgtest_main_la_LIBADD) $(LIBS) samples/$(am__dirstamp): @$(MKDIR_P) samples @: > samples/$(am__dirstamp) samples/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) samples/$(DEPDIR) @: > samples/$(DEPDIR)/$(am__dirstamp) samples/sample1.lo: samples/$(am__dirstamp) \ samples/$(DEPDIR)/$(am__dirstamp) samples/sample2.lo: samples/$(am__dirstamp) \ samples/$(DEPDIR)/$(am__dirstamp) samples/sample4.lo: samples/$(am__dirstamp) \ samples/$(DEPDIR)/$(am__dirstamp) samples/libsamples.la: $(samples_libsamples_la_OBJECTS) $(samples_libsamples_la_DEPENDENCIES) $(EXTRA_samples_libsamples_la_DEPENDENCIES) samples/$(am__dirstamp) $(CXXLINK) $(samples_libsamples_la_OBJECTS) 
$(samples_libsamples_la_LIBADD) $(LIBS) clean-checkPROGRAMS: @list='$(check_PROGRAMS)'; test -n "$$list" || exit 0; \ echo " rm -f" $$list; \ rm -f $$list || exit $$?; \ test -n "$(EXEEXT)" || exit 0; \ list=`for p in $$list; do echo "$$p"; done | sed 's/$(EXEEXT)$$//'`; \ echo " rm -f" $$list; \ rm -f $$list samples/sample10_unittest.$(OBJEXT): samples/$(am__dirstamp) \ samples/$(DEPDIR)/$(am__dirstamp) samples/sample10_unittest$(EXEEXT): $(samples_sample10_unittest_OBJECTS) $(samples_sample10_unittest_DEPENDENCIES) $(EXTRA_samples_sample10_unittest_DEPENDENCIES) samples/$(am__dirstamp) @rm -f samples/sample10_unittest$(EXEEXT) $(CXXLINK) $(samples_sample10_unittest_OBJECTS) $(samples_sample10_unittest_LDADD) $(LIBS) samples/sample1_unittest.$(OBJEXT): samples/$(am__dirstamp) \ samples/$(DEPDIR)/$(am__dirstamp) samples/sample1_unittest$(EXEEXT): $(samples_sample1_unittest_OBJECTS) $(samples_sample1_unittest_DEPENDENCIES) $(EXTRA_samples_sample1_unittest_DEPENDENCIES) samples/$(am__dirstamp) @rm -f samples/sample1_unittest$(EXEEXT) $(CXXLINK) $(samples_sample1_unittest_OBJECTS) $(samples_sample1_unittest_LDADD) $(LIBS) fused-src/gtest/$(am__dirstamp): @$(MKDIR_P) fused-src/gtest @: > fused-src/gtest/$(am__dirstamp) fused-src/gtest/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) fused-src/gtest/$(DEPDIR) @: > fused-src/gtest/$(DEPDIR)/$(am__dirstamp) fused-src/gtest/test_fused_gtest_test-gtest-all.$(OBJEXT): \ fused-src/gtest/$(am__dirstamp) \ fused-src/gtest/$(DEPDIR)/$(am__dirstamp) fused-src/gtest/test_fused_gtest_test-gtest_main.$(OBJEXT): \ fused-src/gtest/$(am__dirstamp) \ fused-src/gtest/$(DEPDIR)/$(am__dirstamp) samples/test_fused_gtest_test-sample1.$(OBJEXT): \ samples/$(am__dirstamp) samples/$(DEPDIR)/$(am__dirstamp) samples/test_fused_gtest_test-sample1_unittest.$(OBJEXT): \ samples/$(am__dirstamp) samples/$(DEPDIR)/$(am__dirstamp) test/$(am__dirstamp): @$(MKDIR_P) test @: > test/$(am__dirstamp) test/fused_gtest_test$(EXEEXT): $(test_fused_gtest_test_OBJECTS) $(test_fused_gtest_test_DEPENDENCIES) $(EXTRA_test_fused_gtest_test_DEPENDENCIES) test/$(am__dirstamp) @rm -f test/fused_gtest_test$(EXEEXT) $(CXXLINK) $(test_fused_gtest_test_OBJECTS) $(test_fused_gtest_test_LDADD) $(LIBS) test/$(DEPDIR)/$(am__dirstamp): @$(MKDIR_P) test/$(DEPDIR) @: > test/$(DEPDIR)/$(am__dirstamp) test/gtest_all_test.$(OBJEXT): test/$(am__dirstamp) \ test/$(DEPDIR)/$(am__dirstamp) test/gtest_all_test$(EXEEXT): $(test_gtest_all_test_OBJECTS) $(test_gtest_all_test_DEPENDENCIES) $(EXTRA_test_gtest_all_test_DEPENDENCIES) test/$(am__dirstamp) @rm -f test/gtest_all_test$(EXEEXT) $(CXXLINK) $(test_gtest_all_test_OBJECTS) $(test_gtest_all_test_LDADD) $(LIBS) mostlyclean-compile: -rm -f *.$(OBJEXT) -rm -f fused-src/gtest/test_fused_gtest_test-gtest-all.$(OBJEXT) -rm -f fused-src/gtest/test_fused_gtest_test-gtest_main.$(OBJEXT) -rm -f samples/sample1.$(OBJEXT) -rm -f samples/sample1.lo -rm -f samples/sample10_unittest.$(OBJEXT) -rm -f samples/sample1_unittest.$(OBJEXT) -rm -f samples/sample2.$(OBJEXT) -rm -f samples/sample2.lo -rm -f samples/sample4.$(OBJEXT) -rm -f samples/sample4.lo -rm -f samples/test_fused_gtest_test-sample1.$(OBJEXT) -rm -f samples/test_fused_gtest_test-sample1_unittest.$(OBJEXT) -rm -f src/gtest-all.$(OBJEXT) -rm -f src/gtest-all.lo -rm -f src/gtest_main.$(OBJEXT) -rm -f src/gtest_main.lo -rm -f test/gtest_all_test.$(OBJEXT) distclean-compile: -rm -f *.tab.c @AMDEP_TRUE@@am__include@ @am__quote@fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Po@am__quote@ @AMDEP_TRUE@@am__include@ 
@am__quote@fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/sample1.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/sample10_unittest.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/sample1_unittest.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/sample2.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/sample4.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/test_fused_gtest_test-sample1.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Po@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/gtest-all.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@src/$(DEPDIR)/gtest_main.Plo@am__quote@ @AMDEP_TRUE@@am__include@ @am__quote@test/$(DEPDIR)/gtest_all_test.Po@am__quote@ .cc.o: @am__fastdepCXX_TRUE@ depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.o$$||'`;\ @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXXCOMPILE) -c -o $@ $< .cc.obj: @am__fastdepCXX_TRUE@ depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.obj$$||'`;\ @am__fastdepCXX_TRUE@ $(CXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ `$(CYGPATH_W) '$<'` &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXXCOMPILE) -c -o $@ `$(CYGPATH_W) '$<'` .cc.lo: @am__fastdepCXX_TRUE@ depbase=`echo $@ | sed 's|[^/]*$$|$(DEPDIR)/&|;s|\.lo$$||'`;\ @am__fastdepCXX_TRUE@ $(LTCXXCOMPILE) -MT $@ -MD -MP -MF $$depbase.Tpo -c -o $@ $< &&\ @am__fastdepCXX_TRUE@ $(am__mv) $$depbase.Tpo $$depbase.Plo @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='$<' object='$@' libtool=yes @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(LTCXXCOMPILE) -c -o $@ $< fused-src/gtest/test_fused_gtest_test-gtest-all.o: fused-src/gtest/gtest-all.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT fused-src/gtest/test_fused_gtest_test-gtest-all.o -MD -MP -MF fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Tpo -c -o fused-src/gtest/test_fused_gtest_test-gtest-all.o `test -f 'fused-src/gtest/gtest-all.cc' || echo '$(srcdir)/'`fused-src/gtest/gtest-all.cc @am__fastdepCXX_TRUE@ $(am__mv) fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Tpo fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='fused-src/gtest/gtest-all.cc' object='fused-src/gtest/test_fused_gtest_test-gtest-all.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o fused-src/gtest/test_fused_gtest_test-gtest-all.o `test -f 'fused-src/gtest/gtest-all.cc' || echo 
'$(srcdir)/'`fused-src/gtest/gtest-all.cc fused-src/gtest/test_fused_gtest_test-gtest-all.obj: fused-src/gtest/gtest-all.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT fused-src/gtest/test_fused_gtest_test-gtest-all.obj -MD -MP -MF fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Tpo -c -o fused-src/gtest/test_fused_gtest_test-gtest-all.obj `if test -f 'fused-src/gtest/gtest-all.cc'; then $(CYGPATH_W) 'fused-src/gtest/gtest-all.cc'; else $(CYGPATH_W) '$(srcdir)/fused-src/gtest/gtest-all.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Tpo fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest-all.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='fused-src/gtest/gtest-all.cc' object='fused-src/gtest/test_fused_gtest_test-gtest-all.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o fused-src/gtest/test_fused_gtest_test-gtest-all.obj `if test -f 'fused-src/gtest/gtest-all.cc'; then $(CYGPATH_W) 'fused-src/gtest/gtest-all.cc'; else $(CYGPATH_W) '$(srcdir)/fused-src/gtest/gtest-all.cc'; fi` fused-src/gtest/test_fused_gtest_test-gtest_main.o: fused-src/gtest/gtest_main.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT fused-src/gtest/test_fused_gtest_test-gtest_main.o -MD -MP -MF fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Tpo -c -o fused-src/gtest/test_fused_gtest_test-gtest_main.o `test -f 'fused-src/gtest/gtest_main.cc' || echo '$(srcdir)/'`fused-src/gtest/gtest_main.cc @am__fastdepCXX_TRUE@ $(am__mv) fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Tpo fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='fused-src/gtest/gtest_main.cc' object='fused-src/gtest/test_fused_gtest_test-gtest_main.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o fused-src/gtest/test_fused_gtest_test-gtest_main.o `test -f 'fused-src/gtest/gtest_main.cc' || echo '$(srcdir)/'`fused-src/gtest/gtest_main.cc fused-src/gtest/test_fused_gtest_test-gtest_main.obj: fused-src/gtest/gtest_main.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT fused-src/gtest/test_fused_gtest_test-gtest_main.obj -MD -MP -MF fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Tpo -c -o fused-src/gtest/test_fused_gtest_test-gtest_main.obj `if test -f 'fused-src/gtest/gtest_main.cc'; then $(CYGPATH_W) 'fused-src/gtest/gtest_main.cc'; else $(CYGPATH_W) '$(srcdir)/fused-src/gtest/gtest_main.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Tpo fused-src/gtest/$(DEPDIR)/test_fused_gtest_test-gtest_main.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='fused-src/gtest/gtest_main.cc' object='fused-src/gtest/test_fused_gtest_test-gtest_main.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) 
$(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o fused-src/gtest/test_fused_gtest_test-gtest_main.obj `if test -f 'fused-src/gtest/gtest_main.cc'; then $(CYGPATH_W) 'fused-src/gtest/gtest_main.cc'; else $(CYGPATH_W) '$(srcdir)/fused-src/gtest/gtest_main.cc'; fi` samples/test_fused_gtest_test-sample1.o: samples/sample1.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT samples/test_fused_gtest_test-sample1.o -MD -MP -MF samples/$(DEPDIR)/test_fused_gtest_test-sample1.Tpo -c -o samples/test_fused_gtest_test-sample1.o `test -f 'samples/sample1.cc' || echo '$(srcdir)/'`samples/sample1.cc @am__fastdepCXX_TRUE@ $(am__mv) samples/$(DEPDIR)/test_fused_gtest_test-sample1.Tpo samples/$(DEPDIR)/test_fused_gtest_test-sample1.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='samples/sample1.cc' object='samples/test_fused_gtest_test-sample1.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o samples/test_fused_gtest_test-sample1.o `test -f 'samples/sample1.cc' || echo '$(srcdir)/'`samples/sample1.cc samples/test_fused_gtest_test-sample1.obj: samples/sample1.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT samples/test_fused_gtest_test-sample1.obj -MD -MP -MF samples/$(DEPDIR)/test_fused_gtest_test-sample1.Tpo -c -o samples/test_fused_gtest_test-sample1.obj `if test -f 'samples/sample1.cc'; then $(CYGPATH_W) 'samples/sample1.cc'; else $(CYGPATH_W) '$(srcdir)/samples/sample1.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) samples/$(DEPDIR)/test_fused_gtest_test-sample1.Tpo samples/$(DEPDIR)/test_fused_gtest_test-sample1.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='samples/sample1.cc' object='samples/test_fused_gtest_test-sample1.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o samples/test_fused_gtest_test-sample1.obj `if test -f 'samples/sample1.cc'; then $(CYGPATH_W) 'samples/sample1.cc'; else $(CYGPATH_W) '$(srcdir)/samples/sample1.cc'; fi` samples/test_fused_gtest_test-sample1_unittest.o: samples/sample1_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT samples/test_fused_gtest_test-sample1_unittest.o -MD -MP -MF samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Tpo -c -o samples/test_fused_gtest_test-sample1_unittest.o `test -f 'samples/sample1_unittest.cc' || echo '$(srcdir)/'`samples/sample1_unittest.cc @am__fastdepCXX_TRUE@ $(am__mv) samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Tpo samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='samples/sample1_unittest.cc' object='samples/test_fused_gtest_test-sample1_unittest.o' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) 
$(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o samples/test_fused_gtest_test-sample1_unittest.o `test -f 'samples/sample1_unittest.cc' || echo '$(srcdir)/'`samples/sample1_unittest.cc samples/test_fused_gtest_test-sample1_unittest.obj: samples/sample1_unittest.cc @am__fastdepCXX_TRUE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -MT samples/test_fused_gtest_test-sample1_unittest.obj -MD -MP -MF samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Tpo -c -o samples/test_fused_gtest_test-sample1_unittest.obj `if test -f 'samples/sample1_unittest.cc'; then $(CYGPATH_W) 'samples/sample1_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/samples/sample1_unittest.cc'; fi` @am__fastdepCXX_TRUE@ $(am__mv) samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Tpo samples/$(DEPDIR)/test_fused_gtest_test-sample1_unittest.Po @AMDEP_TRUE@@am__fastdepCXX_FALSE@ source='samples/sample1_unittest.cc' object='samples/test_fused_gtest_test-sample1_unittest.obj' libtool=no @AMDEPBACKSLASH@ @AMDEP_TRUE@@am__fastdepCXX_FALSE@ DEPDIR=$(DEPDIR) $(CXXDEPMODE) $(depcomp) @AMDEPBACKSLASH@ @am__fastdepCXX_FALSE@ $(CXX) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(test_fused_gtest_test_CPPFLAGS) $(CPPFLAGS) $(AM_CXXFLAGS) $(CXXFLAGS) -c -o samples/test_fused_gtest_test-sample1_unittest.obj `if test -f 'samples/sample1_unittest.cc'; then $(CYGPATH_W) 'samples/sample1_unittest.cc'; else $(CYGPATH_W) '$(srcdir)/samples/sample1_unittest.cc'; fi` mostlyclean-libtool: -rm -f *.lo clean-libtool: -rm -rf .libs _libs -rm -rf lib/.libs lib/_libs -rm -rf samples/.libs samples/_libs -rm -rf src/.libs src/_libs -rm -rf test/.libs test/_libs distclean-libtool: -rm -f libtool config.lt install-m4dataDATA: $(m4data_DATA) @$(NORMAL_INSTALL) test -z "$(m4datadir)" || $(MKDIR_P) "$(DESTDIR)$(m4datadir)" @list='$(m4data_DATA)'; test -n "$(m4datadir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_DATA) $$files '$(DESTDIR)$(m4datadir)'"; \ $(INSTALL_DATA) $$files "$(DESTDIR)$(m4datadir)" || exit $$?; \ done uninstall-m4dataDATA: @$(NORMAL_UNINSTALL) @list='$(m4data_DATA)'; test -n "$(m4datadir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(m4datadir)'; $(am__uninstall_files_from_dir) install-pkgincludeHEADERS: $(pkginclude_HEADERS) @$(NORMAL_INSTALL) test -z "$(pkgincludedir)" || $(MKDIR_P) "$(DESTDIR)$(pkgincludedir)" @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(pkgincludedir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(pkgincludedir)" || exit $$?; \ done uninstall-pkgincludeHEADERS: @$(NORMAL_UNINSTALL) @list='$(pkginclude_HEADERS)'; test -n "$(pkgincludedir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkgincludedir)'; $(am__uninstall_files_from_dir) install-pkginclude_internalHEADERS: $(pkginclude_internal_HEADERS) @$(NORMAL_INSTALL) test -z "$(pkginclude_internaldir)" || $(MKDIR_P) "$(DESTDIR)$(pkginclude_internaldir)" @list='$(pkginclude_internal_HEADERS)'; test -n "$(pkginclude_internaldir)" || list=; \ for p in $$list; do \ if test -f "$$p"; then d=; else 
d="$(srcdir)/"; fi; \ echo "$$d$$p"; \ done | $(am__base_list) | \ while read files; do \ echo " $(INSTALL_HEADER) $$files '$(DESTDIR)$(pkginclude_internaldir)'"; \ $(INSTALL_HEADER) $$files "$(DESTDIR)$(pkginclude_internaldir)" || exit $$?; \ done uninstall-pkginclude_internalHEADERS: @$(NORMAL_UNINSTALL) @list='$(pkginclude_internal_HEADERS)'; test -n "$(pkginclude_internaldir)" || list=; \ files=`for p in $$list; do echo $$p; done | sed -e 's|^.*/||'`; \ dir='$(DESTDIR)$(pkginclude_internaldir)'; $(am__uninstall_files_from_dir) ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ mkid -fID $$unique tags: TAGS TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) set x; \ here=`pwd`; \ list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ shift; \ if test -z "$(ETAGS_ARGS)$$*$$unique"; then :; else \ test -n "$$unique" || unique=$$empty_fix; \ if test $$# -gt 0; then \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ "$$@" $$unique; \ else \ $(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \ $$unique; \ fi; \ fi ctags: CTAGS CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \ $(TAGS_FILES) $(LISP) list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \ unique=`for i in $$list; do \ if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \ done | \ $(AWK) '{ files[$$0] = 1; nonempty = 1; } \ END { if (nonempty) { for (i in files) print i; }; }'`; \ test -z "$(CTAGS_ARGS)$$unique" \ || $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \ $$unique GTAGS: here=`$(am__cd) $(top_builddir) && pwd` \ && $(am__cd) $(top_srcdir) \ && gtags -i $(GTAGS_ARGS) "$$here" distclean-tags: -rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags check-TESTS: $(TESTS) @failed=0; all=0; xfail=0; xpass=0; skip=0; \ srcdir=$(srcdir); export srcdir; \ list=' $(TESTS) '; \ $(am__tty_colors); \ if test -n "$$list"; then \ for tst in $$list; do \ if test -f ./$$tst; then dir=./; \ elif test -f $$tst; then dir=; \ else dir="$(srcdir)/"; fi; \ if $(TESTS_ENVIRONMENT) $${dir}$$tst; then \ all=`expr $$all + 1`; \ case " $(XFAIL_TESTS) " in \ *[\ \ ]$$tst[\ \ ]*) \ xpass=`expr $$xpass + 1`; \ failed=`expr $$failed + 1`; \ col=$$red; res=XPASS; \ ;; \ *) \ col=$$grn; res=PASS; \ ;; \ esac; \ elif test $$? 
-ne 77; then \ all=`expr $$all + 1`; \ case " $(XFAIL_TESTS) " in \ *[\ \ ]$$tst[\ \ ]*) \ xfail=`expr $$xfail + 1`; \ col=$$lgn; res=XFAIL; \ ;; \ *) \ failed=`expr $$failed + 1`; \ col=$$red; res=FAIL; \ ;; \ esac; \ else \ skip=`expr $$skip + 1`; \ col=$$blu; res=SKIP; \ fi; \ echo "$${col}$$res$${std}: $$tst"; \ done; \ if test "$$all" -eq 1; then \ tests="test"; \ All=""; \ else \ tests="tests"; \ All="All "; \ fi; \ if test "$$failed" -eq 0; then \ if test "$$xfail" -eq 0; then \ banner="$$All$$all $$tests passed"; \ else \ if test "$$xfail" -eq 1; then failures=failure; else failures=failures; fi; \ banner="$$All$$all $$tests behaved as expected ($$xfail expected $$failures)"; \ fi; \ else \ if test "$$xpass" -eq 0; then \ banner="$$failed of $$all $$tests failed"; \ else \ if test "$$xpass" -eq 1; then passes=pass; else passes=passes; fi; \ banner="$$failed of $$all $$tests did not behave as expected ($$xpass unexpected $$passes)"; \ fi; \ fi; \ dashes="$$banner"; \ skipped=""; \ if test "$$skip" -ne 0; then \ if test "$$skip" -eq 1; then \ skipped="($$skip test was not run)"; \ else \ skipped="($$skip tests were not run)"; \ fi; \ test `echo "$$skipped" | wc -c` -le `echo "$$banner" | wc -c` || \ dashes="$$skipped"; \ fi; \ report=""; \ if test "$$failed" -ne 0 && test -n "$(PACKAGE_BUGREPORT)"; then \ report="Please report to $(PACKAGE_BUGREPORT)"; \ test `echo "$$report" | wc -c` -le `echo "$$banner" | wc -c` || \ dashes="$$report"; \ fi; \ dashes=`echo "$$dashes" | sed s/./=/g`; \ if test "$$failed" -eq 0; then \ col="$$grn"; \ else \ col="$$red"; \ fi; \ echo "$${col}$$dashes$${std}"; \ echo "$${col}$$banner$${std}"; \ test -z "$$skipped" || echo "$${col}$$skipped$${std}"; \ test -z "$$report" || echo "$${col}$$report$${std}"; \ echo "$${col}$$dashes$${std}"; \ test "$$failed" -eq 0; \ else :; fi distdir: $(DISTFILES) $(am__remove_distdir) test -d "$(distdir)" || mkdir "$(distdir)" @srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ topsrcdirstrip=`echo "$(top_srcdir)" | sed 's/[].[^$$\\*]/\\\\&/g'`; \ list='$(DISTFILES)'; \ dist_files=`for file in $$list; do echo $$file; done | \ sed -e "s|^$$srcdirstrip/||;t" \ -e "s|^$$topsrcdirstrip/|$(top_builddir)/|;t"`; \ case $$dist_files in \ */*) $(MKDIR_P) `echo "$$dist_files" | \ sed '/\//!d;s|^|$(distdir)/|;s,/[^/]*$$,,' | \ sort -u` ;; \ esac; \ for file in $$dist_files; do \ if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \ if test -d $$d/$$file; then \ dir=`echo "/$$file" | sed -e 's,/[^/]*$$,,'`; \ if test -d "$(distdir)/$$file"; then \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \ cp -fpR $(srcdir)/$$file "$(distdir)$$dir" || exit 1; \ find "$(distdir)/$$file" -type d ! -perm -700 -exec chmod u+rwx {} \;; \ fi; \ cp -fpR $$d/$$file "$(distdir)$$dir" || exit 1; \ else \ test -f "$(distdir)/$$file" \ || cp -p $$d/$$file "$(distdir)/$$file" \ || exit 1; \ fi; \ done -test -n "$(am__skip_mode_fix)" \ || find "$(distdir)" -type d ! -perm -755 \ -exec chmod u+rwx,go+rx {} \; -o \ ! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \ ! -type d ! -perm -400 -exec chmod a+r {} \; -o \ ! -type d ! 
-perm -444 -exec $(install_sh) -c -m a+r {} {} \; \ || chmod -R a+r "$(distdir)" dist-gzip: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz $(am__remove_distdir) dist-bzip2: distdir tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 $(am__remove_distdir) dist-lzip: distdir tardir=$(distdir) && $(am__tar) | lzip -c $${LZIP_OPT--9} >$(distdir).tar.lz $(am__remove_distdir) dist-lzma: distdir tardir=$(distdir) && $(am__tar) | lzma -9 -c >$(distdir).tar.lzma $(am__remove_distdir) dist-xz: distdir tardir=$(distdir) && $(am__tar) | XZ_OPT=$${XZ_OPT--e} xz -c >$(distdir).tar.xz $(am__remove_distdir) dist-tarZ: distdir tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z $(am__remove_distdir) dist-shar: distdir shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz $(am__remove_distdir) dist-zip: distdir -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__remove_distdir) dist dist-all: distdir tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz tardir=$(distdir) && $(am__tar) | BZIP2=$${BZIP2--9} bzip2 -c >$(distdir).tar.bz2 -rm -f $(distdir).zip zip -rq $(distdir).zip $(distdir) $(am__remove_distdir) # This target untars the dist file and tries a VPATH configuration. Then # it guarantees that the distribution is self-contained by making another # tarfile. distcheck: dist case '$(DIST_ARCHIVES)' in \ *.tar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).tar.gz | $(am__untar) ;;\ *.tar.bz2*) \ bzip2 -dc $(distdir).tar.bz2 | $(am__untar) ;;\ *.tar.lzma*) \ lzma -dc $(distdir).tar.lzma | $(am__untar) ;;\ *.tar.lz*) \ lzip -dc $(distdir).tar.lz | $(am__untar) ;;\ *.tar.xz*) \ xz -dc $(distdir).tar.xz | $(am__untar) ;;\ *.tar.Z*) \ uncompress -c $(distdir).tar.Z | $(am__untar) ;;\ *.shar.gz*) \ GZIP=$(GZIP_ENV) gzip -dc $(distdir).shar.gz | unshar ;;\ *.zip*) \ unzip $(distdir).zip ;;\ esac chmod -R a-w $(distdir); chmod a+w $(distdir) mkdir $(distdir)/_build mkdir $(distdir)/_inst chmod a-w $(distdir) test -d $(distdir)/_build || exit 0; \ dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \ && dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \ && am__cwd=`pwd` \ && $(am__cd) $(distdir)/_build \ && ../configure --srcdir=.. --prefix="$$dc_install_base" \ $(AM_DISTCHECK_CONFIGURE_FLAGS) \ $(DISTCHECK_CONFIGURE_FLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) \ && $(MAKE) $(AM_MAKEFLAGS) dvi \ && $(MAKE) $(AM_MAKEFLAGS) check \ && $(MAKE) $(AM_MAKEFLAGS) install \ && $(MAKE) $(AM_MAKEFLAGS) installcheck \ && $(MAKE) $(AM_MAKEFLAGS) uninstall \ && $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \ distuninstallcheck \ && chmod -R a-w "$$dc_install_base" \ && ({ \ (cd ../.. 
&& umask 077 && mkdir "$$dc_destdir") \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \ && $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \ distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \ } || { rm -rf "$$dc_destdir"; exit 1; }) \ && rm -rf "$$dc_destdir" \ && $(MAKE) $(AM_MAKEFLAGS) dist \ && rm -rf $(DIST_ARCHIVES) \ && $(MAKE) $(AM_MAKEFLAGS) distcleancheck \ && cd "$$am__cwd" \ || exit 1 $(am__remove_distdir) @(echo "$(distdir) archives ready for distribution: "; \ list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \ sed -e 1h -e 1s/./=/g -e 1p -e 1x -e '$$p' -e '$$x' distuninstallcheck: @test -n '$(distuninstallcheck_dir)' || { \ echo 'ERROR: trying to run $@ with an empty' \ '$$(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ $(am__cd) '$(distuninstallcheck_dir)' || { \ echo 'ERROR: cannot chdir into $(distuninstallcheck_dir)' >&2; \ exit 1; \ }; \ test `$(am__distuninstallcheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left after uninstall:" ; \ if test -n "$(DESTDIR)"; then \ echo " (check DESTDIR support)"; \ fi ; \ $(distuninstallcheck_listfiles) ; \ exit 1; } >&2 distcleancheck: distclean @if test '$(srcdir)' = . ; then \ echo "ERROR: distcleancheck can only run from a VPATH build" ; \ exit 1 ; \ fi @test `$(distcleancheck_listfiles) | wc -l` -eq 0 \ || { echo "ERROR: files left in build directory after distclean:" ; \ $(distcleancheck_listfiles) ; \ exit 1; } >&2 check-am: all-am $(MAKE) $(AM_MAKEFLAGS) $(check_PROGRAMS) $(MAKE) $(AM_MAKEFLAGS) check-TESTS check: check-am all-am: Makefile $(LTLIBRARIES) $(DATA) $(HEADERS) installdirs: for dir in "$(DESTDIR)$(libdir)" "$(DESTDIR)$(m4datadir)" "$(DESTDIR)$(pkgincludedir)" "$(DESTDIR)$(pkginclude_internaldir)"; do \ test -z "$$dir" || $(MKDIR_P) "$$dir"; \ done install: install-am install-exec: install-exec-am install-data: install-data-am uninstall: uninstall-am install-am: all-am @$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am installcheck: installcheck-am install-strip: if test -z '$(STRIP)'; then \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ install; \ else \ $(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \ install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \ "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'" install; \ fi mostlyclean-generic: clean-generic: -test -z "$(CLEANFILES)" || rm -f $(CLEANFILES) distclean-generic: -test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES) -test . = "$(srcdir)" || test -z "$(CONFIG_CLEAN_VPATH_FILES)" || rm -f $(CONFIG_CLEAN_VPATH_FILES) -rm -f fused-src/gtest/$(DEPDIR)/$(am__dirstamp) -rm -f fused-src/gtest/$(am__dirstamp) -rm -f lib/$(am__dirstamp) -rm -f samples/$(DEPDIR)/$(am__dirstamp) -rm -f samples/$(am__dirstamp) -rm -f src/$(DEPDIR)/$(am__dirstamp) -rm -f src/$(am__dirstamp) -rm -f test/$(DEPDIR)/$(am__dirstamp) -rm -f test/$(am__dirstamp) maintainer-clean-generic: @echo "This command is intended for maintainers to use" @echo "it deletes files that may require special tools to rebuild." 
@HAVE_PYTHON_FALSE@maintainer-clean-local: clean: clean-am clean-am: clean-checkPROGRAMS clean-generic clean-libLTLIBRARIES \ clean-libtool clean-noinstLTLIBRARIES mostlyclean-am distclean: distclean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf fused-src/gtest/$(DEPDIR) samples/$(DEPDIR) src/$(DEPDIR) test/$(DEPDIR) -rm -f Makefile distclean-am: clean-am distclean-compile distclean-generic \ distclean-hdr distclean-libtool distclean-tags dvi: dvi-am dvi-am: html: html-am html-am: info: info-am info-am: install-data-am: install-data-local install-m4dataDATA \ install-pkgincludeHEADERS install-pkginclude_internalHEADERS install-dvi: install-dvi-am install-dvi-am: install-exec-am: install-exec-local install-libLTLIBRARIES install-html: install-html-am install-html-am: install-info: install-info-am install-info-am: install-man: install-pdf: install-pdf-am install-pdf-am: install-ps: install-ps-am install-ps-am: installcheck-am: maintainer-clean: maintainer-clean-am -rm -f $(am__CONFIG_DISTCLEAN_FILES) -rm -rf $(top_srcdir)/autom4te.cache -rm -rf fused-src/gtest/$(DEPDIR) samples/$(DEPDIR) src/$(DEPDIR) test/$(DEPDIR) -rm -f Makefile maintainer-clean-am: distclean-am maintainer-clean-generic \ maintainer-clean-local mostlyclean: mostlyclean-am mostlyclean-am: mostlyclean-compile mostlyclean-generic \ mostlyclean-libtool pdf: pdf-am pdf-am: ps: ps-am ps-am: uninstall-am: uninstall-libLTLIBRARIES uninstall-m4dataDATA \ uninstall-pkgincludeHEADERS \ uninstall-pkginclude_internalHEADERS .MAKE: check-am install-am install-strip .PHONY: CTAGS GTAGS all all-am am--refresh check check-TESTS check-am \ clean clean-checkPROGRAMS clean-generic clean-libLTLIBRARIES \ clean-libtool clean-noinstLTLIBRARIES ctags dist dist-all \ dist-bzip2 dist-gzip dist-lzip dist-lzma dist-shar dist-tarZ \ dist-xz dist-zip distcheck distclean distclean-compile \ distclean-generic distclean-hdr distclean-libtool \ distclean-tags distcleancheck distdir distuninstallcheck dvi \ dvi-am html html-am info info-am install install-am \ install-data install-data-am install-data-local install-dvi \ install-dvi-am install-exec install-exec-am install-exec-local \ install-html install-html-am install-info install-info-am \ install-libLTLIBRARIES install-m4dataDATA install-man \ install-pdf install-pdf-am install-pkgincludeHEADERS \ install-pkginclude_internalHEADERS install-ps install-ps-am \ install-strip installcheck installcheck-am installdirs \ maintainer-clean maintainer-clean-generic \ maintainer-clean-local mostlyclean mostlyclean-compile \ mostlyclean-generic mostlyclean-libtool pdf pdf-am ps ps-am \ tags uninstall uninstall-am uninstall-libLTLIBRARIES \ uninstall-m4dataDATA uninstall-pkgincludeHEADERS \ uninstall-pkginclude_internalHEADERS # Build rules for putting fused Google Test files into the distribution # package. The user can also create those files by manually running # scripts/fuse_gtest_files.py. 
@HAVE_PYTHON_TRUE@$(test_fused_gtest_test_SOURCES): fused-gtest @HAVE_PYTHON_TRUE@fused-gtest: $(pkginclude_HEADERS) $(pkginclude_internal_HEADERS) \ @HAVE_PYTHON_TRUE@ $(GTEST_SRC) src/gtest-all.cc src/gtest_main.cc \ @HAVE_PYTHON_TRUE@ scripts/fuse_gtest_files.py @HAVE_PYTHON_TRUE@ mkdir -p "$(srcdir)/fused-src" @HAVE_PYTHON_TRUE@ chmod -R u+w "$(srcdir)/fused-src" @HAVE_PYTHON_TRUE@ rm -f "$(srcdir)/fused-src/gtest/gtest-all.cc" @HAVE_PYTHON_TRUE@ rm -f "$(srcdir)/fused-src/gtest/gtest.h" @HAVE_PYTHON_TRUE@ "$(srcdir)/scripts/fuse_gtest_files.py" "$(srcdir)/fused-src" @HAVE_PYTHON_TRUE@ cp -f "$(srcdir)/src/gtest_main.cc" "$(srcdir)/fused-src/gtest/" @HAVE_PYTHON_TRUE@maintainer-clean-local: @HAVE_PYTHON_TRUE@ rm -rf "$(srcdir)/fused-src" # Disables 'make install' as installing a compiled version of Google # Test can lead to undefined behavior due to violation of the # One-Definition Rule. install-exec-local: echo "'make install' is dangerous and not supported. Instead, see README for how to integrate Google Test into your build system." false install-data-local: echo "'make install' is dangerous and not supported. Instead, see README for how to integrate Google Test into your build system." false # Tell versions [3.59,3.63) of GNU make to not export all variables. # Otherwise a system limit (for SysV at least) may be exceeded. .NOEXPORT: ffc-2019.1.0.post0/libs/gtest-1.7.0/README000066400000000000000000000373321346026536000171630ustar00rootroot00000000000000Google C++ Testing Framework ============================ http://code.google.com/p/googletest/ Overview -------- Google's framework for writing C++ tests on a variety of platforms (Linux, Mac OS X, Windows, Windows CE, Symbian, etc). Based on the xUnit architecture. Supports automatic test discovery, a rich set of assertions, user-defined assertions, death tests, fatal and non-fatal failures, various options for running the tests, and XML test report generation. Please see the project page above for more information as well as the mailing list for questions, discussions, and development. There is also an IRC channel on OFTC (irc.oftc.net) #gtest available. Please join us! Requirements for End Users -------------------------- Google Test is designed to have fairly minimal requirements to build and use with your projects, but there are some. Currently, we support Linux, Windows, Mac OS X, and Cygwin. We will also make our best effort to support other platforms (e.g. Solaris, AIX, and z/OS). However, since core members of the Google Test project have no access to these platforms, Google Test may have outstanding issues there. If you notice any problems on your platform, please notify googletestframework@googlegroups.com. Patches for fixing them are even more welcome! ### Linux Requirements ### These are the base requirements to build and use Google Test from a source package (as described below): * GNU-compatible Make or gmake * POSIX-standard shell * POSIX(-2) Regular Expressions (regex.h) * A C++98-standard-compliant compiler ### Windows Requirements ### * Microsoft Visual C++ 7.1 or newer ### Cygwin Requirements ### * Cygwin 1.5.25-14 or newer ### Mac OS X Requirements ### * Mac OS X 10.4 Tiger or newer * Developer Tools Installed Also, you'll need CMake 2.6.4 or higher if you want to build the samples using the provided CMake script, regardless of the platform. Requirements for Contributors ----------------------------- We welcome patches. 
If you plan to contribute a patch, you need to build Google Test and
its own tests from an SVN checkout (described below), which has
further requirements:

  * Python version 2.3 or newer (for running some of the tests and
    re-generating certain source files from templates)
  * CMake 2.6.4 or newer

Getting the Source
------------------

There are two primary ways of getting Google Test's source code: you
can download a stable source release in your preferred archive format,
or directly check out the source from our Subversion (SVN) repository.
The SVN checkout requires a few extra steps and some extra software
packages on your system, but lets you track the latest development and
make patches much more easily, so we highly encourage it.

### Source Package ###

Google Test is released in versioned source packages which can be
downloaded from the download page [1].  Several different archive
formats are provided, but the only difference is the tools used to
manipulate them, and the size of the resulting file.  Download
whichever you are most comfortable with.

  [1] http://code.google.com/p/googletest/downloads/list

Once the package is downloaded, expand it using whichever tools you
prefer for that type.  This will result in a new directory with the
name "gtest-X.Y.Z" which contains all of the source code.  Here are
some examples on Linux:

    tar -xvzf gtest-X.Y.Z.tar.gz
    tar -xvjf gtest-X.Y.Z.tar.bz2
    unzip gtest-X.Y.Z.zip

### SVN Checkout ###

To check out the main branch (also known as the "trunk") of Google
Test, run the following Subversion command:

    svn checkout http://googletest.googlecode.com/svn/trunk/ gtest-svn

Setting up the Build
--------------------

To build Google Test and your tests that use it, you need to tell your
build system where to find its headers and source files.  The exact
way to do it depends on which build system you use, and is usually
straightforward.

### Generic Build Instructions ###

Suppose you put Google Test in directory ${GTEST_DIR}.  To build it,
create a library build target (or a project as called by Visual Studio
and Xcode) to compile

    ${GTEST_DIR}/src/gtest-all.cc

with ${GTEST_DIR}/include in the system header search path and
${GTEST_DIR} in the normal header search path.  Assuming a Linux-like
system and gcc, something like the following will do:

    g++ -isystem ${GTEST_DIR}/include -I${GTEST_DIR} \
        -pthread -c ${GTEST_DIR}/src/gtest-all.cc
    ar -rv libgtest.a gtest-all.o

(We need -pthread as Google Test uses threads.)

Next, you should compile your test source file with
${GTEST_DIR}/include in the system header search path, and link it
with gtest and any other necessary libraries:

    g++ -isystem ${GTEST_DIR}/include -pthread path/to/your_test.cc libgtest.a \
        -o your_test

As an example, the make/ directory contains a Makefile that you can
use to build Google Test on systems where GNU make is available
(e.g. Linux, Mac OS X, and Cygwin).  It doesn't try to build Google
Test's own tests.  Instead, it just builds the Google Test library and
a sample test.  You can use it as a starting point for your own build
script.

If the default settings are correct for your environment, the
following commands should succeed:

    cd ${GTEST_DIR}/make
    make
    ./sample1_unittest

If you see errors, try to tweak the contents of make/Makefile to make
them go away.  There are instructions in make/Makefile on how to do
it.
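For concreteness, here is a minimal sketch of the kind of test file the
commands above compile and link (path/to/your_test.cc); the file, test
case, and test names are placeholders only.  Note that libgtest.a built
from gtest-all.cc does not contain a main() function, so a test program
linked against it alone has to supply one (or also link
src/gtest_main.cc):

    // your_test.cc -- minimal example test program (names are placeholders).
    #include "gtest/gtest.h"

    // TEST(TestCaseName, TestName) defines an individual test.
    TEST(SmokeTest, BasicArithmeticWorks) {
      EXPECT_EQ(4, 2 + 2);   // non-fatal assertion: the test continues on failure
      ASSERT_TRUE(1 < 2);    // fatal assertion: returns from the test on failure
    }

    int main(int argc, char **argv) {
      ::testing::InitGoogleTest(&argc, argv);  // parses --gtest_* flags
      return RUN_ALL_TESTS();                  // returns 0 iff all tests pass
    }

Building it with the g++ and ar commands shown above and running
./your_test should report one passing test.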
### Using CMake ###

Google Test comes with a CMake build script (CMakeLists.txt) that can
be used on a wide range of platforms ("C" stands for cross-platform).
If you don't have CMake installed already, you can download it for
free from http://www.cmake.org/.

CMake works by generating native makefiles or build projects that can
be used in the compiler environment of your choice.  The typical
workflow starts with:

    mkdir mybuild       # Create a directory to hold the build output.
    cd mybuild
    cmake ${GTEST_DIR}  # Generate native build scripts.

If you want to build Google Test's samples, you should replace the
last command with

    cmake -Dgtest_build_samples=ON ${GTEST_DIR}

If you are on a *nix system, you should now see a Makefile in the
current directory.  Just type 'make' to build gtest.

If you use Windows and have Visual Studio installed, a gtest.sln file
and several .vcproj files will be created.  You can then build them
using Visual Studio.

On Mac OS X with Xcode installed, a .xcodeproj file will be generated.

### Legacy Build Scripts ###

Before settling on CMake, we have been providing hand-maintained build
projects/scripts for Visual Studio, Xcode, and Autotools.  While we
continue to provide them for convenience, they are not actively
maintained any more.  We highly recommend that you follow the
instructions in the previous two sections to integrate Google Test
with your existing build system.

If you still need to use the legacy build scripts, here's how:

The msvc\ folder contains two solutions with Visual C++ projects.
Open the gtest.sln or gtest-md.sln file using Visual Studio, and you
are ready to build Google Test the same way you build any Visual
Studio project.  Files that have names ending with -md use DLL
versions of Microsoft runtime libraries (the /MD or the /MDd compiler
option).  Files without that suffix use static versions of the runtime
libraries (the /MT or the /MTd option).  Please note that one must use
the same option to compile both gtest and the test code.  If you use
Visual Studio 2005 or above, we recommend the -md version as /MD is
the default for new projects in these versions of Visual Studio.

On Mac OS X, open the gtest.xcodeproj in the xcode/ folder using
Xcode.  Build the "gtest" target.  The universal binary framework will
end up in your selected build directory (selected in the Xcode
"Preferences..." -> "Building" pane and defaults to xcode/build).
Alternatively, at the command line, enter:

    xcodebuild

This will build the "Release" configuration of gtest.framework in your
default build location.  See the "xcodebuild" man page for more
information about building different configurations and building in
different locations.

If you wish to use the Google Test Xcode project with Xcode 4.x and
above, you need to either:

  * update the SDK configuration options in
    xcode/Config/General.xconfig.  Comment options SDKROOT,
    MACOS_DEPLOYMENT_TARGET, and GCC_VERSION.  If you choose this route
    you lose the ability to target earlier versions of MacOS X.
  * Install an SDK for an earlier version.  This doesn't appear to be
    supported by Apple, but has been reported to work
    (http://stackoverflow.com/questions/5378518).

Tweaking Google Test
--------------------

Google Test can be used in diverse environments.  The default
configuration may not work (or may not work well) out of the box in
some environments.  However, you can easily tweak Google Test by
defining control macros on the compiler command line.  Generally,
these macros are named like GTEST_XYZ and you define them to either 1
or 0 to enable or disable a certain feature.

We list the most frequently used macros below.  For a complete list,
see file include/gtest/internal/gtest-port.h.
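As a small sketch of what such a control macro looks like from the test
author's side, the following placeholder test inspects two of the
GTEST_XYZ macros after including gtest/gtest.h (gtest-port.h defines
both to 0 or 1 whether or not you override them on the command line);
the file and test names are made up for illustration:

    // port_check_test.cc -- placeholder test reporting the detected configuration.
    #include "gtest/gtest.h"
    #include <cstdio>

    TEST(PortCheck, ReportsConfiguration) {
    #if GTEST_HAS_PTHREAD        // can be forced with -DGTEST_HAS_PTHREAD=0/1
      std::printf("pthread support: yes\n");
    #else
      std::printf("pthread support: no\n");
    #endif
    #if GTEST_HAS_TR1_TUPLE      // see "Choosing a TR1 Tuple Library" below
      std::printf("TR1 tuple available: yes\n");
    #else
      std::printf("TR1 tuple available: no\n");
    #endif
      SUCCEED();                 // no assertions needed; this only documents success
    }

Compile and link it like your_test.cc above (or link src/gtest_main.cc
to get a default main()).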
### Choosing a TR1 Tuple Library ###

Some Google Test features require the C++ Technical Report 1 (TR1)
tuple library, which is not yet available with all compilers.  The
good news is that Google Test implements a subset of TR1 tuple that's
enough for its own need, and will automatically use this when the
compiler doesn't provide TR1 tuple.

Usually you don't need to care about which tuple library Google Test
uses.  However, if your project already uses TR1 tuple, you need to
tell Google Test to use the same TR1 tuple library the rest of your
project uses, or the two tuple implementations will clash.  To do
that, add

    -DGTEST_USE_OWN_TR1_TUPLE=0

to the compiler flags while compiling Google Test and your tests.  If
you want to force Google Test to use its own tuple library, just add

    -DGTEST_USE_OWN_TR1_TUPLE=1

to the compiler flags instead.

If you don't want Google Test to use tuple at all, add

    -DGTEST_HAS_TR1_TUPLE=0

and all features using tuple will be disabled.

### Multi-threaded Tests ###

Google Test is thread-safe where the pthread library is available.
After #include "gtest/gtest.h", you can check the GTEST_IS_THREADSAFE
macro to see whether this is the case (yes if the macro is #defined to
1, no if it's undefined).

If Google Test doesn't correctly detect whether pthread is available
in your environment, you can force it with

    -DGTEST_HAS_PTHREAD=1

or

    -DGTEST_HAS_PTHREAD=0

When Google Test uses pthread, you may need to add flags to your
compiler and/or linker to select the pthread library, or you'll get
link errors.  If you use the CMake script or the deprecated Autotools
script, this is taken care of for you.  If you use your own build
script, you'll need to read your compiler and linker's manual to
figure out what flags to add.

### As a Shared Library (DLL) ###

Google Test is compact, so most users can build and link it as a
static library for simplicity.  You can choose to use Google Test as
a shared library (known as a DLL on Windows) if you prefer.

To compile *gtest* as a shared library, add

    -DGTEST_CREATE_SHARED_LIBRARY=1

to the compiler flags.  You'll also need to tell the linker to produce
a shared library instead - consult your linker's manual for how to do
it.

To compile your *tests* that use the gtest shared library, add

    -DGTEST_LINKED_AS_SHARED_LIBRARY=1

to the compiler flags.

Note: while the above steps aren't technically necessary today when
using some compilers (e.g. GCC), they may become necessary in the
future, if we decide to improve the speed of loading the library (see
http://gcc.gnu.org/wiki/Visibility for details).  Therefore you are
recommended to always add the above flags when using Google Test as a
shared library.  Otherwise a future release of Google Test may break
your build script.

### Avoiding Macro Name Clashes ###

In C++, macros don't obey namespaces.  Therefore two libraries that
both define a macro of the same name will clash if you #include both
definitions.  In case a Google Test macro clashes with another
library, you can force Google Test to rename its macro to avoid the
conflict.

Specifically, if both Google Test and some other code define macro
FOO, you can add

    -DGTEST_DONT_DEFINE_FOO=1

to the compiler flags to tell Google Test to change the macro's name
from FOO to GTEST_FOO.  Currently FOO can be FAIL, SUCCEED, or TEST.
For example, with -DGTEST_DONT_DEFINE_TEST=1, you'll need to write

    GTEST_TEST(SomeTest, DoesThis) { ... }

instead of

    TEST(SomeTest, DoesThis) { ... }

in order to define a test.
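The renaming can be seen in a short sketch, assuming the code is built
with -DGTEST_DONT_DEFINE_TEST=1 because some other header already
defines TEST (the file and test names are placeholders).  It also shows
the GTEST_IS_THREADSAFE check described under "Multi-threaded Tests":

    // clash_free_test.cc -- placeholder test built with -DGTEST_DONT_DEFINE_TEST=1.
    #include "gtest/gtest.h"

    // With GTEST_DONT_DEFINE_TEST=1 the short TEST macro is not defined,
    // so tests are declared with the prefixed GTEST_TEST macro instead.
    GTEST_TEST(SomeTest, DoesThis) {
    #if defined(GTEST_IS_THREADSAFE) && GTEST_IS_THREADSAFE
      // GTEST_IS_THREADSAFE is #defined to 1 when pthread was detected.
      EXPECT_TRUE(true) << "running with a thread-safe Google Test";
    #else
      EXPECT_TRUE(true) << "running without thread safety";
    #endif
    }

As before, link the file against libgtest.a plus a main(), or against
src/gtest_main.cc.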
Upgrading from an Earlier Version
---------------------------------

We strive to keep Google Test releases backward compatible.
Sometimes, though, we have to make some breaking changes for the
users' long-term benefits.  This section describes what you'll need to
do if you are upgrading from an earlier version of Google Test.

### Upgrading from 1.3.0 or Earlier ###

You may need to explicitly enable or disable Google Test's own TR1
tuple library.  See the instructions in section "Choosing a TR1 Tuple
Library".

### Upgrading from 1.4.0 or Earlier ###

The Autotools build script (configure + make) is no longer officially
supported.  You are encouraged to migrate to your own build system or
use CMake.  If you still need to use Autotools, you can find
instructions in the README file from Google Test 1.4.0.

On platforms where the pthread library is available, Google Test uses
it in order to be thread-safe.  See the "Multi-threaded Tests" section
for what this means to your build script.

If you use Microsoft Visual C++ 7.1 with exceptions disabled, Google
Test will no longer compile.  This should affect very few people, as a
large portion of STL (including <tr1/tuple>) doesn't compile in this
mode anyway.  We decided to stop supporting it in order to greatly
simplify Google Test's implementation.

Developing Google Test
----------------------

This section discusses how to make your own changes to Google Test.

### Testing Google Test Itself ###

To make sure your changes work as intended and don't break existing
functionality, you'll want to compile and run Google Test's own tests.
For that you can use CMake:

    mkdir mybuild
    cd mybuild
    cmake -Dgtest_build_tests=ON ${GTEST_DIR}

Make sure you have Python installed, as some of Google Test's tests
are written in Python.  If the cmake command complains about not being
able to find Python ("Could NOT find PythonInterp (missing:
PYTHON_EXECUTABLE)"), try telling it explicitly where your Python
executable can be found:

    cmake -DPYTHON_EXECUTABLE=path/to/python -Dgtest_build_tests=ON ${GTEST_DIR}

Next, you can build Google Test and all of its own tests.  On *nix,
this is usually done by 'make'.  To run the tests, do

    make test

All tests should pass.

### Regenerating Source Files ###

Some of Google Test's source files are generated from templates (not
in the C++ sense) using a script.  A template file is named FOO.pump,
where FOO is the name of the file it will generate.  For example, the
file include/gtest/internal/gtest-type-util.h.pump is used to generate
gtest-type-util.h in the same directory.

Normally you don't need to worry about regenerating the source files,
unless you need to modify them.  In that case, you should modify the
corresponding .pump files instead and run the pump.py Python script to
regenerate them.  You can find pump.py in the scripts/ directory.
Read the Pump manual [2] for how to use it.

  [2] http://code.google.com/p/googletest/wiki/PumpManual

### Contributing a Patch ###

We welcome patches.  Please read the Google Test developer's guide [3]
for how you can contribute.  In particular, make sure you have signed
the Contributor License Agreement, or we won't be able to accept the
patch.

  [3] http://code.google.com/p/googletest/wiki/GoogleTestDevGuide

Happy testing!
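As a closing sketch, the fatal/non-fatal assertion styles and death
tests mentioned in the Overview look like this in practice (file, test,
and function names are placeholders; the death test is only compiled
where Google Test supports death tests):

    // failure_kinds_test.cc -- placeholder tests showing assertion styles.
    #include "gtest/gtest.h"
    #include <cstdio>
    #include <cstdlib>

    TEST(FailureKinds, FatalVersusNonFatal) {
      EXPECT_EQ(4, 2 + 2);        // EXPECT_*: non-fatal, the test keeps running on failure
      ASSERT_NE(0, 1);            // ASSERT_*: fatal, returns from the test on failure
      EXPECT_STREQ("ok", "ok");
    }

    #if GTEST_HAS_DEATH_TEST      // defined to 1 where death tests are supported
    static void RequirePositive(int value) {
      if (value <= 0) {
        std::fprintf(stderr, "value must be positive\n");
        std::abort();
      }
    }

    // By convention, test cases containing death tests are named *DeathTest.
    TEST(FailureKindsDeathTest, DiesOnNonPositiveInput) {
      // The second argument is a regular expression matched against stderr.
      EXPECT_DEATH(RequirePositive(0), "must be positive");
    }
    #endif

As before, build it against libgtest.a with a main() of your own or
against src/gtest_main.cc.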
ffc-2019.1.0.post0/libs/gtest-1.7.0/aclocal.m4000066400000000000000000001253711346026536000201440ustar00rootroot00000000000000# generated automatically by aclocal 1.11.3 -*- Autoconf -*- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, # 2005, 2006, 2007, 2008, 2009, 2010, 2011 Free Software Foundation, # Inc. # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY, to the extent permitted by law; without # even the implied warranty of MERCHANTABILITY or FITNESS FOR A # PARTICULAR PURPOSE. m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl m4_if(m4_defn([AC_AUTOCONF_VERSION]), [2.68],, [m4_warning([this file was generated for autoconf 2.68. You have another version of autoconf. It may work, but is not guaranteed to. If you have problems, you may need to regenerate the build system entirely. To do so, use the procedure documented by the package, typically `autoreconf'.])]) # Copyright (C) 2002, 2003, 2005, 2006, 2007, 2008, 2011 Free Software # Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # AM_AUTOMAKE_VERSION(VERSION) # ---------------------------- # Automake X.Y traces this macro to ensure aclocal.m4 has been # generated from the m4 files accompanying Automake X.Y. # (This private macro should not be called outside this file.) AC_DEFUN([AM_AUTOMAKE_VERSION], [am__api_version='1.11' dnl Some users find AM_AUTOMAKE_VERSION and mistake it for a way to dnl require some minimum version. Point them to the right macro. m4_if([$1], [1.11.3], [], [AC_FATAL([Do not call $0, use AM_INIT_AUTOMAKE([$1]).])])dnl ]) # _AM_AUTOCONF_VERSION(VERSION) # ----------------------------- # aclocal traces this macro to find the Autoconf version. # This is a private macro too. Using m4_define simplifies # the logic in aclocal, which can simply ignore this definition. m4_define([_AM_AUTOCONF_VERSION], []) # AM_SET_CURRENT_AUTOMAKE_VERSION # ------------------------------- # Call AM_AUTOMAKE_VERSION and AM_AUTOMAKE_VERSION so they can be traced. # This function is AC_REQUIREd by AM_INIT_AUTOMAKE. AC_DEFUN([AM_SET_CURRENT_AUTOMAKE_VERSION], [AM_AUTOMAKE_VERSION([1.11.3])dnl m4_ifndef([AC_AUTOCONF_VERSION], [m4_copy([m4_PACKAGE_VERSION], [AC_AUTOCONF_VERSION])])dnl _AM_AUTOCONF_VERSION(m4_defn([AC_AUTOCONF_VERSION]))]) # AM_AUX_DIR_EXPAND -*- Autoconf -*- # Copyright (C) 2001, 2003, 2005, 2011 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # For projects using AC_CONFIG_AUX_DIR([foo]), Autoconf sets # $ac_aux_dir to `$srcdir/foo'. In other projects, it is set to # `$srcdir', `$srcdir/..', or `$srcdir/../..'. # # Of course, Automake must honor this variable whenever it calls a # tool from the auxiliary directory. The problem is that $srcdir (and # therefore $ac_aux_dir as well) can be either absolute or relative, # depending on how configure is run. 
This is pretty annoying, since # it makes $ac_aux_dir quite unusable in subdirectories: in the top # source directory, any form will work fine, but in subdirectories a # relative path needs to be adjusted first. # # $ac_aux_dir/missing # fails when called from a subdirectory if $ac_aux_dir is relative # $top_srcdir/$ac_aux_dir/missing # fails if $ac_aux_dir is absolute, # fails when called from a subdirectory in a VPATH build with # a relative $ac_aux_dir # # The reason of the latter failure is that $top_srcdir and $ac_aux_dir # are both prefixed by $srcdir. In an in-source build this is usually # harmless because $srcdir is `.', but things will broke when you # start a VPATH build or use an absolute $srcdir. # # So we could use something similar to $top_srcdir/$ac_aux_dir/missing, # iff we strip the leading $srcdir from $ac_aux_dir. That would be: # am_aux_dir='\$(top_srcdir)/'`expr "$ac_aux_dir" : "$srcdir//*\(.*\)"` # and then we would define $MISSING as # MISSING="\${SHELL} $am_aux_dir/missing" # This will work as long as MISSING is not called from configure, because # unfortunately $(top_srcdir) has no meaning in configure. # However there are other variables, like CC, which are often used in # configure, and could therefore not use this "fixed" $ac_aux_dir. # # Another solution, used here, is to always expand $ac_aux_dir to an # absolute PATH. The drawback is that using absolute paths prevent a # configured tree to be moved without reconfiguration. AC_DEFUN([AM_AUX_DIR_EXPAND], [dnl Rely on autoconf to set up CDPATH properly. AC_PREREQ([2.50])dnl # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` ]) # AM_CONDITIONAL -*- Autoconf -*- # Copyright (C) 1997, 2000, 2001, 2003, 2004, 2005, 2006, 2008 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 9 # AM_CONDITIONAL(NAME, SHELL-CONDITION) # ------------------------------------- # Define a conditional. AC_DEFUN([AM_CONDITIONAL], [AC_PREREQ(2.52)dnl ifelse([$1], [TRUE], [AC_FATAL([$0: invalid condition: $1])], [$1], [FALSE], [AC_FATAL([$0: invalid condition: $1])])dnl AC_SUBST([$1_TRUE])dnl AC_SUBST([$1_FALSE])dnl _AM_SUBST_NOTMAKE([$1_TRUE])dnl _AM_SUBST_NOTMAKE([$1_FALSE])dnl m4_define([_AM_COND_VALUE_$1], [$2])dnl if $2; then $1_TRUE= $1_FALSE='#' else $1_TRUE='#' $1_FALSE= fi AC_CONFIG_COMMANDS_PRE( [if test -z "${$1_TRUE}" && test -z "${$1_FALSE}"; then AC_MSG_ERROR([[conditional "$1" was never defined. Usually this means the macro was only invoked conditionally.]]) fi])]) # Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2009, # 2010, 2011 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 12 # There are a few dirty hacks below to avoid letting `AC_PROG_CC' be # written in clear, in which case automake, when reading aclocal.m4, # will think it sees a *use*, and therefore will trigger all it's # C support machinery. Also note that it means that autoscan, seeing # CC etc. in the Makefile, will ask for an AC_PROG_CC use... # _AM_DEPENDENCIES(NAME) # ---------------------- # See how the compiler implements dependency checking. # NAME is "CC", "CXX", "GCJ", or "OBJC". # We try a few techniques and use that to set a single cache variable. 
# # We don't AC_REQUIRE the corresponding AC_PROG_CC since the latter was # modified to invoke _AM_DEPENDENCIES(CC); we would have a circular # dependency, and given that the user is not expected to run this macro, # just rely on AC_PROG_CC. AC_DEFUN([_AM_DEPENDENCIES], [AC_REQUIRE([AM_SET_DEPDIR])dnl AC_REQUIRE([AM_OUTPUT_DEPENDENCY_COMMANDS])dnl AC_REQUIRE([AM_MAKE_INCLUDE])dnl AC_REQUIRE([AM_DEP_TRACK])dnl ifelse([$1], CC, [depcc="$CC" am_compiler_list=], [$1], CXX, [depcc="$CXX" am_compiler_list=], [$1], OBJC, [depcc="$OBJC" am_compiler_list='gcc3 gcc'], [$1], UPC, [depcc="$UPC" am_compiler_list=], [$1], GCJ, [depcc="$GCJ" am_compiler_list='gcc3 gcc'], [depcc="$$1" am_compiler_list=]) AC_CACHE_CHECK([dependency style of $depcc], [am_cv_$1_dependencies_compiler_type], [if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_$1_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n ['s/^#*\([a-zA-Z0-9]*\))$/\1/p'] < ./depcomp` fi am__universal=false m4_case([$1], [CC], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac], [CXX], [case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac]) for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. 
These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_$1_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_$1_dependencies_compiler_type=none fi ]) AC_SUBST([$1DEPMODE], [depmode=$am_cv_$1_dependencies_compiler_type]) AM_CONDITIONAL([am__fastdep$1], [ test "x$enable_dependency_tracking" != xno \ && test "$am_cv_$1_dependencies_compiler_type" = gcc3]) ]) # AM_SET_DEPDIR # ------------- # Choose a directory name for dependency files. # This macro is AC_REQUIREd in _AM_DEPENDENCIES AC_DEFUN([AM_SET_DEPDIR], [AC_REQUIRE([AM_SET_LEADING_DOT])dnl AC_SUBST([DEPDIR], ["${am__leading_dot}deps"])dnl ]) # AM_DEP_TRACK # ------------ AC_DEFUN([AM_DEP_TRACK], [AC_ARG_ENABLE(dependency-tracking, [ --disable-dependency-tracking speeds up one-time build --enable-dependency-tracking do not reject slow dependency extractors]) if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi AM_CONDITIONAL([AMDEP], [test "x$enable_dependency_tracking" != xno]) AC_SUBST([AMDEPBACKSLASH])dnl _AM_SUBST_NOTMAKE([AMDEPBACKSLASH])dnl AC_SUBST([am__nodep])dnl _AM_SUBST_NOTMAKE([am__nodep])dnl ]) # Generate code to set up dependency tracking. -*- Autoconf -*- # Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2008 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. #serial 5 # _AM_OUTPUT_DEPENDENCY_COMMANDS # ------------------------------ AC_DEFUN([_AM_OUTPUT_DEPENDENCY_COMMANDS], [{ # Autoconf 2.62 quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. case $CONFIG_FILES in *\'*) eval set x "$CONFIG_FILES" ;; *) set x $CONFIG_FILES ;; esac shift for mf do # Strip MF so we end up with the name of the file. mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named `Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. # Grep'ing the whole file is not good either: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. 
if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`AS_DIRNAME("$mf")` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running `make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`AS_DIRNAME(["$file"])` AS_MKDIR_P([$dirpart/$fdir]) # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ])# _AM_OUTPUT_DEPENDENCY_COMMANDS # AM_OUTPUT_DEPENDENCY_COMMANDS # ----------------------------- # This macro should only be invoked once -- use via AC_REQUIRE. # # This code is only required when automatic dependency tracking # is enabled. FIXME. This creates each `.P' file that we will # need in order to bootstrap the dependency handling code. AC_DEFUN([AM_OUTPUT_DEPENDENCY_COMMANDS], [AC_CONFIG_COMMANDS([depfiles], [test x"$AMDEP_TRUE" != x"" || _AM_OUTPUT_DEPENDENCY_COMMANDS], [AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir"]) ]) # Do all the work for Automake. -*- Autoconf -*- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, # 2005, 2006, 2008, 2009 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 16 # This macro actually does too much. Some checks are only needed if # your package does certain things. But this isn't really a big deal. # AM_INIT_AUTOMAKE(PACKAGE, VERSION, [NO-DEFINE]) # AM_INIT_AUTOMAKE([OPTIONS]) # ----------------------------------------------- # The call with PACKAGE and VERSION arguments is the old style # call (pre autoconf-2.50), which is being phased out. PACKAGE # and VERSION should now be passed to AC_INIT and removed from # the call to AM_INIT_AUTOMAKE. # We support both call styles for the transition. After # the next Automake release, Autoconf can make the AC_INIT # arguments mandatory, and then we can depend on a new Autoconf # release and drop the old call support. AC_DEFUN([AM_INIT_AUTOMAKE], [AC_PREREQ([2.62])dnl dnl Autoconf wants to disallow AM_ names. We explicitly allow dnl the ones we care about. m4_pattern_allow([^AM_[A-Z]+FLAGS$])dnl AC_REQUIRE([AM_SET_CURRENT_AUTOMAKE_VERSION])dnl AC_REQUIRE([AC_PROG_INSTALL])dnl if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." 
AC_SUBST([am__isrc], [' -I$(srcdir)'])_AM_SUBST_NOTMAKE([am__isrc])dnl # test to see if srcdir already configured if test -f $srcdir/config.status; then AC_MSG_ERROR([source directory already configured; run "make distclean" there first]) fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi AC_SUBST([CYGPATH_W]) # Define the identity of the package. dnl Distinguish between old-style and new-style calls. m4_ifval([$2], [m4_ifval([$3], [_AM_SET_OPTION([no-define])])dnl AC_SUBST([PACKAGE], [$1])dnl AC_SUBST([VERSION], [$2])], [_AM_SET_OPTIONS([$1])dnl dnl Diagnose old-style AC_INIT with new-style AM_AUTOMAKE_INIT. m4_if(m4_ifdef([AC_PACKAGE_NAME], 1)m4_ifdef([AC_PACKAGE_VERSION], 1), 11,, [m4_fatal([AC_INIT should be called with package and version arguments])])dnl AC_SUBST([PACKAGE], ['AC_PACKAGE_TARNAME'])dnl AC_SUBST([VERSION], ['AC_PACKAGE_VERSION'])])dnl _AM_IF_OPTION([no-define],, [AC_DEFINE_UNQUOTED(PACKAGE, "$PACKAGE", [Name of package]) AC_DEFINE_UNQUOTED(VERSION, "$VERSION", [Version number of package])])dnl # Some tools Automake needs. AC_REQUIRE([AM_SANITY_CHECK])dnl AC_REQUIRE([AC_ARG_PROGRAM])dnl AM_MISSING_PROG(ACLOCAL, aclocal-${am__api_version}) AM_MISSING_PROG(AUTOCONF, autoconf) AM_MISSING_PROG(AUTOMAKE, automake-${am__api_version}) AM_MISSING_PROG(AUTOHEADER, autoheader) AM_MISSING_PROG(MAKEINFO, makeinfo) AC_REQUIRE([AM_PROG_INSTALL_SH])dnl AC_REQUIRE([AM_PROG_INSTALL_STRIP])dnl AC_REQUIRE([AM_PROG_MKDIR_P])dnl # We need awk for the "check" target. The system "awk" is bad on # some platforms. AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([AC_PROG_MAKE_SET])dnl AC_REQUIRE([AM_SET_LEADING_DOT])dnl _AM_IF_OPTION([tar-ustar], [_AM_PROG_TAR([ustar])], [_AM_IF_OPTION([tar-pax], [_AM_PROG_TAR([pax])], [_AM_PROG_TAR([v7])])]) _AM_IF_OPTION([no-dependencies],, [AC_PROVIDE_IFELSE([AC_PROG_CC], [_AM_DEPENDENCIES(CC)], [define([AC_PROG_CC], defn([AC_PROG_CC])[_AM_DEPENDENCIES(CC)])])dnl AC_PROVIDE_IFELSE([AC_PROG_CXX], [_AM_DEPENDENCIES(CXX)], [define([AC_PROG_CXX], defn([AC_PROG_CXX])[_AM_DEPENDENCIES(CXX)])])dnl AC_PROVIDE_IFELSE([AC_PROG_OBJC], [_AM_DEPENDENCIES(OBJC)], [define([AC_PROG_OBJC], defn([AC_PROG_OBJC])[_AM_DEPENDENCIES(OBJC)])])dnl ]) _AM_IF_OPTION([silent-rules], [AC_REQUIRE([AM_SILENT_RULES])])dnl dnl The `parallel-tests' driver may need to know about EXEEXT, so add the dnl `am__EXEEXT' conditional if _AM_COMPILER_EXEEXT was seen. This macro dnl is hooked onto _AC_COMPILER_EXEEXT early, see below. AC_CONFIG_COMMANDS_PRE(dnl [m4_provide_if([_AM_COMPILER_EXEEXT], [AM_CONDITIONAL([am__EXEEXT], [test -n "$EXEEXT"])])])dnl ]) dnl Hook into `_AC_COMPILER_EXEEXT' early to learn its expansion. Do not dnl add the conditional right here, as _AC_COMPILER_EXEEXT may be further dnl mangled by Autoconf and run in a shell conditional statement. m4_define([_AC_COMPILER_EXEEXT], m4_defn([_AC_COMPILER_EXEEXT])[m4_provide([_AM_COMPILER_EXEEXT])]) # When config.status generates a header, we must update the stamp-h file. # This file resides in the same directory as the config header # that is generated. The stamp files are numbered to have different names. # Autoconf calls _AC_AM_CONFIG_HEADER_HOOK (when defined) in the # loop where config.status creates the headers, so we can generate # our stamp files there. AC_DEFUN([_AC_AM_CONFIG_HEADER_HOOK], [# Compute $1's index in $config_headers. 
_am_arg=$1 _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`AS_DIRNAME(["$_am_arg"])`/stamp-h[]$_am_stamp_count]) # Copyright (C) 2001, 2003, 2005, 2008, 2011 Free Software Foundation, # Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # AM_PROG_INSTALL_SH # ------------------ # Define $install_sh. AC_DEFUN([AM_PROG_INSTALL_SH], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi AC_SUBST(install_sh)]) # Copyright (C) 2003, 2005 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 2 # Check whether the underlying file-system supports filenames # with a leading dot. For instance MS-DOS doesn't. AC_DEFUN([AM_SET_LEADING_DOT], [rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null AC_SUBST([am__leading_dot])]) # Check to see how 'make' treats includes. -*- Autoconf -*- # Copyright (C) 2001, 2002, 2003, 2005, 2009 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 4 # AM_MAKE_INCLUDE() # ----------------- # Check to see how make treats includes. AC_DEFUN([AM_MAKE_INCLUDE], [am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. AC_MSG_CHECKING([for style of include used by $am_make]) am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from `make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi AC_SUBST([am__include]) AC_SUBST([am__quote]) AC_MSG_RESULT([$_am_result]) rm -f confinc confmf ]) # Fake the existence of programs that GNU maintainers use. -*- Autoconf -*- # Copyright (C) 1997, 1999, 2000, 2001, 2003, 2004, 2005, 2008 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 6 # AM_MISSING_PROG(NAME, PROGRAM) # ------------------------------ AC_DEFUN([AM_MISSING_PROG], [AC_REQUIRE([AM_MISSING_HAS_RUN]) $1=${$1-"${am_missing_run}$2"} AC_SUBST($1)]) # AM_MISSING_HAS_RUN # ------------------ # Define MISSING if not defined so far and test if it supports --run. # If it does, set am_missing_run to use it, otherwise, to nothing. 
AC_DEFUN([AM_MISSING_HAS_RUN], [AC_REQUIRE([AM_AUX_DIR_EXPAND])dnl AC_REQUIRE_AUX_FILE([missing])dnl if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= AC_MSG_WARN([`missing' script is too old or missing]) fi ]) # Copyright (C) 2003, 2004, 2005, 2006, 2011 Free Software Foundation, # Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # AM_PROG_MKDIR_P # --------------- # Check for `mkdir -p'. AC_DEFUN([AM_PROG_MKDIR_P], [AC_PREREQ([2.60])dnl AC_REQUIRE([AC_PROG_MKDIR_P])dnl dnl Automake 1.8 to 1.9.6 used to define mkdir_p. We now use MKDIR_P, dnl while keeping a definition of mkdir_p for backward compatibility. dnl @MKDIR_P@ is magic: AC_OUTPUT adjusts its value for each Makefile. dnl However we cannot define mkdir_p as $(MKDIR_P) for the sake of dnl Makefile.ins that do not define MKDIR_P, so we do our own dnl adjustment using top_builddir (which is defined more often than dnl MKDIR_P). AC_SUBST([mkdir_p], ["$MKDIR_P"])dnl case $mkdir_p in [[\\/$]]* | ?:[[\\/]]*) ;; */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; esac ]) # Helper functions for option handling. -*- Autoconf -*- # Copyright (C) 2001, 2002, 2003, 2005, 2008, 2010 Free Software # Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 5 # _AM_MANGLE_OPTION(NAME) # ----------------------- AC_DEFUN([_AM_MANGLE_OPTION], [[_AM_OPTION_]m4_bpatsubst($1, [[^a-zA-Z0-9_]], [_])]) # _AM_SET_OPTION(NAME) # -------------------- # Set option NAME. Presently that only means defining a flag for this option. AC_DEFUN([_AM_SET_OPTION], [m4_define(_AM_MANGLE_OPTION([$1]), 1)]) # _AM_SET_OPTIONS(OPTIONS) # ------------------------ # OPTIONS is a space-separated list of Automake options. AC_DEFUN([_AM_SET_OPTIONS], [m4_foreach_w([_AM_Option], [$1], [_AM_SET_OPTION(_AM_Option)])]) # _AM_IF_OPTION(OPTION, IF-SET, [IF-NOT-SET]) # ------------------------------------------- # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. AC_DEFUN([_AM_IF_OPTION], [m4_ifset(_AM_MANGLE_OPTION([$1]), [$2], [$3])]) # Copyright (C) 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2008, 2009, # 2011 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 2 # AM_PATH_PYTHON([MINIMUM-VERSION], [ACTION-IF-FOUND], [ACTION-IF-NOT-FOUND]) # --------------------------------------------------------------------------- # Adds support for distributing Python modules and packages. To # install modules, copy them to $(pythondir), using the python_PYTHON # automake variable. To install a package with the same name as the # automake package, install to $(pkgpythondir), or use the # pkgpython_PYTHON automake variable. # # The variables $(pyexecdir) and $(pkgpyexecdir) are provided as # locations to install python extension modules (shared libraries). # Another macro is required to find the appropriate flags to compile # extension modules. 
# # If your package is configured with a different prefix to python, # users will have to add the install directory to the PYTHONPATH # environment variable, or create a .pth file (see the python # documentation for details). # # If the MINIMUM-VERSION argument is passed, AM_PATH_PYTHON will # cause an error if the version of python installed on the system # doesn't meet the requirement. MINIMUM-VERSION should consist of # numbers and dots only. AC_DEFUN([AM_PATH_PYTHON], [ dnl Find a Python interpreter. Python versions prior to 2.0 are not dnl supported. (2.0 was released on October 16, 2000). m4_define_default([_AM_PYTHON_INTERPRETER_LIST], [python python2 python3 python3.2 python3.1 python3.0 python2.7 dnl python2.6 python2.5 python2.4 python2.3 python2.2 python2.1 python2.0]) AC_ARG_VAR([PYTHON], [the Python interpreter]) m4_if([$1],[],[ dnl No version check is needed. # Find any Python interpreter. if test -z "$PYTHON"; then AC_PATH_PROGS([PYTHON], _AM_PYTHON_INTERPRETER_LIST, :) fi am_display_PYTHON=python ], [ dnl A version check is needed. if test -n "$PYTHON"; then # If the user set $PYTHON, use it and don't search something else. AC_MSG_CHECKING([whether $PYTHON version >= $1]) AM_PYTHON_CHECK_VERSION([$PYTHON], [$1], [AC_MSG_RESULT(yes)], [AC_MSG_ERROR(too old)]) am_display_PYTHON=$PYTHON else # Otherwise, try each interpreter until we find one that satisfies # VERSION. AC_CACHE_CHECK([for a Python interpreter with version >= $1], [am_cv_pathless_PYTHON],[ for am_cv_pathless_PYTHON in _AM_PYTHON_INTERPRETER_LIST none; do test "$am_cv_pathless_PYTHON" = none && break AM_PYTHON_CHECK_VERSION([$am_cv_pathless_PYTHON], [$1], [break]) done]) # Set $PYTHON to the absolute path of $am_cv_pathless_PYTHON. if test "$am_cv_pathless_PYTHON" = none; then PYTHON=: else AC_PATH_PROG([PYTHON], [$am_cv_pathless_PYTHON]) fi am_display_PYTHON=$am_cv_pathless_PYTHON fi ]) if test "$PYTHON" = :; then dnl Run any user-specified action, or abort. m4_default([$3], [AC_MSG_ERROR([no suitable Python interpreter found])]) else dnl Query Python for its version number. Getting [:3] seems to be dnl the best way to do this; it's what "site.py" does in the standard dnl library. AC_CACHE_CHECK([for $am_display_PYTHON version], [am_cv_python_version], [am_cv_python_version=`$PYTHON -c "import sys; sys.stdout.write(sys.version[[:3]])"`]) AC_SUBST([PYTHON_VERSION], [$am_cv_python_version]) dnl Use the values of $prefix and $exec_prefix for the corresponding dnl values of PYTHON_PREFIX and PYTHON_EXEC_PREFIX. These are made dnl distinct variables so they can be overridden if need be. However, dnl general consensus is that you shouldn't need this ability. AC_SUBST([PYTHON_PREFIX], ['${prefix}']) AC_SUBST([PYTHON_EXEC_PREFIX], ['${exec_prefix}']) dnl At times (like when building shared libraries) you may want dnl to know which OS platform Python thinks this is. AC_CACHE_CHECK([for $am_display_PYTHON platform], [am_cv_python_platform], [am_cv_python_platform=`$PYTHON -c "import sys; sys.stdout.write(sys.platform)"`]) AC_SUBST([PYTHON_PLATFORM], [$am_cv_python_platform]) dnl Set up 4 directories: dnl pythondir -- where to install python scripts. This is the dnl site-packages directory, not the python standard library dnl directory like in previous automake betas. This behavior dnl is more consistent with lispdir.m4 for example. dnl Query distutils for this directory. 
AC_CACHE_CHECK([for $am_display_PYTHON script directory], [am_cv_python_pythondir], [if test "x$prefix" = xNONE then am_py_prefix=$ac_default_prefix else am_py_prefix=$prefix fi am_cv_python_pythondir=`$PYTHON -c "import sys; from distutils import sysconfig; sys.stdout.write(sysconfig.get_python_lib(0,0,prefix='$am_py_prefix'))" 2>/dev/null` case $am_cv_python_pythondir in $am_py_prefix*) am__strip_prefix=`echo "$am_py_prefix" | sed 's|.|.|g'` am_cv_python_pythondir=`echo "$am_cv_python_pythondir" | sed "s,^$am__strip_prefix,$PYTHON_PREFIX,"` ;; *) case $am_py_prefix in /usr|/System*) ;; *) am_cv_python_pythondir=$PYTHON_PREFIX/lib/python$PYTHON_VERSION/site-packages ;; esac ;; esac ]) AC_SUBST([pythondir], [$am_cv_python_pythondir]) dnl pkgpythondir -- $PACKAGE directory under pythondir. Was dnl PYTHON_SITE_PACKAGE in previous betas, but this naming is dnl more consistent with the rest of automake. AC_SUBST([pkgpythondir], [\${pythondir}/$PACKAGE]) dnl pyexecdir -- directory for installing python extension modules dnl (shared libraries) dnl Query distutils for this directory. AC_CACHE_CHECK([for $am_display_PYTHON extension module directory], [am_cv_python_pyexecdir], [if test "x$exec_prefix" = xNONE then am_py_exec_prefix=$am_py_prefix else am_py_exec_prefix=$exec_prefix fi am_cv_python_pyexecdir=`$PYTHON -c "import sys; from distutils import sysconfig; sys.stdout.write(sysconfig.get_python_lib(1,0,prefix='$am_py_exec_prefix'))" 2>/dev/null` case $am_cv_python_pyexecdir in $am_py_exec_prefix*) am__strip_prefix=`echo "$am_py_exec_prefix" | sed 's|.|.|g'` am_cv_python_pyexecdir=`echo "$am_cv_python_pyexecdir" | sed "s,^$am__strip_prefix,$PYTHON_EXEC_PREFIX,"` ;; *) case $am_py_exec_prefix in /usr|/System*) ;; *) am_cv_python_pyexecdir=$PYTHON_EXEC_PREFIX/lib/python$PYTHON_VERSION/site-packages ;; esac ;; esac ]) AC_SUBST([pyexecdir], [$am_cv_python_pyexecdir]) dnl pkgpyexecdir -- $(pyexecdir)/$(PACKAGE) AC_SUBST([pkgpyexecdir], [\${pyexecdir}/$PACKAGE]) dnl Run any user-specified action. $2 fi ]) # AM_PYTHON_CHECK_VERSION(PROG, VERSION, [ACTION-IF-TRUE], [ACTION-IF-FALSE]) # --------------------------------------------------------------------------- # Run ACTION-IF-TRUE if the Python interpreter PROG has version >= VERSION. # Run ACTION-IF-FALSE otherwise. # This test uses sys.hexversion instead of the string equivalent (first # word of sys.version), in order to cope with versions such as 2.2c1. # This supports Python 2.0 or higher. (2.0 was released on October 16, 2000). AC_DEFUN([AM_PYTHON_CHECK_VERSION], [prog="import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '$2'.split('.'))) + [[0, 0, 0]] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[[i]] sys.exit(sys.hexversion < minverhex)" AS_IF([AM_RUN_LOG([$1 -c "$prog"])], [$3], [$4])]) # Copyright (C) 2001, 2003, 2005, 2011 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # AM_RUN_LOG(COMMAND) # ------------------- # Run COMMAND, save the exit status in ac_status, and log it. # (This has been adapted from Autoconf's _AC_RUN_LOG macro.) 
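# As a concrete illustration of the AM_PYTHON_CHECK_VERSION test above
# (version 2.5 is just an example argument): minver becomes [2, 5, 0, 0], so
#   minverhex = (((0*256 + 2)*256 + 5)*256 + 0)*256 + 0 = 0x02050000
# and any interpreter whose sys.hexversion is at least 0x02050000
# (i.e. Python 2.5.0 or newer) makes the embedded program exit 0.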
AC_DEFUN([AM_RUN_LOG], [{ echo "$as_me:$LINENO: $1" >&AS_MESSAGE_LOG_FD ($1) >&AS_MESSAGE_LOG_FD 2>&AS_MESSAGE_LOG_FD ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD (exit $ac_status); }]) # Check to make sure that the build environment is sane. -*- Autoconf -*- # Copyright (C) 1996, 1997, 2000, 2001, 2003, 2005, 2008 # Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 5 # AM_SANITY_CHECK # --------------- AC_DEFUN([AM_SANITY_CHECK], [AC_MSG_CHECKING([whether build environment is sane]) # Just in case sleep 1 echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[[\\\"\#\$\&\'\`$am_lf]]*) AC_MSG_ERROR([unsafe absolute working directory name]);; esac case $srcdir in *[[\\\"\#\$\&\'\`$am_lf\ \ ]]*) AC_MSG_ERROR([unsafe srcdir value: `$srcdir']);; esac # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$[*]" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi rm -f conftest.file if test "$[*]" != "X $srcdir/configure conftest.file" \ && test "$[*]" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". AC_MSG_ERROR([ls -t appears to fail. Make sure there is not a broken alias in your environment]) fi test "$[2]" = conftest.file ) then # Ok. : else AC_MSG_ERROR([newly created file is older than distributed files! Check your system clock]) fi AC_MSG_RESULT(yes)]) # Copyright (C) 2001, 2003, 2005, 2011 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 1 # AM_PROG_INSTALL_STRIP # --------------------- # One issue with vendor `install' (even GNU) is that you can't # specify the program used to strip binaries. This is especially # annoying in cross-compiling environments, where the build's strip # is unlikely to handle the host's binaries. # Fortunately install-sh will honor a STRIPPROG variable, so we # always use install-sh in `make install-strip', and initialize # STRIPPROG with the value of the STRIP variable (set by the user). AC_DEFUN([AM_PROG_INSTALL_STRIP], [AC_REQUIRE([AM_PROG_INSTALL_SH])dnl # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. dnl Don't test for $cross_compiling = yes, because it might be `maybe'. 
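dnl For illustration only: with a cross toolchain a user would typically run
dnl something along the lines of
dnl   make install-strip STRIP=arm-linux-gnueabihf-strip
dnl (the tool name is an arbitrary example); install-sh then strips the
dnl installed binaries with that program rather than the build machine's strip.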
if test "$cross_compiling" != no; then AC_CHECK_TOOL([STRIP], [strip], :) fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" AC_SUBST([INSTALL_STRIP_PROGRAM])]) # Copyright (C) 2006, 2008, 2010 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 3 # _AM_SUBST_NOTMAKE(VARIABLE) # --------------------------- # Prevent Automake from outputting VARIABLE = @VARIABLE@ in Makefile.in. # This macro is traced by Automake. AC_DEFUN([_AM_SUBST_NOTMAKE]) # AM_SUBST_NOTMAKE(VARIABLE) # -------------------------- # Public sister of _AM_SUBST_NOTMAKE. AC_DEFUN([AM_SUBST_NOTMAKE], [_AM_SUBST_NOTMAKE($@)]) # Check how to create a tarball. -*- Autoconf -*- # Copyright (C) 2004, 2005, 2012 Free Software Foundation, Inc. # # This file is free software; the Free Software Foundation # gives unlimited permission to copy and/or distribute it, # with or without modifications, as long as this notice is preserved. # serial 2 # _AM_PROG_TAR(FORMAT) # -------------------- # Check how to create a tarball in format FORMAT. # FORMAT should be one of `v7', `ustar', or `pax'. # # Substitute a variable $(am__tar) that is a command # writing to stdout a FORMAT-tarball containing the directory # $tardir. # tardir=directory && $(am__tar) > result.tar # # Substitute a variable $(am__untar) that extract such # a tarball read from stdin. # $(am__untar) < result.tar AC_DEFUN([_AM_PROG_TAR], [# Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AC_SUBST([AMTAR], ['$${TAR-tar}']) m4_if([$1], [v7], [am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -'], [m4_case([$1], [ustar],, [pax],, [m4_fatal([Unknown tar format])]) AC_MSG_CHECKING([how to create a $1 tar archive]) # Loop over all known methods to create a tar archive until one works. _am_tools='gnutar m4_if([$1], [ustar], [plaintar]) pax cpio none' _am_tools=${am_cv_prog_tar_$1-$_am_tools} # Do not fold the above two line into one, because Tru64 sh and # Solaris sh will not grok spaces in the rhs of `-'. for _am_tool in $_am_tools do case $_am_tool in gnutar) for _am_tar in tar gnutar gtar; do AM_RUN_LOG([$_am_tar --version]) && break done am__tar="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$$tardir"' am__tar_="$_am_tar --format=m4_if([$1], [pax], [posix], [$1]) -chf - "'"$tardir"' am__untar="$_am_tar -xf -" ;; plaintar) # Must skip GNU tar: if it does not support --format= it doesn't create # ustar tarball either. (tar --version) >/dev/null 2>&1 && continue am__tar='tar chf - "$$tardir"' am__tar_='tar chf - "$tardir"' am__untar='tar xf -' ;; pax) am__tar='pax -L -x $1 -w "$$tardir"' am__tar_='pax -L -x $1 -w "$tardir"' am__untar='pax -r' ;; cpio) am__tar='find "$$tardir" -print | cpio -o -H $1 -L' am__tar_='find "$tardir" -print | cpio -o -H $1 -L' am__untar='cpio -i -H $1 -d' ;; none) am__tar=false am__tar_=false am__untar=false ;; esac # If the value was cached, stop now. We just wanted to have am__tar # and am__untar set. 
test -n "${am_cv_prog_tar_$1}" && break # tar/untar a dummy directory, and stop if the command works rm -rf conftest.dir mkdir conftest.dir echo GrepMe > conftest.dir/file AM_RUN_LOG([tardir=conftest.dir && eval $am__tar_ >conftest.tar]) rm -rf conftest.dir if test -s conftest.tar; then AM_RUN_LOG([$am__untar /dev/null 2>&1 && break fi done rm -rf conftest.dir AC_CACHE_VAL([am_cv_prog_tar_$1], [am_cv_prog_tar_$1=$_am_tool]) AC_MSG_RESULT([$am_cv_prog_tar_$1])]) AC_SUBST([am__tar]) AC_SUBST([am__untar]) ]) # _AM_PROG_TAR m4_include([m4/libtool.m4]) m4_include([m4/ltoptions.m4]) m4_include([m4/ltsugar.m4]) m4_include([m4/ltversion.m4]) m4_include([m4/lt~obsolete.m4]) ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/000077500000000000000000000000001346026536000201655ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/config.guess000077500000000000000000001274321346026536000225160ustar00rootroot00000000000000#! /bin/sh # Attempt to guess a canonical system name. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, # 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012 Free Software Foundation, Inc. timestamp='2012-02-10' # This file is free software; you can redistribute it and/or modify it # under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Per Bothner. Please send patches (context # diff format) to and include a ChangeLog # entry. # # This script attempts to guess a canonical system name similar to # config.sub. If it succeeds, it prints the system name on stdout, and # exits with 0. Otherwise, it exits with 1. # # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.guess;hb=HEAD me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] Output the configuration name of the system \`$me' is run on. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.guess ($timestamp) Originally written by Per Bothner. Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. 
break ;; -* ) echo "$me: invalid option $1$help" >&2 exit 1 ;; * ) break ;; esac done if test $# != 0; then echo "$me: too many arguments$help" >&2 exit 1 fi trap 'exit 1' 1 2 15 # CC_FOR_BUILD -- compiler used by this script. Note that the use of a # compiler to aid in system detection is discouraged as it requires # temporary files to be created and, as you can see below, it is a # headache to deal with in a portable fashion. # Historically, `CC_FOR_BUILD' used to be named `HOST_CC'. We still # use `HOST_CC' if defined, but it is deprecated. # Portable tmp directory creation inspired by the Autoconf team. set_cc_for_build=' trap "exitcode=\$?; (rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null) && exit \$exitcode" 0 ; trap "rm -f \$tmpfiles 2>/dev/null; rmdir \$tmp 2>/dev/null; exit 1" 1 2 13 15 ; : ${TMPDIR=/tmp} ; { tmp=`(umask 077 && mktemp -d "$TMPDIR/cgXXXXXX") 2>/dev/null` && test -n "$tmp" && test -d "$tmp" ; } || { test -n "$RANDOM" && tmp=$TMPDIR/cg$$-$RANDOM && (umask 077 && mkdir $tmp) ; } || { tmp=$TMPDIR/cg-$$ && (umask 077 && mkdir $tmp) && echo "Warning: creating insecure temp directory" >&2 ; } || { echo "$me: cannot create a temporary directory in $TMPDIR" >&2 ; exit 1 ; } ; dummy=$tmp/dummy ; tmpfiles="$dummy.c $dummy.o $dummy.rel $dummy" ; case $CC_FOR_BUILD,$HOST_CC,$CC in ,,) echo "int x;" > $dummy.c ; for c in cc gcc c89 c99 ; do if ($c -c -o $dummy.o $dummy.c) >/dev/null 2>&1 ; then CC_FOR_BUILD="$c"; break ; fi ; done ; if test x"$CC_FOR_BUILD" = x ; then CC_FOR_BUILD=no_compiler_found ; fi ;; ,,*) CC_FOR_BUILD=$CC ;; ,*,*) CC_FOR_BUILD=$HOST_CC ;; esac ; set_cc_for_build= ;' # This is needed to find uname on a Pyramid OSx when run in the BSD universe. # (ghazi@noc.rutgers.edu 1994-08-24) if (test -f /.attbin/uname) >/dev/null 2>&1 ; then PATH=$PATH:/.attbin ; export PATH fi UNAME_MACHINE=`(uname -m) 2>/dev/null` || UNAME_MACHINE=unknown UNAME_RELEASE=`(uname -r) 2>/dev/null` || UNAME_RELEASE=unknown UNAME_SYSTEM=`(uname -s) 2>/dev/null` || UNAME_SYSTEM=unknown UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown # Note: order is significant - the case branches are not exclusive. case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in *:NetBSD:*:*) # NetBSD (nbsd) targets should (where applicable) match one or # more of the tuples: *-*-netbsdelf*, *-*-netbsdaout*, # *-*-netbsdecoff* and *-*-netbsd*. For targets that recently # switched to ELF, *-*-netbsd* would select the old # object file format. This provides both forward # compatibility and a consistent mechanism for selecting the # object file format. # # Note: NetBSD doesn't particularly care about the vendor # portion of the name. We always set it to "unknown". sysctl="sysctl -n hw.machine_arch" UNAME_MACHINE_ARCH=`(/sbin/$sysctl 2>/dev/null || \ /usr/sbin/$sysctl 2>/dev/null || echo unknown)` case "${UNAME_MACHINE_ARCH}" in armeb) machine=armeb-unknown ;; arm*) machine=arm-unknown ;; sh3el) machine=shl-unknown ;; sh3eb) machine=sh-unknown ;; sh5el) machine=sh5le-unknown ;; *) machine=${UNAME_MACHINE_ARCH}-unknown ;; esac # The Operating System including object format, if it has switched # to ELF recently, or will in the future. case "${UNAME_MACHINE_ARCH}" in arm*|i386|m68k|ns32k|sh3*|sparc|vax) eval $set_cc_for_build if echo __ELF__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ELF__ then # Once all utilities can be ECOFF (netbsdecoff) or a.out (netbsdaout). # Return netbsd for either. FIX? 
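# (Illustrative output only: on a NetBSD/amd64 host the machine/os/release
# pieces assembled by the echo further below would yield something like
# "x86_64-unknown-netbsd9.0".)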
os=netbsd else os=netbsdelf fi ;; *) os=netbsd ;; esac # The OS release # Debian GNU/NetBSD machines have a different userland, and # thus, need a distinct triplet. However, they do not need # kernel version information, so it can be replaced with a # suitable tag, in the style of linux-gnu. case "${UNAME_VERSION}" in Debian*) release='-gnu' ;; *) release=`echo ${UNAME_RELEASE}|sed -e 's/[-_].*/\./'` ;; esac # Since CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM: # contains redundant information, the shorter form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM is used. echo "${machine}-${os}${release}" exit ;; *:OpenBSD:*:*) UNAME_MACHINE_ARCH=`arch | sed 's/OpenBSD.//'` echo ${UNAME_MACHINE_ARCH}-unknown-openbsd${UNAME_RELEASE} exit ;; *:ekkoBSD:*:*) echo ${UNAME_MACHINE}-unknown-ekkobsd${UNAME_RELEASE} exit ;; *:SolidBSD:*:*) echo ${UNAME_MACHINE}-unknown-solidbsd${UNAME_RELEASE} exit ;; macppc:MirBSD:*:*) echo powerpc-unknown-mirbsd${UNAME_RELEASE} exit ;; *:MirBSD:*:*) echo ${UNAME_MACHINE}-unknown-mirbsd${UNAME_RELEASE} exit ;; alpha:OSF1:*:*) case $UNAME_RELEASE in *4.0) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $3}'` ;; *5.*) UNAME_RELEASE=`/usr/sbin/sizer -v | awk '{print $4}'` ;; esac # According to Compaq, /usr/sbin/psrinfo has been available on # OSF/1 and Tru64 systems produced since 1995. I hope that # covers most systems running today. This code pipes the CPU # types through head -n 1, so we only detect the type of CPU 0. ALPHA_CPU_TYPE=`/usr/sbin/psrinfo -v | sed -n -e 's/^ The alpha \(.*\) processor.*$/\1/p' | head -n 1` case "$ALPHA_CPU_TYPE" in "EV4 (21064)") UNAME_MACHINE="alpha" ;; "EV4.5 (21064)") UNAME_MACHINE="alpha" ;; "LCA4 (21066/21068)") UNAME_MACHINE="alpha" ;; "EV5 (21164)") UNAME_MACHINE="alphaev5" ;; "EV5.6 (21164A)") UNAME_MACHINE="alphaev56" ;; "EV5.6 (21164PC)") UNAME_MACHINE="alphapca56" ;; "EV5.7 (21164PC)") UNAME_MACHINE="alphapca57" ;; "EV6 (21264)") UNAME_MACHINE="alphaev6" ;; "EV6.7 (21264A)") UNAME_MACHINE="alphaev67" ;; "EV6.8CB (21264C)") UNAME_MACHINE="alphaev68" ;; "EV6.8AL (21264B)") UNAME_MACHINE="alphaev68" ;; "EV6.8CX (21264D)") UNAME_MACHINE="alphaev68" ;; "EV6.9A (21264/EV69A)") UNAME_MACHINE="alphaev69" ;; "EV7 (21364)") UNAME_MACHINE="alphaev7" ;; "EV7.9 (21364A)") UNAME_MACHINE="alphaev79" ;; esac # A Pn.n version is a patched version. # A Vn.n version is a released version. # A Tn.n version is a released field test version. # A Xn.n version is an unreleased experimental baselevel. # 1.2 uses "1.2" for uname -r. echo ${UNAME_MACHINE}-dec-osf`echo ${UNAME_RELEASE} | sed -e 's/^[PVTX]//' | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` # Reset EXIT trap before exiting to avoid spurious non-zero exit code. exitcode=$? trap '' 0 exit $exitcode ;; Alpha\ *:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # Should we change UNAME_MACHINE based on the output of uname instead # of the specific Alpha model? 
echo alpha-pc-interix exit ;; 21064:Windows_NT:50:3) echo alpha-dec-winnt3.5 exit ;; Amiga*:UNIX_System_V:4.0:*) echo m68k-unknown-sysv4 exit ;; *:[Aa]miga[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-amigaos exit ;; *:[Mm]orph[Oo][Ss]:*:*) echo ${UNAME_MACHINE}-unknown-morphos exit ;; *:OS/390:*:*) echo i370-ibm-openedition exit ;; *:z/VM:*:*) echo s390-ibm-zvmoe exit ;; *:OS400:*:*) echo powerpc-ibm-os400 exit ;; arm:RISC*:1.[012]*:*|arm:riscix:1.[012]*:*) echo arm-acorn-riscix${UNAME_RELEASE} exit ;; arm:riscos:*:*|arm:RISCOS:*:*) echo arm-unknown-riscos exit ;; SR2?01:HI-UX/MPP:*:* | SR8000:HI-UX/MPP:*:*) echo hppa1.1-hitachi-hiuxmpp exit ;; Pyramid*:OSx*:*:* | MIS*:OSx*:*:* | MIS*:SMP_DC-OSx*:*:*) # akee@wpdis03.wpafb.af.mil (Earle F. Ake) contributed MIS and NILE. if test "`(/bin/universe) 2>/dev/null`" = att ; then echo pyramid-pyramid-sysv3 else echo pyramid-pyramid-bsd fi exit ;; NILE*:*:*:dcosx) echo pyramid-pyramid-svr4 exit ;; DRS?6000:unix:4.0:6*) echo sparc-icl-nx6 exit ;; DRS?6000:UNIX_SV:4.2*:7* | DRS?6000:isis:4.2*:7*) case `/usr/bin/uname -p` in sparc) echo sparc-icl-nx7; exit ;; esac ;; s390x:SunOS:*:*) echo ${UNAME_MACHINE}-ibm-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4H:SunOS:5.*:*) echo sparc-hal-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:5.*:* | tadpole*:SunOS:5.*:*) echo sparc-sun-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; i86pc:AuroraUX:5.*:* | i86xen:AuroraUX:5.*:*) echo i386-pc-auroraux${UNAME_RELEASE} exit ;; i86pc:SunOS:5.*:* | i86xen:SunOS:5.*:*) eval $set_cc_for_build SUN_ARCH="i386" # If there is a compiler, see if it is configured for 64-bit objects. # Note that the Sun cc does not turn __LP64__ into 1 like gcc does. # This test works for both compilers. if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then if (echo '#ifdef __amd64'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then SUN_ARCH="x86_64" fi fi echo ${SUN_ARCH}-pc-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:6*:*) # According to config.sub, this is the proper way to canonicalize # SunOS6. Hard to guess exactly what SunOS6 will be like, but # it's likely to be more like Solaris than SunOS4. echo sparc-sun-solaris3`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; sun4*:SunOS:*:*) case "`/usr/bin/arch -k`" in Series*|S4*) UNAME_RELEASE=`uname -v` ;; esac # Japanese Language versions have a version number like `4.1.3-JL'. echo sparc-sun-sunos`echo ${UNAME_RELEASE}|sed -e 's/-/_/'` exit ;; sun3*:SunOS:*:*) echo m68k-sun-sunos${UNAME_RELEASE} exit ;; sun*:*:4.2BSD:*) UNAME_RELEASE=`(sed 1q /etc/motd | awk '{print substr($5,1,3)}') 2>/dev/null` test "x${UNAME_RELEASE}" = "x" && UNAME_RELEASE=3 case "`/bin/arch`" in sun3) echo m68k-sun-sunos${UNAME_RELEASE} ;; sun4) echo sparc-sun-sunos${UNAME_RELEASE} ;; esac exit ;; aushp:SunOS:*:*) echo sparc-auspex-sunos${UNAME_RELEASE} exit ;; # The situation for MiNT is a little confusing. The machine name # can be virtually everything (everything which is not # "atarist" or "atariste" at least should have a processor # > m68000). The system name ranges from "MiNT" over "FreeMiNT" # to the lowercase version "mint" (or "freemint"). Finally # the system name "TOS" denotes a system which is actually not # MiNT. But MiNT is downward compatible to TOS, so this should # be no problem. 
atarist[e]:*MiNT:*:* | atarist[e]:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; atari*:*MiNT:*:* | atari*:*mint:*:* | atarist[e]:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; *falcon*:*MiNT:*:* | *falcon*:*mint:*:* | *falcon*:*TOS:*:*) echo m68k-atari-mint${UNAME_RELEASE} exit ;; milan*:*MiNT:*:* | milan*:*mint:*:* | *milan*:*TOS:*:*) echo m68k-milan-mint${UNAME_RELEASE} exit ;; hades*:*MiNT:*:* | hades*:*mint:*:* | *hades*:*TOS:*:*) echo m68k-hades-mint${UNAME_RELEASE} exit ;; *:*MiNT:*:* | *:*mint:*:* | *:*TOS:*:*) echo m68k-unknown-mint${UNAME_RELEASE} exit ;; m68k:machten:*:*) echo m68k-apple-machten${UNAME_RELEASE} exit ;; powerpc:machten:*:*) echo powerpc-apple-machten${UNAME_RELEASE} exit ;; RISC*:Mach:*:*) echo mips-dec-mach_bsd4.3 exit ;; RISC*:ULTRIX:*:*) echo mips-dec-ultrix${UNAME_RELEASE} exit ;; VAX*:ULTRIX*:*:*) echo vax-dec-ultrix${UNAME_RELEASE} exit ;; 2020:CLIX:*:* | 2430:CLIX:*:*) echo clipper-intergraph-clix${UNAME_RELEASE} exit ;; mips:*:*:UMIPS | mips:*:*:RISCos) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #ifdef __cplusplus #include /* for printf() prototype */ int main (int argc, char *argv[]) { #else int main (argc, argv) int argc; char *argv[]; { #endif #if defined (host_mips) && defined (MIPSEB) #if defined (SYSTYPE_SYSV) printf ("mips-mips-riscos%ssysv\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_SVR4) printf ("mips-mips-riscos%ssvr4\n", argv[1]); exit (0); #endif #if defined (SYSTYPE_BSD43) || defined(SYSTYPE_BSD) printf ("mips-mips-riscos%sbsd\n", argv[1]); exit (0); #endif #endif exit (-1); } EOF $CC_FOR_BUILD -o $dummy $dummy.c && dummyarg=`echo "${UNAME_RELEASE}" | sed -n 's/\([0-9]*\).*/\1/p'` && SYSTEM_NAME=`$dummy $dummyarg` && { echo "$SYSTEM_NAME"; exit; } echo mips-mips-riscos${UNAME_RELEASE} exit ;; Motorola:PowerMAX_OS:*:*) echo powerpc-motorola-powermax exit ;; Motorola:*:4.3:PL8-*) echo powerpc-harris-powermax exit ;; Night_Hawk:*:*:PowerMAX_OS | Synergy:PowerMAX_OS:*:*) echo powerpc-harris-powermax exit ;; Night_Hawk:Power_UNIX:*:*) echo powerpc-harris-powerunix exit ;; m88k:CX/UX:7*:*) echo m88k-harris-cxux7 exit ;; m88k:*:4*:R4*) echo m88k-motorola-sysv4 exit ;; m88k:*:3*:R3*) echo m88k-motorola-sysv3 exit ;; AViiON:dgux:*:*) # DG/UX returns AViiON for all architectures UNAME_PROCESSOR=`/usr/bin/uname -p` if [ $UNAME_PROCESSOR = mc88100 ] || [ $UNAME_PROCESSOR = mc88110 ] then if [ ${TARGET_BINARY_INTERFACE}x = m88kdguxelfx ] || \ [ ${TARGET_BINARY_INTERFACE}x = x ] then echo m88k-dg-dgux${UNAME_RELEASE} else echo m88k-dg-dguxbcs${UNAME_RELEASE} fi else echo i586-dg-dgux${UNAME_RELEASE} fi exit ;; M88*:DolphinOS:*:*) # DolphinOS (SVR3) echo m88k-dolphin-sysv3 exit ;; M88*:*:R3*:*) # Delta 88k system running SVR3 echo m88k-motorola-sysv3 exit ;; XD88*:*:*:*) # Tektronix XD88 system running UTekV (SVR3) echo m88k-tektronix-sysv3 exit ;; Tek43[0-9][0-9]:UTek:*:*) # Tektronix 4300 system running UTek (BSD) echo m68k-tektronix-bsd exit ;; *:IRIX*:*:*) echo mips-sgi-irix`echo ${UNAME_RELEASE}|sed -e 's/-/_/g'` exit ;; ????????:AIX?:[12].1:2) # AIX 2.2.1 or AIX 2.1.1 is RT/PC AIX. 
echo romp-ibm-aix # uname -m gives an 8 hex-code CPU id exit ;; # Note that: echo "'`uname -s`'" gives 'AIX ' i*86:AIX:*:*) echo i386-ibm-aix exit ;; ia64:AIX:*:*) if [ -x /usr/bin/oslevel ] ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${UNAME_MACHINE}-ibm-aix${IBM_REV} exit ;; *:AIX:2:3) if grep bos325 /usr/include/stdio.h >/dev/null 2>&1; then eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #include main() { if (!__power_pc()) exit(1); puts("powerpc-ibm-aix3.2.5"); exit(0); } EOF if $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` then echo "$SYSTEM_NAME" else echo rs6000-ibm-aix3.2.5 fi elif grep bos324 /usr/include/stdio.h >/dev/null 2>&1; then echo rs6000-ibm-aix3.2.4 else echo rs6000-ibm-aix3.2 fi exit ;; *:AIX:*:[4567]) IBM_CPU_ID=`/usr/sbin/lsdev -C -c processor -S available | sed 1q | awk '{ print $1 }'` if /usr/sbin/lsattr -El ${IBM_CPU_ID} | grep ' POWER' >/dev/null 2>&1; then IBM_ARCH=rs6000 else IBM_ARCH=powerpc fi if [ -x /usr/bin/oslevel ] ; then IBM_REV=`/usr/bin/oslevel` else IBM_REV=${UNAME_VERSION}.${UNAME_RELEASE} fi echo ${IBM_ARCH}-ibm-aix${IBM_REV} exit ;; *:AIX:*:*) echo rs6000-ibm-aix exit ;; ibmrt:4.4BSD:*|romp-ibm:BSD:*) echo romp-ibm-bsd4.4 exit ;; ibmrt:*BSD:*|romp-ibm:BSD:*) # covers RT/PC BSD and echo romp-ibm-bsd${UNAME_RELEASE} # 4.3 with uname added to exit ;; # report: romp-ibm BSD 4.3 *:BOSX:*:*) echo rs6000-bull-bosx exit ;; DPX/2?00:B.O.S.:*:*) echo m68k-bull-sysv3 exit ;; 9000/[34]??:4.3bsd:1.*:*) echo m68k-hp-bsd exit ;; hp300:4.4BSD:*:* | 9000/[34]??:4.3bsd:2.*:*) echo m68k-hp-bsd4.4 exit ;; 9000/[34678]??:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` case "${UNAME_MACHINE}" in 9000/31? ) HP_ARCH=m68000 ;; 9000/[34]?? ) HP_ARCH=m68k ;; 9000/[678][0-9][0-9]) if [ -x /usr/bin/getconf ]; then sc_cpu_version=`/usr/bin/getconf SC_CPU_VERSION 2>/dev/null` sc_kernel_bits=`/usr/bin/getconf SC_KERNEL_BITS 2>/dev/null` case "${sc_cpu_version}" in 523) HP_ARCH="hppa1.0" ;; # CPU_PA_RISC1_0 528) HP_ARCH="hppa1.1" ;; # CPU_PA_RISC1_1 532) # CPU_PA_RISC2_0 case "${sc_kernel_bits}" in 32) HP_ARCH="hppa2.0n" ;; 64) HP_ARCH="hppa2.0w" ;; '') HP_ARCH="hppa2.0" ;; # HP-UX 10.20 esac ;; esac fi if [ "${HP_ARCH}" = "" ]; then eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #define _HPUX_SOURCE #include #include int main () { #if defined(_SC_KERNEL_BITS) long bits = sysconf(_SC_KERNEL_BITS); #endif long cpu = sysconf (_SC_CPU_VERSION); switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0"); break; case CPU_PA_RISC1_1: puts ("hppa1.1"); break; case CPU_PA_RISC2_0: #if defined(_SC_KERNEL_BITS) switch (bits) { case 64: puts ("hppa2.0w"); break; case 32: puts ("hppa2.0n"); break; default: puts ("hppa2.0"); break; } break; #else /* !defined(_SC_KERNEL_BITS) */ puts ("hppa2.0"); break; #endif default: puts ("hppa1.0"); break; } exit (0); } EOF (CCOPTS= $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null) && HP_ARCH=`$dummy` test -z "$HP_ARCH" && HP_ARCH=hppa fi ;; esac if [ ${HP_ARCH} = "hppa2.0w" ] then eval $set_cc_for_build # hppa2.0w-hp-hpux* has a 64-bit kernel and a compiler generating # 32-bit code. hppa64-hp-hpux* has the same kernel and a compiler # generating 64-bit code. 
GNU and HP use different nomenclature: # # $ CC_FOR_BUILD=cc ./config.guess # => hppa2.0w-hp-hpux11.23 # $ CC_FOR_BUILD="cc +DA2.0w" ./config.guess # => hppa64-hp-hpux11.23 if echo __LP64__ | (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | grep -q __LP64__ then HP_ARCH="hppa2.0w" else HP_ARCH="hppa64" fi fi echo ${HP_ARCH}-hp-hpux${HPUX_REV} exit ;; ia64:HP-UX:*:*) HPUX_REV=`echo ${UNAME_RELEASE}|sed -e 's/[^.]*.[0B]*//'` echo ia64-hp-hpux${HPUX_REV} exit ;; 3050*:HI-UX:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #include int main () { long cpu = sysconf (_SC_CPU_VERSION); /* The order matters, because CPU_IS_HP_MC68K erroneously returns true for CPU_PA_RISC1_0. CPU_IS_PA_RISC returns correct results, however. */ if (CPU_IS_PA_RISC (cpu)) { switch (cpu) { case CPU_PA_RISC1_0: puts ("hppa1.0-hitachi-hiuxwe2"); break; case CPU_PA_RISC1_1: puts ("hppa1.1-hitachi-hiuxwe2"); break; case CPU_PA_RISC2_0: puts ("hppa2.0-hitachi-hiuxwe2"); break; default: puts ("hppa-hitachi-hiuxwe2"); break; } } else if (CPU_IS_HP_MC68K (cpu)) puts ("m68k-hitachi-hiuxwe2"); else puts ("unknown-hitachi-hiuxwe2"); exit (0); } EOF $CC_FOR_BUILD -o $dummy $dummy.c && SYSTEM_NAME=`$dummy` && { echo "$SYSTEM_NAME"; exit; } echo unknown-hitachi-hiuxwe2 exit ;; 9000/7??:4.3bsd:*:* | 9000/8?[79]:4.3bsd:*:* ) echo hppa1.1-hp-bsd exit ;; 9000/8??:4.3bsd:*:*) echo hppa1.0-hp-bsd exit ;; *9??*:MPE/iX:*:* | *3000*:MPE/iX:*:*) echo hppa1.0-hp-mpeix exit ;; hp7??:OSF1:*:* | hp8?[79]:OSF1:*:* ) echo hppa1.1-hp-osf exit ;; hp8??:OSF1:*:*) echo hppa1.0-hp-osf exit ;; i*86:OSF1:*:*) if [ -x /usr/sbin/sysversion ] ; then echo ${UNAME_MACHINE}-unknown-osf1mk else echo ${UNAME_MACHINE}-unknown-osf1 fi exit ;; parisc*:Lites*:*:*) echo hppa1.1-hp-lites exit ;; C1*:ConvexOS:*:* | convex:ConvexOS:C1*:*) echo c1-convex-bsd exit ;; C2*:ConvexOS:*:* | convex:ConvexOS:C2*:*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; C34*:ConvexOS:*:* | convex:ConvexOS:C34*:*) echo c34-convex-bsd exit ;; C38*:ConvexOS:*:* | convex:ConvexOS:C38*:*) echo c38-convex-bsd exit ;; C4*:ConvexOS:*:* | convex:ConvexOS:C4*:*) echo c4-convex-bsd exit ;; CRAY*Y-MP:*:*:*) echo ymp-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*[A-Z]90:*:*:*) echo ${UNAME_MACHINE}-cray-unicos${UNAME_RELEASE} \ | sed -e 's/CRAY.*\([A-Z]90\)/\1/' \ -e y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/ \ -e 's/\.[^.]*$/.X/' exit ;; CRAY*TS:*:*:*) echo t90-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*T3E:*:*:*) echo alphaev5-cray-unicosmk${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; CRAY*SV1:*:*:*) echo sv1-cray-unicos${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; *:UNICOS/mp:*:*) echo craynv-cray-unicosmp${UNAME_RELEASE} | sed -e 's/\.[^.]*$/.X/' exit ;; F30[01]:UNIX_System_V:*:* | F700:UNIX_System_V:*:*) FUJITSU_PROC=`uname -m | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz'` FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | sed -e 's/ /_/'` echo "${FUJITSU_PROC}-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" exit ;; 5000:UNIX_System_V:4.*:*) FUJITSU_SYS=`uname -p | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/\///'` FUJITSU_REL=`echo ${UNAME_RELEASE} | tr 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' 'abcdefghijklmnopqrstuvwxyz' | sed -e 's/ /_/'` echo "sparc-fujitsu-${FUJITSU_SYS}${FUJITSU_REL}" exit ;; i*86:BSD/386:*:* | i*86:BSD/OS:*:* | *:Ascend\ Embedded/OS:*:*) echo 
${UNAME_MACHINE}-pc-bsdi${UNAME_RELEASE} exit ;; sparc*:BSD/OS:*:*) echo sparc-unknown-bsdi${UNAME_RELEASE} exit ;; *:BSD/OS:*:*) echo ${UNAME_MACHINE}-unknown-bsdi${UNAME_RELEASE} exit ;; *:FreeBSD:*:*) UNAME_PROCESSOR=`/usr/bin/uname -p` case ${UNAME_PROCESSOR} in amd64) echo x86_64-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; *) echo ${UNAME_PROCESSOR}-unknown-freebsd`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` ;; esac exit ;; i*:CYGWIN*:*) echo ${UNAME_MACHINE}-pc-cygwin exit ;; *:MINGW*:*) echo ${UNAME_MACHINE}-pc-mingw32 exit ;; i*:MSYS*:*) echo ${UNAME_MACHINE}-pc-msys exit ;; i*:windows32*:*) # uname -m includes "-pc" on this system. echo ${UNAME_MACHINE}-mingw32 exit ;; i*:PW*:*) echo ${UNAME_MACHINE}-pc-pw32 exit ;; *:Interix*:*) case ${UNAME_MACHINE} in x86) echo i586-pc-interix${UNAME_RELEASE} exit ;; authenticamd | genuineintel | EM64T) echo x86_64-unknown-interix${UNAME_RELEASE} exit ;; IA64) echo ia64-unknown-interix${UNAME_RELEASE} exit ;; esac ;; [345]86:Windows_95:* | [345]86:Windows_98:* | [345]86:Windows_NT:*) echo i${UNAME_MACHINE}-pc-mks exit ;; 8664:Windows_NT:*) echo x86_64-pc-mks exit ;; i*:Windows_NT*:* | Pentium*:Windows_NT*:*) # How do we know it's Interix rather than the generic POSIX subsystem? # It also conflicts with pre-2.0 versions of AT&T UWIN. Should we # UNAME_MACHINE based on the output of uname instead of i386? echo i586-pc-interix exit ;; i*:UWIN*:*) echo ${UNAME_MACHINE}-pc-uwin exit ;; amd64:CYGWIN*:*:* | x86_64:CYGWIN*:*:*) echo x86_64-unknown-cygwin exit ;; p*:CYGWIN*:*) echo powerpcle-unknown-cygwin exit ;; prep*:SunOS:5.*:*) echo powerpcle-unknown-solaris2`echo ${UNAME_RELEASE}|sed -e 's/[^.]*//'` exit ;; *:GNU:*:*) # the GNU system echo `echo ${UNAME_MACHINE}|sed -e 's,[-/].*$,,'`-unknown-gnu`echo ${UNAME_RELEASE}|sed -e 's,/.*$,,'` exit ;; *:GNU/*:*:*) # other systems with GNU libc and userland echo ${UNAME_MACHINE}-unknown-`echo ${UNAME_SYSTEM} | sed 's,^[^/]*/,,' | tr '[A-Z]' '[a-z]'``echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'`-gnu exit ;; i*86:Minix:*:*) echo ${UNAME_MACHINE}-pc-minix exit ;; aarch64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; aarch64_be:Linux:*:*) UNAME_MACHINE=aarch64_be echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; alpha:Linux:*:*) case `sed -n '/^cpu model/s/^.*: \(.*\)/\1/p' < /proc/cpuinfo` in EV5) UNAME_MACHINE=alphaev5 ;; EV56) UNAME_MACHINE=alphaev56 ;; PCA56) UNAME_MACHINE=alphapca56 ;; PCA57) UNAME_MACHINE=alphapca56 ;; EV6) UNAME_MACHINE=alphaev6 ;; EV67) UNAME_MACHINE=alphaev67 ;; EV68*) UNAME_MACHINE=alphaev68 ;; esac objdump --private-headers /bin/sh | grep -q ld.so.1 if test "$?" 
= 0 ; then LIBC="libc1" ; else LIBC="" ; fi echo ${UNAME_MACHINE}-unknown-linux-gnu${LIBC} exit ;; arm*:Linux:*:*) eval $set_cc_for_build if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_EABI__ then echo ${UNAME_MACHINE}-unknown-linux-gnu else if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \ | grep -q __ARM_PCS_VFP then echo ${UNAME_MACHINE}-unknown-linux-gnueabi else echo ${UNAME_MACHINE}-unknown-linux-gnueabihf fi fi exit ;; avr32*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; cris:Linux:*:*) echo ${UNAME_MACHINE}-axis-linux-gnu exit ;; crisv32:Linux:*:*) echo ${UNAME_MACHINE}-axis-linux-gnu exit ;; frv:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; hexagon:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; i*86:Linux:*:*) LIBC=gnu eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #ifdef __dietlibc__ LIBC=dietlibc #endif EOF eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^LIBC'` echo "${UNAME_MACHINE}-pc-linux-${LIBC}" exit ;; ia64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; m32r*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; m68*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; mips:Linux:*:* | mips64:Linux:*:*) eval $set_cc_for_build sed 's/^ //' << EOF >$dummy.c #undef CPU #undef ${UNAME_MACHINE} #undef ${UNAME_MACHINE}el #if defined(__MIPSEL__) || defined(__MIPSEL) || defined(_MIPSEL) || defined(MIPSEL) CPU=${UNAME_MACHINE}el #else #if defined(__MIPSEB__) || defined(__MIPSEB) || defined(_MIPSEB) || defined(MIPSEB) CPU=${UNAME_MACHINE} #else CPU= #endif #endif EOF eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'` test x"${CPU}" != x && { echo "${CPU}-unknown-linux-gnu"; exit; } ;; or32:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; padre:Linux:*:*) echo sparc-unknown-linux-gnu exit ;; parisc64:Linux:*:* | hppa64:Linux:*:*) echo hppa64-unknown-linux-gnu exit ;; parisc:Linux:*:* | hppa:Linux:*:*) # Look for CPU level case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in PA7*) echo hppa1.1-unknown-linux-gnu ;; PA8*) echo hppa2.0-unknown-linux-gnu ;; *) echo hppa-unknown-linux-gnu ;; esac exit ;; ppc64:Linux:*:*) echo powerpc64-unknown-linux-gnu exit ;; ppc:Linux:*:*) echo powerpc-unknown-linux-gnu exit ;; s390:Linux:*:* | s390x:Linux:*:*) echo ${UNAME_MACHINE}-ibm-linux exit ;; sh64*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; sh*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; sparc:Linux:*:* | sparc64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; tile*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; vax:Linux:*:*) echo ${UNAME_MACHINE}-dec-linux-gnu exit ;; x86_64:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; xtensa*:Linux:*:*) echo ${UNAME_MACHINE}-unknown-linux-gnu exit ;; i*86:DYNIX/ptx:4*:*) # ptx 4.0 does uname -s correctly, with DYNIX/ptx in there. # earlier versions are messed up and put the nodename in both # sysname and nodename. echo i386-sequent-sysv4 exit ;; i*86:UNIX_SV:4.2MP:2.*) # Unixware is an offshoot of SVR4, but it has its own version # number series starting with 2... # I am not positive that other SVR4 systems won't match this, # I just have to hope. -- rms. # Use sysv4.2uw... so that sysv4* matches it. echo ${UNAME_MACHINE}-pc-sysv4.2uw${UNAME_VERSION} exit ;; i*86:OS/2:*:*) # If we were able to find `uname', then EMX Unix compatibility # is probably installed. 
echo ${UNAME_MACHINE}-pc-os2-emx exit ;; i*86:XTS-300:*:STOP) echo ${UNAME_MACHINE}-unknown-stop exit ;; i*86:atheos:*:*) echo ${UNAME_MACHINE}-unknown-atheos exit ;; i*86:syllable:*:*) echo ${UNAME_MACHINE}-pc-syllable exit ;; i*86:LynxOS:2.*:* | i*86:LynxOS:3.[01]*:* | i*86:LynxOS:4.[02]*:*) echo i386-unknown-lynxos${UNAME_RELEASE} exit ;; i*86:*DOS:*:*) echo ${UNAME_MACHINE}-pc-msdosdjgpp exit ;; i*86:*:4.*:* | i*86:SYSTEM_V:4.*:*) UNAME_REL=`echo ${UNAME_RELEASE} | sed 's/\/MP$//'` if grep Novell /usr/include/link.h >/dev/null 2>/dev/null; then echo ${UNAME_MACHINE}-univel-sysv${UNAME_REL} else echo ${UNAME_MACHINE}-pc-sysv${UNAME_REL} fi exit ;; i*86:*:5:[678]*) # UnixWare 7.x, OpenUNIX and OpenServer 6. case `/bin/uname -X | grep "^Machine"` in *486*) UNAME_MACHINE=i486 ;; *Pentium) UNAME_MACHINE=i586 ;; *Pent*|*Celeron) UNAME_MACHINE=i686 ;; esac echo ${UNAME_MACHINE}-unknown-sysv${UNAME_RELEASE}${UNAME_SYSTEM}${UNAME_VERSION} exit ;; i*86:*:3.2:*) if test -f /usr/options/cb.name; then UNAME_REL=`sed -n 's/.*Version //p' /dev/null >/dev/null ; then UNAME_REL=`(/bin/uname -X|grep Release|sed -e 's/.*= //')` (/bin/uname -X|grep i80486 >/dev/null) && UNAME_MACHINE=i486 (/bin/uname -X|grep '^Machine.*Pentium' >/dev/null) \ && UNAME_MACHINE=i586 (/bin/uname -X|grep '^Machine.*Pent *II' >/dev/null) \ && UNAME_MACHINE=i686 (/bin/uname -X|grep '^Machine.*Pentium Pro' >/dev/null) \ && UNAME_MACHINE=i686 echo ${UNAME_MACHINE}-pc-sco$UNAME_REL else echo ${UNAME_MACHINE}-pc-sysv32 fi exit ;; pc:*:*:*) # Left here for compatibility: # uname -m prints for DJGPP always 'pc', but it prints nothing about # the processor, so we play safe by assuming i586. # Note: whatever this is, it MUST be the same as what config.sub # prints for the "djgpp" host, or else GDB configury will decide that # this is a cross-build. echo i586-pc-msdosdjgpp exit ;; Intel:Mach:3*:*) echo i386-pc-mach3 exit ;; paragon:*:*:*) echo i860-intel-osf1 exit ;; i860:*:4.*:*) # i860-SVR4 if grep Stardent /usr/include/sys/uadmin.h >/dev/null 2>&1 ; then echo i860-stardent-sysv${UNAME_RELEASE} # Stardent Vistra i860-SVR4 else # Add other i860-SVR4 vendors below as they are discovered. 
echo i860-unknown-sysv${UNAME_RELEASE} # Unknown i860-SVR4 fi exit ;; mini*:CTIX:SYS*5:*) # "miniframe" echo m68010-convergent-sysv exit ;; mc68k:UNIX:SYSTEM5:3.51m) echo m68k-convergent-sysv exit ;; M680?0:D-NIX:5.3:*) echo m68k-diab-dnix exit ;; M68*:*:R3V[5678]*:*) test -r /sysV68 && { echo 'm68k-motorola-sysv'; exit; } ;; 3[345]??:*:4.0:3.0 | 3[34]??A:*:4.0:3.0 | 3[34]??,*:*:4.0:3.0 | 3[34]??/*:*:4.0:3.0 | 4400:*:4.0:3.0 | 4850:*:4.0:3.0 | SKA40:*:4.0:3.0 | SDS2:*:4.0:3.0 | SHG2:*:4.0:3.0 | S7501*:*:4.0:3.0) OS_REL='' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; 3[34]??:*:4.0:* | 3[34]??,*:*:4.0:*) /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4; exit; } ;; NCR*:*:4.2:* | MPRAS*:*:4.2:*) OS_REL='.3' test -r /etc/.relid \ && OS_REL=.`sed -n 's/[^ ]* [^ ]* \([0-9][0-9]\).*/\1/p' < /etc/.relid` /bin/uname -p 2>/dev/null | grep 86 >/dev/null \ && { echo i486-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep entium >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } /bin/uname -p 2>/dev/null | /bin/grep pteron >/dev/null \ && { echo i586-ncr-sysv4.3${OS_REL}; exit; } ;; m68*:LynxOS:2.*:* | m68*:LynxOS:3.0*:*) echo m68k-unknown-lynxos${UNAME_RELEASE} exit ;; mc68030:UNIX_System_V:4.*:*) echo m68k-atari-sysv4 exit ;; TSUNAMI:LynxOS:2.*:*) echo sparc-unknown-lynxos${UNAME_RELEASE} exit ;; rs6000:LynxOS:2.*:*) echo rs6000-unknown-lynxos${UNAME_RELEASE} exit ;; PowerPC:LynxOS:2.*:* | PowerPC:LynxOS:3.[01]*:* | PowerPC:LynxOS:4.[02]*:*) echo powerpc-unknown-lynxos${UNAME_RELEASE} exit ;; SM[BE]S:UNIX_SV:*:*) echo mips-dde-sysv${UNAME_RELEASE} exit ;; RM*:ReliantUNIX-*:*:*) echo mips-sni-sysv4 exit ;; RM*:SINIX-*:*:*) echo mips-sni-sysv4 exit ;; *:SINIX-*:*:*) if uname -p 2>/dev/null >/dev/null ; then UNAME_MACHINE=`(uname -p) 2>/dev/null` echo ${UNAME_MACHINE}-sni-sysv4 else echo ns32k-sni-sysv fi exit ;; PENTIUM:*:4.0*:*) # Unisys `ClearPath HMP IX 4000' SVR4/MP effort # says echo i586-unisys-sysv4 exit ;; *:UNIX_System_V:4*:FTX*) # From Gerald Hewes . # How about differentiating between stratus architectures? -djm echo hppa1.1-stratus-sysv4 exit ;; *:*:*:FTX*) # From seanf@swdc.stratus.com. echo i860-stratus-sysv4 exit ;; i*86:VOS:*:*) # From Paul.Green@stratus.com. echo ${UNAME_MACHINE}-stratus-vos exit ;; *:VOS:*:*) # From Paul.Green@stratus.com. echo hppa1.1-stratus-vos exit ;; mc68*:A/UX:*:*) echo m68k-apple-aux${UNAME_RELEASE} exit ;; news*:NEWS-OS:6*:*) echo mips-sony-newsos6 exit ;; R[34]000:*System_V*:*:* | R4000:UNIX_SYSV:*:* | R*000:UNIX_SV:*:*) if [ -d /usr/nec ]; then echo mips-nec-sysv${UNAME_RELEASE} else echo mips-unknown-sysv${UNAME_RELEASE} fi exit ;; BeBox:BeOS:*:*) # BeOS running on hardware made by Be, PPC only. echo powerpc-be-beos exit ;; BeMac:BeOS:*:*) # BeOS running on Mac or Mac clone, PPC only. echo powerpc-apple-beos exit ;; BePC:BeOS:*:*) # BeOS running on Intel PC compatible. echo i586-pc-beos exit ;; BePC:Haiku:*:*) # Haiku running on Intel PC compatible. 
echo i586-pc-haiku exit ;; SX-4:SUPER-UX:*:*) echo sx4-nec-superux${UNAME_RELEASE} exit ;; SX-5:SUPER-UX:*:*) echo sx5-nec-superux${UNAME_RELEASE} exit ;; SX-6:SUPER-UX:*:*) echo sx6-nec-superux${UNAME_RELEASE} exit ;; SX-7:SUPER-UX:*:*) echo sx7-nec-superux${UNAME_RELEASE} exit ;; SX-8:SUPER-UX:*:*) echo sx8-nec-superux${UNAME_RELEASE} exit ;; SX-8R:SUPER-UX:*:*) echo sx8r-nec-superux${UNAME_RELEASE} exit ;; Power*:Rhapsody:*:*) echo powerpc-apple-rhapsody${UNAME_RELEASE} exit ;; *:Rhapsody:*:*) echo ${UNAME_MACHINE}-apple-rhapsody${UNAME_RELEASE} exit ;; *:Darwin:*:*) UNAME_PROCESSOR=`uname -p` || UNAME_PROCESSOR=unknown case $UNAME_PROCESSOR in i386) eval $set_cc_for_build if [ "$CC_FOR_BUILD" != 'no_compiler_found' ]; then if (echo '#ifdef __LP64__'; echo IS_64BIT_ARCH; echo '#endif') | \ (CCOPTS= $CC_FOR_BUILD -E - 2>/dev/null) | \ grep IS_64BIT_ARCH >/dev/null then UNAME_PROCESSOR="x86_64" fi fi ;; unknown) UNAME_PROCESSOR=powerpc ;; esac echo ${UNAME_PROCESSOR}-apple-darwin${UNAME_RELEASE} exit ;; *:procnto*:*:* | *:QNX:[0123456789]*:*) UNAME_PROCESSOR=`uname -p` if test "$UNAME_PROCESSOR" = "x86"; then UNAME_PROCESSOR=i386 UNAME_MACHINE=pc fi echo ${UNAME_PROCESSOR}-${UNAME_MACHINE}-nto-qnx${UNAME_RELEASE} exit ;; *:QNX:*:4*) echo i386-pc-qnx exit ;; NEO-?:NONSTOP_KERNEL:*:*) echo neo-tandem-nsk${UNAME_RELEASE} exit ;; NSE-?:NONSTOP_KERNEL:*:*) echo nse-tandem-nsk${UNAME_RELEASE} exit ;; NSR-?:NONSTOP_KERNEL:*:*) echo nsr-tandem-nsk${UNAME_RELEASE} exit ;; *:NonStop-UX:*:*) echo mips-compaq-nonstopux exit ;; BS2000:POSIX*:*:*) echo bs2000-siemens-sysv exit ;; DS/*:UNIX_System_V:*:*) echo ${UNAME_MACHINE}-${UNAME_SYSTEM}-${UNAME_RELEASE} exit ;; *:Plan9:*:*) # "uname -m" is not consistent, so use $cputype instead. 386 # is converted to i386 for consistency with other x86 # operating systems. if test "$cputype" = "386"; then UNAME_MACHINE=i386 else UNAME_MACHINE="$cputype" fi echo ${UNAME_MACHINE}-unknown-plan9 exit ;; *:TOPS-10:*:*) echo pdp10-unknown-tops10 exit ;; *:TENEX:*:*) echo pdp10-unknown-tenex exit ;; KS10:TOPS-20:*:* | KL10:TOPS-20:*:* | TYPE4:TOPS-20:*:*) echo pdp10-dec-tops20 exit ;; XKL-1:TOPS-20:*:* | TYPE5:TOPS-20:*:*) echo pdp10-xkl-tops20 exit ;; *:TOPS-20:*:*) echo pdp10-unknown-tops20 exit ;; *:ITS:*:*) echo pdp10-unknown-its exit ;; SEI:*:*:SEIUX) echo mips-sei-seiux${UNAME_RELEASE} exit ;; *:DragonFly:*:*) echo ${UNAME_MACHINE}-unknown-dragonfly`echo ${UNAME_RELEASE}|sed -e 's/[-(].*//'` exit ;; *:*VMS:*:*) UNAME_MACHINE=`(uname -p) 2>/dev/null` case "${UNAME_MACHINE}" in A*) echo alpha-dec-vms ; exit ;; I*) echo ia64-dec-vms ; exit ;; V*) echo vax-dec-vms ; exit ;; esac ;; *:XENIX:*:SysV) echo i386-pc-xenix exit ;; i*86:skyos:*:*) echo ${UNAME_MACHINE}-pc-skyos`echo ${UNAME_RELEASE}` | sed -e 's/ .*$//' exit ;; i*86:rdos:*:*) echo ${UNAME_MACHINE}-pc-rdos exit ;; i*86:AROS:*:*) echo ${UNAME_MACHINE}-pc-aros exit ;; x86_64:VMkernel:*:*) echo ${UNAME_MACHINE}-unknown-esx exit ;; esac #echo '(No uname command or uname output not recognized.)' 1>&2 #echo "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" 1>&2 eval $set_cc_for_build cat >$dummy.c < # include #endif main () { #if defined (sony) #if defined (MIPSEB) /* BFD wants "bsd" instead of "newsos". Perhaps BFD should be changed, I don't know.... 
*/ printf ("mips-sony-bsd\n"); exit (0); #else #include printf ("m68k-sony-newsos%s\n", #ifdef NEWSOS4 "4" #else "" #endif ); exit (0); #endif #endif #if defined (__arm) && defined (__acorn) && defined (__unix) printf ("arm-acorn-riscix\n"); exit (0); #endif #if defined (hp300) && !defined (hpux) printf ("m68k-hp-bsd\n"); exit (0); #endif #if defined (NeXT) #if !defined (__ARCHITECTURE__) #define __ARCHITECTURE__ "m68k" #endif int version; version=`(hostinfo | sed -n 's/.*NeXT Mach \([0-9]*\).*/\1/p') 2>/dev/null`; if (version < 4) printf ("%s-next-nextstep%d\n", __ARCHITECTURE__, version); else printf ("%s-next-openstep%d\n", __ARCHITECTURE__, version); exit (0); #endif #if defined (MULTIMAX) || defined (n16) #if defined (UMAXV) printf ("ns32k-encore-sysv\n"); exit (0); #else #if defined (CMU) printf ("ns32k-encore-mach\n"); exit (0); #else printf ("ns32k-encore-bsd\n"); exit (0); #endif #endif #endif #if defined (__386BSD__) printf ("i386-pc-bsd\n"); exit (0); #endif #if defined (sequent) #if defined (i386) printf ("i386-sequent-dynix\n"); exit (0); #endif #if defined (ns32000) printf ("ns32k-sequent-dynix\n"); exit (0); #endif #endif #if defined (_SEQUENT_) struct utsname un; uname(&un); if (strncmp(un.version, "V2", 2) == 0) { printf ("i386-sequent-ptx2\n"); exit (0); } if (strncmp(un.version, "V1", 2) == 0) { /* XXX is V1 correct? */ printf ("i386-sequent-ptx1\n"); exit (0); } printf ("i386-sequent-ptx\n"); exit (0); #endif #if defined (vax) # if !defined (ultrix) # include # if defined (BSD) # if BSD == 43 printf ("vax-dec-bsd4.3\n"); exit (0); # else # if BSD == 199006 printf ("vax-dec-bsd4.3reno\n"); exit (0); # else printf ("vax-dec-bsd\n"); exit (0); # endif # endif # else printf ("vax-dec-bsd\n"); exit (0); # endif # else printf ("vax-dec-ultrix\n"); exit (0); # endif #endif #if defined (alliant) && defined (i860) printf ("i860-alliant-bsd\n"); exit (0); #endif exit (1); } EOF $CC_FOR_BUILD -o $dummy $dummy.c 2>/dev/null && SYSTEM_NAME=`$dummy` && { echo "$SYSTEM_NAME"; exit; } # Apollos put the system type in the environment. test -d /usr/apollo && { echo ${ISP}-apollo-${SYSTYPE}; exit; } # Convex versions that predate uname can use getsysinfo(1) if [ -x /usr/convex/getsysinfo ] then case `getsysinfo -f cpu_type` in c1*) echo c1-convex-bsd exit ;; c2*) if getsysinfo -f scalar_acc then echo c32-convex-bsd else echo c2-convex-bsd fi exit ;; c34*) echo c34-convex-bsd exit ;; c38*) echo c38-convex-bsd exit ;; c4*) echo c4-convex-bsd exit ;; esac fi cat >&2 < in order to provide the needed information to handle your system. 
config.guess timestamp = $timestamp uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null` /bin/uname -X = `(/bin/uname -X) 2>/dev/null` hostinfo = `(hostinfo) 2>/dev/null` /bin/universe = `(/bin/universe) 2>/dev/null` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null` /bin/arch = `(/bin/arch) 2>/dev/null` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null` UNAME_MACHINE = ${UNAME_MACHINE} UNAME_RELEASE = ${UNAME_RELEASE} UNAME_SYSTEM = ${UNAME_SYSTEM} UNAME_VERSION = ${UNAME_VERSION} EOF exit 1 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/config.h.in000066400000000000000000000034611346026536000222140ustar00rootroot00000000000000/* build-aux/config.h.in. Generated from configure.ac by autoheader. */ /* Define to 1 if you have the header file. */ #undef HAVE_DLFCN_H /* Define to 1 if you have the header file. */ #undef HAVE_INTTYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_MEMORY_H /* Define if you have POSIX threads libraries and header files. */ #undef HAVE_PTHREAD /* Define to 1 if you have the header file. */ #undef HAVE_STDINT_H /* Define to 1 if you have the header file. */ #undef HAVE_STDLIB_H /* Define to 1 if you have the header file. */ #undef HAVE_STRINGS_H /* Define to 1 if you have the header file. */ #undef HAVE_STRING_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_STAT_H /* Define to 1 if you have the header file. */ #undef HAVE_SYS_TYPES_H /* Define to 1 if you have the header file. */ #undef HAVE_UNISTD_H /* Define to the sub-directory in which libtool stores uninstalled libraries. */ #undef LT_OBJDIR /* Name of package */ #undef PACKAGE /* Define to the address where bug reports for this package should be sent. */ #undef PACKAGE_BUGREPORT /* Define to the full name of this package. */ #undef PACKAGE_NAME /* Define to the full name and version of this package. */ #undef PACKAGE_STRING /* Define to the one symbol short name of this package. */ #undef PACKAGE_TARNAME /* Define to the home page for this package. */ #undef PACKAGE_URL /* Define to the version of this package. */ #undef PACKAGE_VERSION /* Define to necessary symbol if this constant uses a non-standard name on your system. */ #undef PTHREAD_CREATE_JOINABLE /* Define to 1 if you have the ANSI C header files. */ #undef STDC_HEADERS /* Version number of package */ #undef VERSION ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/config.sub000077500000000000000000001051761346026536000221620ustar00rootroot00000000000000#! /bin/sh # Configuration validation subroutine script. # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, # 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, # 2011, 2012 Free Software Foundation, Inc. timestamp='2012-02-10' # This file is (in principle) common to ALL GNU software. # The presence of a machine in this file suggests that SOME GNU software # can handle that machine. It does not imply ALL GNU software can. 
# # This file is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. # # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with this program; if not, see . # # As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Please send patches to . Submit a context # diff and a properly formatted GNU ChangeLog entry. # # Configuration subroutine to validate and canonicalize a configuration type. # Supply the specified configuration type as an argument. # If it is invalid, we print an error message on stderr and exit with code 1. # Otherwise, we print the canonical config type on stdout and succeed. # You can get the latest version of this script from: # http://git.savannah.gnu.org/gitweb/?p=config.git;a=blob_plain;f=config.sub;hb=HEAD # This file is supposed to be the same for all GNU packages # and recognize all the CPU types, system types and aliases # that are meaningful with *any* GNU software. # Each package is responsible for reporting which valid configurations # it does not support. The user should be able to distinguish # a failure to support a valid configuration from a meaningless # configuration. # The goal of this file is to map all the various variations of a given # machine specification into a single specification in the form: # CPU_TYPE-MANUFACTURER-OPERATING_SYSTEM # or in some cases, the newer four-part form: # CPU_TYPE-MANUFACTURER-KERNEL-OPERATING_SYSTEM # It is wrong to echo any other type of specification. me=`echo "$0" | sed -e 's,.*/,,'` usage="\ Usage: $0 [OPTION] CPU-MFR-OPSYS $0 [OPTION] ALIAS Canonicalize a configuration name. Operation modes: -h, --help print this help, then exit -t, --time-stamp print date of last modification, then exit -v, --version print version number, then exit Report bugs and patches to ." version="\ GNU config.sub ($timestamp) Copyright (C) 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc. This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE." help=" Try \`$me --help' for more information." # Parse command line while test $# -gt 0 ; do case $1 in --time-stamp | --time* | -t ) echo "$timestamp" ; exit ;; --version | -v ) echo "$version" ; exit ;; --help | --h* | -h ) echo "$usage"; exit ;; -- ) # Stop option processing shift; break ;; - ) # Use stdin as input. break ;; -* ) echo "$me: invalid option $1$help" exit 1 ;; *local*) # First pass through any local machine types. echo $1 exit ;; * ) break ;; esac done case $# in 0) echo "$me: missing argument$help" >&2 exit 1;; 1) ;; *) echo "$me: too many arguments$help" >&2 exit 1;; esac # Separate what the user gave into CPU-COMPANY and OS or KERNEL-OS (if any). # Here we must recognize all the valid KERNEL-OS combinations. 
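# For example (illustrative only): given an argument such as
#   x86_64-unknown-linux-gnu
# the sed expression below peels off the two-part KERNEL-OS suffix, leaving
# os=-linux-gnu and basic_machine=x86_64-unknown for later canonicalization.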
maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'` case $maybe_os in nto-qnx* | linux-gnu* | linux-android* | linux-dietlibc | linux-newlib* | \ linux-uclibc* | uclinux-uclibc* | uclinux-gnu* | kfreebsd*-gnu* | \ knetbsd*-gnu* | netbsd*-gnu* | \ kopensolaris*-gnu* | \ storm-chaos* | os2-emx* | rtmk-nova*) os=-$maybe_os basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'` ;; android-linux) os=-linux-android basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`-unknown ;; *) basic_machine=`echo $1 | sed 's/-[^-]*$//'` if [ $basic_machine != $1 ] then os=`echo $1 | sed 's/.*-/-/'` else os=; fi ;; esac ### Let's recognize common machines as not being operating systems so ### that things like config.sub decstation-3100 work. We also ### recognize some manufacturers as not being operating systems, so we ### can provide default operating systems below. case $os in -sun*os*) # Prevent following clause from handling this invalid input. ;; -dec* | -mips* | -sequent* | -encore* | -pc532* | -sgi* | -sony* | \ -att* | -7300* | -3300* | -delta* | -motorola* | -sun[234]* | \ -unicom* | -ibm* | -next | -hp | -isi* | -apollo | -altos* | \ -convergent* | -ncr* | -news | -32* | -3600* | -3100* | -hitachi* |\ -c[123]* | -convex* | -sun | -crds | -omron* | -dg | -ultra | -tti* | \ -harris | -dolphin | -highlevel | -gould | -cbm | -ns | -masscomp | \ -apple | -axis | -knuth | -cray | -microblaze) os= basic_machine=$1 ;; -bluegene*) os=-cnk ;; -sim | -cisco | -oki | -wec | -winbond) os= basic_machine=$1 ;; -scout) ;; -wrs) os=-vxworks basic_machine=$1 ;; -chorusos*) os=-chorusos basic_machine=$1 ;; -chorusrdb) os=-chorusrdb basic_machine=$1 ;; -hiux*) os=-hiuxwe2 ;; -sco6) os=-sco5v6 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5) os=-sco3.2v5 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco4) os=-sco3.2v4 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2.[4-9]*) os=`echo $os | sed -e 's/sco3.2./sco3.2v/'` basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco3.2v[4-9]*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco5v6*) # Don't forget version if it is 3.2v4 or newer. basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -sco*) os=-sco3.2v2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -udk*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -isc) os=-isc2.2 basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -clix*) basic_machine=clipper-intergraph ;; -isc*) basic_machine=`echo $1 | sed -e 's/86-.*/86-pc/'` ;; -lynx*) os=-lynxos ;; -ptx*) basic_machine=`echo $1 | sed -e 's/86-.*/86-sequent/'` ;; -windowsnt*) os=`echo $os | sed -e 's/windowsnt/winnt/'` ;; -psos*) os=-psos ;; -mint | -mint[0-9]*) basic_machine=m68k-atari os=-mint ;; esac # Decode aliases for certain CPU-COMPANY combinations. case $basic_machine in # Recognize the basic CPU types without company name. # Some are omitted here because they have special meanings below. 
1750a | 580 \ | a29k \ | aarch64 | aarch64_be \ | alpha | alphaev[4-8] | alphaev56 | alphaev6[78] | alphapca5[67] \ | alpha64 | alpha64ev[4-8] | alpha64ev56 | alpha64ev6[78] | alpha64pca5[67] \ | am33_2.0 \ | arc | arm | arm[bl]e | arme[lb] | armv[2345] | armv[345][lb] | avr | avr32 \ | be32 | be64 \ | bfin \ | c4x | clipper \ | d10v | d30v | dlx | dsp16xx \ | epiphany \ | fido | fr30 | frv \ | h8300 | h8500 | hppa | hppa1.[01] | hppa2.0 | hppa2.0[nw] | hppa64 \ | hexagon \ | i370 | i860 | i960 | ia64 \ | ip2k | iq2000 \ | le32 | le64 \ | lm32 \ | m32c | m32r | m32rle | m68000 | m68k | m88k \ | maxq | mb | microblaze | mcore | mep | metag \ | mips | mipsbe | mipseb | mipsel | mipsle \ | mips16 \ | mips64 | mips64el \ | mips64octeon | mips64octeonel \ | mips64orion | mips64orionel \ | mips64r5900 | mips64r5900el \ | mips64vr | mips64vrel \ | mips64vr4100 | mips64vr4100el \ | mips64vr4300 | mips64vr4300el \ | mips64vr5000 | mips64vr5000el \ | mips64vr5900 | mips64vr5900el \ | mipsisa32 | mipsisa32el \ | mipsisa32r2 | mipsisa32r2el \ | mipsisa64 | mipsisa64el \ | mipsisa64r2 | mipsisa64r2el \ | mipsisa64sb1 | mipsisa64sb1el \ | mipsisa64sr71k | mipsisa64sr71kel \ | mipstx39 | mipstx39el \ | mn10200 | mn10300 \ | moxie \ | mt \ | msp430 \ | nds32 | nds32le | nds32be \ | nios | nios2 \ | ns16k | ns32k \ | open8 \ | or32 \ | pdp10 | pdp11 | pj | pjl \ | powerpc | powerpc64 | powerpc64le | powerpcle \ | pyramid \ | rl78 | rx \ | score \ | sh | sh[1234] | sh[24]a | sh[24]aeb | sh[23]e | sh[34]eb | sheb | shbe | shle | sh[1234]le | sh3ele \ | sh64 | sh64le \ | sparc | sparc64 | sparc64b | sparc64v | sparc86x | sparclet | sparclite \ | sparcv8 | sparcv9 | sparcv9b | sparcv9v \ | spu \ | tahoe | tic4x | tic54x | tic55x | tic6x | tic80 | tron \ | ubicom32 \ | v850 | v850e | v850e1 | v850e2 | v850es | v850e2v3 \ | we32k \ | x86 | xc16x | xstormy16 | xtensa \ | z8k | z80) basic_machine=$basic_machine-unknown ;; c54x) basic_machine=tic54x-unknown ;; c55x) basic_machine=tic55x-unknown ;; c6x) basic_machine=tic6x-unknown ;; m6811 | m68hc11 | m6812 | m68hc12 | m68hcs12x | picochip) basic_machine=$basic_machine-unknown os=-none ;; m88110 | m680[12346]0 | m683?2 | m68360 | m5200 | v70 | w65 | z8k) ;; ms1) basic_machine=mt-unknown ;; strongarm | thumb | xscale) basic_machine=arm-unknown ;; xgate) basic_machine=$basic_machine-unknown os=-none ;; xscaleeb) basic_machine=armeb-unknown ;; xscaleel) basic_machine=armel-unknown ;; # We use `pc' rather than `unknown' # because (1) that's what they normally are, and # (2) the word "unknown" tends to confuse beginning users. i*86 | x86_64) basic_machine=$basic_machine-pc ;; # Object if more than one company name word. *-*-*) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; # Recognize the basic CPU types with company name. 
580-* \ | a29k-* \ | aarch64-* | aarch64_be-* \ | alpha-* | alphaev[4-8]-* | alphaev56-* | alphaev6[78]-* \ | alpha64-* | alpha64ev[4-8]-* | alpha64ev56-* | alpha64ev6[78]-* \ | alphapca5[67]-* | alpha64pca5[67]-* | arc-* \ | arm-* | armbe-* | armle-* | armeb-* | armv*-* \ | avr-* | avr32-* \ | be32-* | be64-* \ | bfin-* | bs2000-* \ | c[123]* | c30-* | [cjt]90-* | c4x-* \ | clipper-* | craynv-* | cydra-* \ | d10v-* | d30v-* | dlx-* \ | elxsi-* \ | f30[01]-* | f700-* | fido-* | fr30-* | frv-* | fx80-* \ | h8300-* | h8500-* \ | hppa-* | hppa1.[01]-* | hppa2.0-* | hppa2.0[nw]-* | hppa64-* \ | hexagon-* \ | i*86-* | i860-* | i960-* | ia64-* \ | ip2k-* | iq2000-* \ | le32-* | le64-* \ | lm32-* \ | m32c-* | m32r-* | m32rle-* \ | m68000-* | m680[012346]0-* | m68360-* | m683?2-* | m68k-* \ | m88110-* | m88k-* | maxq-* | mcore-* | metag-* | microblaze-* \ | mips-* | mipsbe-* | mipseb-* | mipsel-* | mipsle-* \ | mips16-* \ | mips64-* | mips64el-* \ | mips64octeon-* | mips64octeonel-* \ | mips64orion-* | mips64orionel-* \ | mips64r5900-* | mips64r5900el-* \ | mips64vr-* | mips64vrel-* \ | mips64vr4100-* | mips64vr4100el-* \ | mips64vr4300-* | mips64vr4300el-* \ | mips64vr5000-* | mips64vr5000el-* \ | mips64vr5900-* | mips64vr5900el-* \ | mipsisa32-* | mipsisa32el-* \ | mipsisa32r2-* | mipsisa32r2el-* \ | mipsisa64-* | mipsisa64el-* \ | mipsisa64r2-* | mipsisa64r2el-* \ | mipsisa64sb1-* | mipsisa64sb1el-* \ | mipsisa64sr71k-* | mipsisa64sr71kel-* \ | mipstx39-* | mipstx39el-* \ | mmix-* \ | mt-* \ | msp430-* \ | nds32-* | nds32le-* | nds32be-* \ | nios-* | nios2-* \ | none-* | np1-* | ns16k-* | ns32k-* \ | open8-* \ | orion-* \ | pdp10-* | pdp11-* | pj-* | pjl-* | pn-* | power-* \ | powerpc-* | powerpc64-* | powerpc64le-* | powerpcle-* \ | pyramid-* \ | rl78-* | romp-* | rs6000-* | rx-* \ | sh-* | sh[1234]-* | sh[24]a-* | sh[24]aeb-* | sh[23]e-* | sh[34]eb-* | sheb-* | shbe-* \ | shle-* | sh[1234]le-* | sh3ele-* | sh64-* | sh64le-* \ | sparc-* | sparc64-* | sparc64b-* | sparc64v-* | sparc86x-* | sparclet-* \ | sparclite-* \ | sparcv8-* | sparcv9-* | sparcv9b-* | sparcv9v-* | sv1-* | sx?-* \ | tahoe-* \ | tic30-* | tic4x-* | tic54x-* | tic55x-* | tic6x-* | tic80-* \ | tile*-* \ | tron-* \ | ubicom32-* \ | v850-* | v850e-* | v850e1-* | v850es-* | v850e2-* | v850e2v3-* \ | vax-* \ | we32k-* \ | x86-* | x86_64-* | xc16x-* | xps100-* \ | xstormy16-* | xtensa*-* \ | ymp-* \ | z8k-* | z80-*) ;; # Recognize the basic CPU types without company name, with glob match. xtensa*) basic_machine=$basic_machine-unknown ;; # Recognize the various machine names and aliases which stand # for a CPU type and a company and sometimes even an OS. 
386bsd) basic_machine=i386-unknown os=-bsd ;; 3b1 | 7300 | 7300-att | att-7300 | pc7300 | safari | unixpc) basic_machine=m68000-att ;; 3b*) basic_machine=we32k-att ;; a29khif) basic_machine=a29k-amd os=-udi ;; abacus) basic_machine=abacus-unknown ;; adobe68k) basic_machine=m68010-adobe os=-scout ;; alliant | fx80) basic_machine=fx80-alliant ;; altos | altos3068) basic_machine=m68k-altos ;; am29k) basic_machine=a29k-none os=-bsd ;; amd64) basic_machine=x86_64-pc ;; amd64-*) basic_machine=x86_64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; amdahl) basic_machine=580-amdahl os=-sysv ;; amiga | amiga-*) basic_machine=m68k-unknown ;; amigaos | amigados) basic_machine=m68k-unknown os=-amigaos ;; amigaunix | amix) basic_machine=m68k-unknown os=-sysv4 ;; apollo68) basic_machine=m68k-apollo os=-sysv ;; apollo68bsd) basic_machine=m68k-apollo os=-bsd ;; aros) basic_machine=i386-pc os=-aros ;; aux) basic_machine=m68k-apple os=-aux ;; balance) basic_machine=ns32k-sequent os=-dynix ;; blackfin) basic_machine=bfin-unknown os=-linux ;; blackfin-*) basic_machine=bfin-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; bluegene*) basic_machine=powerpc-ibm os=-cnk ;; c54x-*) basic_machine=tic54x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c55x-*) basic_machine=tic55x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c6x-*) basic_machine=tic6x-`echo $basic_machine | sed 's/^[^-]*-//'` ;; c90) basic_machine=c90-cray os=-unicos ;; cegcc) basic_machine=arm-unknown os=-cegcc ;; convex-c1) basic_machine=c1-convex os=-bsd ;; convex-c2) basic_machine=c2-convex os=-bsd ;; convex-c32) basic_machine=c32-convex os=-bsd ;; convex-c34) basic_machine=c34-convex os=-bsd ;; convex-c38) basic_machine=c38-convex os=-bsd ;; cray | j90) basic_machine=j90-cray os=-unicos ;; craynv) basic_machine=craynv-cray os=-unicosmp ;; cr16 | cr16-*) basic_machine=cr16-unknown os=-elf ;; crds | unos) basic_machine=m68k-crds ;; crisv32 | crisv32-* | etraxfs*) basic_machine=crisv32-axis ;; cris | cris-* | etrax*) basic_machine=cris-axis ;; crx) basic_machine=crx-unknown os=-elf ;; da30 | da30-*) basic_machine=m68k-da30 ;; decstation | decstation-3100 | pmax | pmax-* | pmin | dec3100 | decstatn) basic_machine=mips-dec ;; decsystem10* | dec10*) basic_machine=pdp10-dec os=-tops10 ;; decsystem20* | dec20*) basic_machine=pdp10-dec os=-tops20 ;; delta | 3300 | motorola-3300 | motorola-delta \ | 3300-motorola | delta-motorola) basic_machine=m68k-motorola ;; delta88) basic_machine=m88k-motorola os=-sysv3 ;; dicos) basic_machine=i686-pc os=-dicos ;; djgpp) basic_machine=i586-pc os=-msdosdjgpp ;; dpx20 | dpx20-*) basic_machine=rs6000-bull os=-bosx ;; dpx2* | dpx2*-bull) basic_machine=m68k-bull os=-sysv3 ;; ebmon29k) basic_machine=a29k-amd os=-ebmon ;; elxsi) basic_machine=elxsi-elxsi os=-bsd ;; encore | umax | mmax) basic_machine=ns32k-encore ;; es1800 | OSE68k | ose68k | ose | OSE) basic_machine=m68k-ericsson os=-ose ;; fx2800) basic_machine=i860-alliant ;; genix) basic_machine=ns32k-ns ;; gmicro) basic_machine=tron-gmicro os=-sysv ;; go32) basic_machine=i386-pc os=-go32 ;; h3050r* | hiux*) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; h8300hms) basic_machine=h8300-hitachi os=-hms ;; h8300xray) basic_machine=h8300-hitachi os=-xray ;; h8500hms) basic_machine=h8500-hitachi os=-hms ;; harris) basic_machine=m88k-harris os=-sysv3 ;; hp300-*) basic_machine=m68k-hp ;; hp300bsd) basic_machine=m68k-hp os=-bsd ;; hp300hpux) basic_machine=m68k-hp os=-hpux ;; hp3k9[0-9][0-9] | hp9[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k2[0-9][0-9] | hp9k31[0-9]) 
basic_machine=m68000-hp ;; hp9k3[2-9][0-9]) basic_machine=m68k-hp ;; hp9k6[0-9][0-9] | hp6[0-9][0-9]) basic_machine=hppa1.0-hp ;; hp9k7[0-79][0-9] | hp7[0-79][0-9]) basic_machine=hppa1.1-hp ;; hp9k78[0-9] | hp78[0-9]) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[67]1 | hp8[67]1 | hp9k80[24] | hp80[24] | hp9k8[78]9 | hp8[78]9 | hp9k893 | hp893) # FIXME: really hppa2.0-hp basic_machine=hppa1.1-hp ;; hp9k8[0-9][13679] | hp8[0-9][13679]) basic_machine=hppa1.1-hp ;; hp9k8[0-9][0-9] | hp8[0-9][0-9]) basic_machine=hppa1.0-hp ;; hppa-next) os=-nextstep3 ;; hppaosf) basic_machine=hppa1.1-hp os=-osf ;; hppro) basic_machine=hppa1.1-hp os=-proelf ;; i370-ibm* | ibm*) basic_machine=i370-ibm ;; i*86v32) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv32 ;; i*86v4*) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv4 ;; i*86v) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-sysv ;; i*86sol2) basic_machine=`echo $1 | sed -e 's/86.*/86-pc/'` os=-solaris2 ;; i386mach) basic_machine=i386-mach os=-mach ;; i386-vsta | vsta) basic_machine=i386-unknown os=-vsta ;; iris | iris4d) basic_machine=mips-sgi case $os in -irix*) ;; *) os=-irix4 ;; esac ;; isi68 | isi) basic_machine=m68k-isi os=-sysv ;; m68knommu) basic_machine=m68k-unknown os=-linux ;; m68knommu-*) basic_machine=m68k-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; m88k-omron*) basic_machine=m88k-omron ;; magnum | m3230) basic_machine=mips-mips os=-sysv ;; merlin) basic_machine=ns32k-utek os=-sysv ;; microblaze) basic_machine=microblaze-xilinx ;; mingw32) basic_machine=i386-pc os=-mingw32 ;; mingw32ce) basic_machine=arm-unknown os=-mingw32ce ;; miniframe) basic_machine=m68000-convergent ;; *mint | -mint[0-9]* | *MiNT | *MiNT[0-9]*) basic_machine=m68k-atari os=-mint ;; mips3*-*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'` ;; mips3*) basic_machine=`echo $basic_machine | sed -e 's/mips3/mips64/'`-unknown ;; monitor) basic_machine=m68k-rom68k os=-coff ;; morphos) basic_machine=powerpc-unknown os=-morphos ;; msdos) basic_machine=i386-pc os=-msdos ;; ms1-*) basic_machine=`echo $basic_machine | sed -e 's/ms1-/mt-/'` ;; msys) basic_machine=i386-pc os=-msys ;; mvs) basic_machine=i370-ibm os=-mvs ;; nacl) basic_machine=le32-unknown os=-nacl ;; ncr3000) basic_machine=i486-ncr os=-sysv4 ;; netbsd386) basic_machine=i386-unknown os=-netbsd ;; netwinder) basic_machine=armv4l-rebel os=-linux ;; news | news700 | news800 | news900) basic_machine=m68k-sony os=-newsos ;; news1000) basic_machine=m68030-sony os=-newsos ;; news-3600 | risc-news) basic_machine=mips-sony os=-newsos ;; necv70) basic_machine=v70-nec os=-sysv ;; next | m*-next ) basic_machine=m68k-next case $os in -nextstep* ) ;; -ns2*) os=-nextstep2 ;; *) os=-nextstep3 ;; esac ;; nh3000) basic_machine=m68k-harris os=-cxux ;; nh[45]000) basic_machine=m88k-harris os=-cxux ;; nindy960) basic_machine=i960-intel os=-nindy ;; mon960) basic_machine=i960-intel os=-mon960 ;; nonstopux) basic_machine=mips-compaq os=-nonstopux ;; np1) basic_machine=np1-gould ;; neo-tandem) basic_machine=neo-tandem ;; nse-tandem) basic_machine=nse-tandem ;; nsr-tandem) basic_machine=nsr-tandem ;; op50n-* | op60c-*) basic_machine=hppa1.1-oki os=-proelf ;; openrisc | openrisc-*) basic_machine=or32-unknown ;; os400) basic_machine=powerpc-ibm os=-os400 ;; OSE68000 | ose68000) basic_machine=m68000-ericsson os=-ose ;; os68k) basic_machine=m68k-none os=-os68k ;; pa-hitachi) basic_machine=hppa1.1-hitachi os=-hiuxwe2 ;; paragon) basic_machine=i860-intel os=-osf ;; parisc) 
basic_machine=hppa-unknown os=-linux ;; parisc-*) basic_machine=hppa-`echo $basic_machine | sed 's/^[^-]*-//'` os=-linux ;; pbd) basic_machine=sparc-tti ;; pbb) basic_machine=m68k-tti ;; pc532 | pc532-*) basic_machine=ns32k-pc532 ;; pc98) basic_machine=i386-pc ;; pc98-*) basic_machine=i386-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium | p5 | k5 | k6 | nexgen | viac3) basic_machine=i586-pc ;; pentiumpro | p6 | 6x86 | athlon | athlon_*) basic_machine=i686-pc ;; pentiumii | pentium2 | pentiumiii | pentium3) basic_machine=i686-pc ;; pentium4) basic_machine=i786-pc ;; pentium-* | p5-* | k5-* | k6-* | nexgen-* | viac3-*) basic_machine=i586-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumpro-* | p6-* | 6x86-* | athlon-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentiumii-* | pentium2-* | pentiumiii-* | pentium3-*) basic_machine=i686-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pentium4-*) basic_machine=i786-`echo $basic_machine | sed 's/^[^-]*-//'` ;; pn) basic_machine=pn-gould ;; power) basic_machine=power-ibm ;; ppc | ppcbe) basic_machine=powerpc-unknown ;; ppc-* | ppcbe-*) basic_machine=powerpc-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppcle | powerpclittle | ppc-le | powerpc-little) basic_machine=powerpcle-unknown ;; ppcle-* | powerpclittle-*) basic_machine=powerpcle-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64) basic_machine=powerpc64-unknown ;; ppc64-*) basic_machine=powerpc64-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ppc64le | powerpc64little | ppc64-le | powerpc64-little) basic_machine=powerpc64le-unknown ;; ppc64le-* | powerpc64little-*) basic_machine=powerpc64le-`echo $basic_machine | sed 's/^[^-]*-//'` ;; ps2) basic_machine=i386-ibm ;; pw32) basic_machine=i586-unknown os=-pw32 ;; rdos) basic_machine=i386-pc os=-rdos ;; rom68k) basic_machine=m68k-rom68k os=-coff ;; rm[46]00) basic_machine=mips-siemens ;; rtpc | rtpc-*) basic_machine=romp-ibm ;; s390 | s390-*) basic_machine=s390-ibm ;; s390x | s390x-*) basic_machine=s390x-ibm ;; sa29200) basic_machine=a29k-amd os=-udi ;; sb1) basic_machine=mipsisa64sb1-unknown ;; sb1el) basic_machine=mipsisa64sb1el-unknown ;; sde) basic_machine=mipsisa32-sde os=-elf ;; sei) basic_machine=mips-sei os=-seiux ;; sequent) basic_machine=i386-sequent ;; sh) basic_machine=sh-hitachi os=-hms ;; sh5el) basic_machine=sh5le-unknown ;; sh64) basic_machine=sh64-unknown ;; sparclite-wrs | simso-wrs) basic_machine=sparclite-wrs os=-vxworks ;; sps7) basic_machine=m68k-bull os=-sysv2 ;; spur) basic_machine=spur-unknown ;; st2000) basic_machine=m68k-tandem ;; stratus) basic_machine=i860-stratus os=-sysv4 ;; strongarm-* | thumb-*) basic_machine=arm-`echo $basic_machine | sed 's/^[^-]*-//'` ;; sun2) basic_machine=m68000-sun ;; sun2os3) basic_machine=m68000-sun os=-sunos3 ;; sun2os4) basic_machine=m68000-sun os=-sunos4 ;; sun3os3) basic_machine=m68k-sun os=-sunos3 ;; sun3os4) basic_machine=m68k-sun os=-sunos4 ;; sun4os3) basic_machine=sparc-sun os=-sunos3 ;; sun4os4) basic_machine=sparc-sun os=-sunos4 ;; sun4sol2) basic_machine=sparc-sun os=-solaris2 ;; sun3 | sun3-*) basic_machine=m68k-sun ;; sun4) basic_machine=sparc-sun ;; sun386 | sun386i | roadrunner) basic_machine=i386-sun ;; sv1) basic_machine=sv1-cray os=-unicos ;; symmetry) basic_machine=i386-sequent os=-dynix ;; t3e) basic_machine=alphaev5-cray os=-unicos ;; t90) basic_machine=t90-cray os=-unicos ;; tile*) basic_machine=$basic_machine-unknown os=-linux-gnu ;; tx39) basic_machine=mipstx39-unknown ;; tx39el) basic_machine=mipstx39el-unknown ;; toad1) 
basic_machine=pdp10-xkl os=-tops20 ;; tower | tower-32) basic_machine=m68k-ncr ;; tpf) basic_machine=s390x-ibm os=-tpf ;; udi29k) basic_machine=a29k-amd os=-udi ;; ultra3) basic_machine=a29k-nyu os=-sym1 ;; v810 | necv810) basic_machine=v810-nec os=-none ;; vaxv) basic_machine=vax-dec os=-sysv ;; vms) basic_machine=vax-dec os=-vms ;; vpp*|vx|vx-*) basic_machine=f301-fujitsu ;; vxworks960) basic_machine=i960-wrs os=-vxworks ;; vxworks68) basic_machine=m68k-wrs os=-vxworks ;; vxworks29k) basic_machine=a29k-wrs os=-vxworks ;; w65*) basic_machine=w65-wdc os=-none ;; w89k-*) basic_machine=hppa1.1-winbond os=-proelf ;; xbox) basic_machine=i686-pc os=-mingw32 ;; xps | xps100) basic_machine=xps100-honeywell ;; xscale-* | xscalee[bl]-*) basic_machine=`echo $basic_machine | sed 's/^xscale/arm/'` ;; ymp) basic_machine=ymp-cray os=-unicos ;; z8k-*-coff) basic_machine=z8k-unknown os=-sim ;; z80-*-coff) basic_machine=z80-unknown os=-sim ;; none) basic_machine=none-none os=-none ;; # Here we handle the default manufacturer of certain CPU types. It is in # some cases the only manufacturer, in others, it is the most popular. w89k) basic_machine=hppa1.1-winbond ;; op50n) basic_machine=hppa1.1-oki ;; op60c) basic_machine=hppa1.1-oki ;; romp) basic_machine=romp-ibm ;; mmix) basic_machine=mmix-knuth ;; rs6000) basic_machine=rs6000-ibm ;; vax) basic_machine=vax-dec ;; pdp10) # there are many clones, so DEC is not a safe bet basic_machine=pdp10-unknown ;; pdp11) basic_machine=pdp11-dec ;; we32k) basic_machine=we32k-att ;; sh[1234] | sh[24]a | sh[24]aeb | sh[34]eb | sh[1234]le | sh[23]ele) basic_machine=sh-unknown ;; sparc | sparcv8 | sparcv9 | sparcv9b | sparcv9v) basic_machine=sparc-sun ;; cydra) basic_machine=cydra-cydrome ;; orion) basic_machine=orion-highlevel ;; orion105) basic_machine=clipper-highlevel ;; mac | mpw | mac-mpw) basic_machine=m68k-apple ;; pmac | pmac-mpw) basic_machine=powerpc-apple ;; *-unknown) # Make sure to match an already-canonicalized machine name. ;; *) echo Invalid configuration \`$1\': machine \`$basic_machine\' not recognized 1>&2 exit 1 ;; esac # Here we canonicalize certain aliases for manufacturers. case $basic_machine in *-digital*) basic_machine=`echo $basic_machine | sed 's/digital.*/dec/'` ;; *-commodore*) basic_machine=`echo $basic_machine | sed 's/commodore.*/cbm/'` ;; *) ;; esac # Decode manufacturer-specific aliases for certain operating systems. if [ x"$os" != x"" ] then case $os in # First match some system type aliases # that might get confused with valid system types. # -solaris* is a basic system type, with this one exception. -auroraux) os=-auroraux ;; -solaris1 | -solaris1.*) os=`echo $os | sed -e 's|solaris1|sunos4|'` ;; -solaris) os=-solaris2 ;; -svr4*) os=-sysv4 ;; -unixware*) os=-sysv4.2uw ;; -gnu/linux*) os=`echo $os | sed -e 's|gnu/linux|linux-gnu|'` ;; # First accept the basic system types. # The portable systems comes first. # Each alternative MUST END IN A *, to match a version number. # -sysv* is not here because it comes later, after sysvr4. 
-gnu* | -bsd* | -mach* | -minix* | -genix* | -ultrix* | -irix* \ | -*vms* | -sco* | -esix* | -isc* | -aix* | -cnk* | -sunos | -sunos[34]*\ | -hpux* | -unos* | -osf* | -luna* | -dgux* | -auroraux* | -solaris* \ | -sym* | -kopensolaris* \ | -amigaos* | -amigados* | -msdos* | -newsos* | -unicos* | -aof* \ | -aos* | -aros* \ | -nindy* | -vxsim* | -vxworks* | -ebmon* | -hms* | -mvs* \ | -clix* | -riscos* | -uniplus* | -iris* | -rtu* | -xenix* \ | -hiux* | -386bsd* | -knetbsd* | -mirbsd* | -netbsd* \ | -openbsd* | -solidbsd* \ | -ekkobsd* | -kfreebsd* | -freebsd* | -riscix* | -lynxos* \ | -bosx* | -nextstep* | -cxux* | -aout* | -elf* | -oabi* \ | -ptx* | -coff* | -ecoff* | -winnt* | -domain* | -vsta* \ | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \ | -chorusos* | -chorusrdb* | -cegcc* \ | -cygwin* | -msys* | -pe* | -psos* | -moss* | -proelf* | -rtems* \ | -mingw32* | -linux-gnu* | -linux-android* \ | -linux-newlib* | -linux-uclibc* \ | -uxpv* | -beos* | -mpeix* | -udk* \ | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \ | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \ | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \ | -os2* | -vos* | -palmos* | -uclinux* | -nucleus* \ | -morphos* | -superux* | -rtmk* | -rtmk-nova* | -windiss* \ | -powermax* | -dnix* | -nx6 | -nx7 | -sei* | -dragonfly* \ | -skyos* | -haiku* | -rdos* | -toppers* | -drops* | -es*) # Remember, each alternative MUST END IN *, to match a version number. ;; -qnx*) case $basic_machine in x86-* | i*86-*) ;; *) os=-nto$os ;; esac ;; -nto-qnx*) ;; -nto*) os=`echo $os | sed -e 's|nto|nto-qnx|'` ;; -sim | -es1800* | -hms* | -xray | -os68k* | -none* | -v88r* \ | -windows* | -osx | -abug | -netware* | -os9* | -beos* | -haiku* \ | -macos* | -mpw* | -magic* | -mmixware* | -mon960* | -lnews*) ;; -mac*) os=`echo $os | sed -e 's|mac|macos|'` ;; -linux-dietlibc) os=-linux-dietlibc ;; -linux*) os=`echo $os | sed -e 's|linux|linux-gnu|'` ;; -sunos5*) os=`echo $os | sed -e 's|sunos5|solaris2|'` ;; -sunos6*) os=`echo $os | sed -e 's|sunos6|solaris3|'` ;; -opened*) os=-openedition ;; -os400*) os=-os400 ;; -wince*) os=-wince ;; -osfrose*) os=-osfrose ;; -osf*) os=-osf ;; -utek*) os=-bsd ;; -dynix*) os=-bsd ;; -acis*) os=-aos ;; -atheos*) os=-atheos ;; -syllable*) os=-syllable ;; -386bsd) os=-bsd ;; -ctix* | -uts*) os=-sysv ;; -nova*) os=-rtmk-nova ;; -ns2 ) os=-nextstep2 ;; -nsk*) os=-nsk ;; # Preserve the version number of sinix5. -sinix5.*) os=`echo $os | sed -e 's|sinix|sysv|'` ;; -sinix*) os=-sysv4 ;; -tpf*) os=-tpf ;; -triton*) os=-sysv3 ;; -oss*) os=-sysv3 ;; -svr4) os=-sysv4 ;; -svr3) os=-sysv3 ;; -sysvr4) os=-sysv4 ;; # This must come after -sysvr4. -sysv*) ;; -ose*) os=-ose ;; -es1800*) os=-ose ;; -xenix) os=-xenix ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) os=-mint ;; -aros*) os=-aros ;; -kaos*) os=-kaos ;; -zvmoe) os=-zvmoe ;; -dicos*) os=-dicos ;; -nacl*) ;; -none) ;; *) # Get rid of the `-' at the beginning of $os. os=`echo $os | sed 's/[^-]*-//'` echo Invalid configuration \`$1\': system \`$os\' not recognized 1>&2 exit 1 ;; esac else # Here we handle the default operating systems that come with various machines. # The value should be what the vendor currently ships out the door with their # machine or put another way, the most popular os provided with the machine. # Note that if you're going to try to match "-MANUFACTURER" here (say, # "-sun"), then you have to tell the case statement up towards the top # that MANUFACTURER isn't an operating system. 
Otherwise, code above # will signal an error saying that MANUFACTURER isn't an operating # system, and we'll never get to this point. case $basic_machine in score-*) os=-elf ;; spu-*) os=-elf ;; *-acorn) os=-riscix1.2 ;; arm*-rebel) os=-linux ;; arm*-semi) os=-aout ;; c4x-* | tic4x-*) os=-coff ;; tic54x-*) os=-coff ;; tic55x-*) os=-coff ;; tic6x-*) os=-coff ;; # This must come before the *-dec entry. pdp10-*) os=-tops20 ;; pdp11-*) os=-none ;; *-dec | vax-*) os=-ultrix4.2 ;; m68*-apollo) os=-domain ;; i386-sun) os=-sunos4.0.2 ;; m68000-sun) os=-sunos3 ;; m68*-cisco) os=-aout ;; mep-*) os=-elf ;; mips*-cisco) os=-elf ;; mips*-*) os=-elf ;; or32-*) os=-coff ;; *-tti) # must be before sparc entry or we get the wrong os. os=-sysv3 ;; sparc-* | *-sun) os=-sunos4.1.1 ;; *-be) os=-beos ;; *-haiku) os=-haiku ;; *-ibm) os=-aix ;; *-knuth) os=-mmixware ;; *-wec) os=-proelf ;; *-winbond) os=-proelf ;; *-oki) os=-proelf ;; *-hp) os=-hpux ;; *-hitachi) os=-hiux ;; i860-* | *-att | *-ncr | *-altos | *-motorola | *-convergent) os=-sysv ;; *-cbm) os=-amigaos ;; *-dg) os=-dgux ;; *-dolphin) os=-sysv3 ;; m68k-ccur) os=-rtu ;; m88k-omron*) os=-luna ;; *-next ) os=-nextstep ;; *-sequent) os=-ptx ;; *-crds) os=-unos ;; *-ns) os=-genix ;; i370-*) os=-mvs ;; *-next) os=-nextstep3 ;; *-gould) os=-sysv ;; *-highlevel) os=-bsd ;; *-encore) os=-bsd ;; *-sgi) os=-irix ;; *-siemens) os=-sysv4 ;; *-masscomp) os=-rtu ;; f30[01]-fujitsu | f700-fujitsu) os=-uxpv ;; *-rom68k) os=-coff ;; *-*bug) os=-coff ;; *-apple) os=-macos ;; *-atari*) os=-mint ;; *) os=-none ;; esac fi # Here we handle the case where we know the os, and the CPU type, but not the # manufacturer. We pick the logical manufacturer. vendor=unknown case $basic_machine in *-unknown) case $os in -riscix*) vendor=acorn ;; -sunos*) vendor=sun ;; -cnk*|-aix*) vendor=ibm ;; -beos*) vendor=be ;; -hpux*) vendor=hp ;; -mpeix*) vendor=hp ;; -hiux*) vendor=hitachi ;; -unos*) vendor=crds ;; -dgux*) vendor=dg ;; -luna*) vendor=omron ;; -genix*) vendor=ns ;; -mvs* | -opened*) vendor=ibm ;; -os400*) vendor=ibm ;; -ptx*) vendor=sequent ;; -tpf*) vendor=ibm ;; -vxsim* | -vxworks* | -windiss*) vendor=wrs ;; -aux*) vendor=apple ;; -hms*) vendor=hitachi ;; -mpw* | -macos*) vendor=apple ;; -*mint | -mint[0-9]* | -*MiNT | -MiNT[0-9]*) vendor=atari ;; -vos*) vendor=stratus ;; esac basic_machine=`echo $basic_machine | sed "s/unknown/$vendor/"` ;; esac echo $basic_machine$os exit # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "timestamp='" # time-stamp-format: "%:y-%02m-%02d" # time-stamp-end: "'" # End: ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/depcomp000077500000000000000000000475561346026536000215630ustar00rootroot00000000000000#! /bin/sh # depcomp - compile a program generating dependencies as side-effects scriptversion=2011-12-04.11; # UTC # Copyright (C) 1999, 2000, 2003, 2004, 2005, 2006, 2007, 2009, 2010, # 2011 Free Software Foundation, Inc. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
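#
# Illustrative invocation (the variable names are the ones documented in
# the usage text below; the file names are examples only):
#
#   source=foo.c object=foo.o DEPDIR=.deps depfile=.deps/foo.Po \
#   depmode=gcc3 libtool=no ./depcomp gcc -I. -c -o foo.o foo.c
#
# This runs the compiler as given and, as a side effect, writes a makefile
# fragment (.deps/foo.Po) listing the headers that foo.o depends on.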
# As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. # Originally written by Alexandre Oliva . case $1 in '') echo "$0: No command. Try \`$0 --help' for more information." 1>&2 exit 1; ;; -h | --h*) cat <<\EOF Usage: depcomp [--help] [--version] PROGRAM [ARGS] Run PROGRAMS ARGS to compile a file, generating dependencies as side-effects. Environment variables: depmode Dependency tracking mode. source Source file read by `PROGRAMS ARGS'. object Object file output by `PROGRAMS ARGS'. DEPDIR directory where to store dependencies. depfile Dependency file to output. tmpdepfile Temporary file to use when outputting dependencies. libtool Whether libtool is used (yes/no). Report bugs to . EOF exit $? ;; -v | --v*) echo "depcomp $scriptversion" exit $? ;; esac if test -z "$depmode" || test -z "$source" || test -z "$object"; then echo "depcomp: Variables source, object and depmode must be set" 1>&2 exit 1 fi # Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po. depfile=${depfile-`echo "$object" | sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`} tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`} rm -f "$tmpdepfile" # Some modes work just like other modes, but use different flags. We # parameterize here, but still list the modes in the big case below, # to make depend.m4 easier to write. Note that we *cannot* use a case # here, because this file can only contain one case statement. if test "$depmode" = hp; then # HP compiler uses -M and no extra arg. gccflag=-M depmode=gcc fi if test "$depmode" = dashXmstdout; then # This is just like dashmstdout with a different argument. dashmflag=-xM depmode=dashmstdout fi cygpath_u="cygpath -u -f -" if test "$depmode" = msvcmsys; then # This is just like msvisualcpp but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvisualcpp fi if test "$depmode" = msvc7msys; then # This is just like msvc7 but w/o cygpath translation. # Just convert the backslash-escaped backslashes to single forward # slashes to satisfy depend.m4 cygpath_u='sed s,\\\\,/,g' depmode=msvc7 fi case "$depmode" in gcc3) ## gcc 3 implements dependency tracking that does exactly what ## we want. Yay! Note: for some reason libtool 1.4 doesn't like ## it if -MD -MP comes after the -MF stuff. Hmm. ## Unfortunately, FreeBSD c89 acceptance of flags depends upon ## the command line argument order; so add the flags where they ## appear in depend2.am. Note that the slowdown incurred here ## affects only configure: in makefiles, %FASTDEP% shortcuts this. for arg do case $arg in -c) set fnord "$@" -MT "$object" -MD -MP -MF "$tmpdepfile" "$arg" ;; *) set fnord "$@" "$arg" ;; esac shift # fnord shift # $arg done "$@" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi mv "$tmpdepfile" "$depfile" ;; gcc) ## There are various ways to get dependency output from gcc. Here's ## why we pick this rather obscure method: ## - Don't want to use -MD because we'd like the dependencies to end ## up in a subdir. Having to rename by hand is ugly. ## (We might end up doing this anyway to support other compilers.) ## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like ## -MM, not -M (despite what the docs say). 
## - Using -M directly means running the compiler twice (even worse ## than renaming). if test -z "$gccflag"; then gccflag=-MD, fi "$@" -Wp,"$gccflag$tmpdepfile" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" alpha=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz ## The second -e expression handles DOS-style file names with drive letters. sed -e 's/^[^:]*: / /' \ -e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile" ## This next piece of magic avoids the `deleted header file' problem. ## The problem is that when a header file which appears in a .P file ## is deleted, the dependency causes make to die (because there is ## typically no way to rebuild the header). We avoid this by adding ## dummy dependencies for each header file. Too bad gcc doesn't do ## this for us directly. tr ' ' ' ' < "$tmpdepfile" | ## Some versions of gcc put a space before the `:'. On the theory ## that the space means something, we add a space to the output as ## well. hp depmode also adds that space, but also prefixes the VPATH ## to the object. Take care to not repeat it in the output. ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. sed -e 's/^\\$//' -e '/^$/d' -e "s|.*$object$||" -e '/:$/d' \ | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; sgi) if test "$libtool" = yes; then "$@" "-Wp,-MDupdate,$tmpdepfile" else "$@" -MDupdate "$tmpdepfile" fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files echo "$object : \\" > "$depfile" # Clip off the initial element (the dependent). Don't try to be # clever and replace this with sed code, as IRIX sed won't handle # lines with more than a fixed number of characters (4096 in # IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines; # the IRIX cc adds comments like `#:fec' to the end of the # dependency line. tr ' ' ' ' < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' | \ tr ' ' ' ' >> "$depfile" echo >> "$depfile" # The second pass generates a dummy entry for each header file. tr ' ' ' ' < "$tmpdepfile" \ | sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \ >> "$depfile" else # The sourcefile does not contain any dependencies, so just # store a dummy comment line, to avoid errors with the Makefile # "include basename.Plo" scheme. echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" ;; aix) # The C for AIX Compiler uses -M and outputs the dependencies # in a .u file. In older versions, this file always lives in the # current directory. Also, the AIX compiler puts `$object:' at the # start of each line; $object doesn't have directory information. # Version 6 uses the directory in both cases. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then tmpdepfile1=$dir$base.u tmpdepfile2=$base.u tmpdepfile3=$dir.libs/$base.u "$@" -Wc,-M else tmpdepfile1=$dir$base.u tmpdepfile2=$dir$base.u tmpdepfile3=$dir$base.u "$@" -M fi stat=$? 
if test $stat -eq 0; then : else rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then # Each line is of the form `foo.o: dependent.h'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile" # That's a tab and a space in the []. sed -e 's,^.*\.[a-z]*:[ ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile" else # The sourcefile does not contain any dependencies, so just # store a dummy comment line, to avoid errors with the Makefile # "include basename.Plo" scheme. echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" ;; icc) # Intel's C compiler understands `-MD -MF file'. However on # icc -MD -MF foo.d -c -o sub/foo.o sub/foo.c # ICC 7.0 will fill foo.d with something like # foo.o: sub/foo.c # foo.o: sub/foo.h # which is wrong. We want: # sub/foo.o: sub/foo.c # sub/foo.o: sub/foo.h # sub/foo.c: # sub/foo.h: # ICC 7.1 will output # foo.o: sub/foo.c sub/foo.h # and will wrap long lines using \ : # foo.o: sub/foo.c ... \ # sub/foo.h ... \ # ... "$@" -MD -MF "$tmpdepfile" stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" # Each line is of the form `foo.o: dependent.h', # or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'. # Do two passes, one to just change these to # `$object: dependent.h' and one to simply `dependent.h:'. sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile" # Some versions of the HPUX 10.20 sed can't process this invocation # correctly. Breaking it into two sed invocations is a workaround. sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; hp2) # The "hp" stanza above does not work with aCC (C++) and HP's ia64 # compilers, which have integrated preprocessors. The correct option # to use with these is +Maked; it writes dependencies to a file named # 'foo.d', which lands next to the object file, wherever that # happens to be. # Much of this is similar to the tru64 case; see comments there. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then tmpdepfile1=$dir$base.d tmpdepfile2=$dir.libs/$base.d "$@" -Wc,+Maked else tmpdepfile1=$dir$base.d tmpdepfile2=$dir$base.d "$@" +Maked fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile1" "$tmpdepfile2" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[a-z]*:,$object:," "$tmpdepfile" > "$depfile" # Add `dependent.h:' lines. sed -ne '2,${ s/^ *// s/ \\*$// s/$/:/ p }' "$tmpdepfile" >> "$depfile" else echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" "$tmpdepfile2" ;; tru64) # The Tru64 compiler uses -MD to generate dependencies as a side # effect. `cc -MD -o foo.o ...' puts the dependencies into `foo.o.d'. # At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put # dependencies in `foo.d' instead, so we check for that too. # Subdirectories are respected. dir=`echo "$object" | sed -e 's|/[^/]*$|/|'` test "x$dir" = "x$object" && dir= base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'` if test "$libtool" = yes; then # With Tru64 cc, shared objects can also be used to make a # static library. 
This mechanism is used in libtool 1.4 series to # handle both shared and static libraries in a single compilation. # With libtool 1.4, dependencies were output in $dir.libs/$base.lo.d. # # With libtool 1.5 this exception was removed, and libtool now # generates 2 separate objects for the 2 libraries. These two # compilations output dependencies in $dir.libs/$base.o.d and # in $dir$base.o.d. We have to check for both files, because # one of the two compilations can be disabled. We should prefer # $dir$base.o.d over $dir.libs/$base.o.d because the latter is # automatically cleaned when .libs/ is deleted, while ignoring # the former would cause a distcleancheck panic. tmpdepfile1=$dir.libs/$base.lo.d # libtool 1.4 tmpdepfile2=$dir$base.o.d # libtool 1.5 tmpdepfile3=$dir.libs/$base.o.d # libtool 1.5 tmpdepfile4=$dir.libs/$base.d # Compaq CCC V6.2-504 "$@" -Wc,-MD else tmpdepfile1=$dir$base.o.d tmpdepfile2=$dir$base.d tmpdepfile3=$dir$base.d tmpdepfile4=$dir$base.d "$@" -MD fi stat=$? if test $stat -eq 0; then : else rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4" exit $stat fi for tmpdepfile in "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3" "$tmpdepfile4" do test -f "$tmpdepfile" && break done if test -f "$tmpdepfile"; then sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile" # That's a tab and a space in the []. sed -e 's,^.*\.[a-z]*:[ ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile" else echo "#dummy" > "$depfile" fi rm -f "$tmpdepfile" ;; msvc7) if test "$libtool" = yes; then showIncludes=-Wc,-showIncludes else showIncludes=-showIncludes fi "$@" $showIncludes > "$tmpdepfile" stat=$? grep -v '^Note: including file: ' "$tmpdepfile" if test "$stat" = 0; then : else rm -f "$tmpdepfile" exit $stat fi rm -f "$depfile" echo "$object : \\" > "$depfile" # The first sed program below extracts the file names and escapes # backslashes for cygpath. The second sed program outputs the file # name when reading, but also accumulates all include files in the # hold buffer in order to output them again at the end. This only # works with sed implementations that can handle large buffers. sed < "$tmpdepfile" -n ' /^Note: including file: *\(.*\)/ { s//\1/ s/\\/\\\\/g p }' | $cygpath_u | sort -u | sed -n ' s/ /\\ /g s/\(.*\)/ \1 \\/p s/.\(.*\) \\/\1:/ H $ { s/.*/ / G p }' >> "$depfile" rm -f "$tmpdepfile" ;; msvc7msys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. exit 1 ;; #nosideeffect) # This comment above is used by automake to tell side-effect # dependency tracking mechanisms from slower ones. dashmstdout) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout, regardless of -o. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove `-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done test -z "$dashmflag" && dashmflag=-M # Require at least two characters before searching for `:' # in the target name. This is to cope with DOS-style filenames: # a dependency such as `c:/foo/bar' could be seen as target `c' otherwise. 
"$@" $dashmflag | sed 's:^[ ]*[^: ][^:][^:]*\:[ ]*:'"$object"'\: :' > "$tmpdepfile" rm -f "$depfile" cat < "$tmpdepfile" > "$depfile" tr ' ' ' ' < "$tmpdepfile" | \ ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; dashXmstdout) # This case only exists to satisfy depend.m4. It is never actually # run, as this mode is specially recognized in the preamble. exit 1 ;; makedepend) "$@" || exit $? # Remove any Libtool call if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # X makedepend shift cleared=no eat=no for arg do case $cleared in no) set ""; shift cleared=yes ;; esac if test $eat = yes; then eat=no continue fi case "$arg" in -D*|-I*) set fnord "$@" "$arg"; shift ;; # Strip any option that makedepend may not understand. Remove # the object too, otherwise makedepend will parse it as a source file. -arch) eat=yes ;; -*|$object) ;; *) set fnord "$@" "$arg"; shift ;; esac done obj_suffix=`echo "$object" | sed 's/^.*\././'` touch "$tmpdepfile" ${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@" rm -f "$depfile" # makedepend may prepend the VPATH from the source file name to the object. # No need to regex-escape $object, excess matching of '.' is harmless. sed "s|^.*\($object *:\)|\1|" "$tmpdepfile" > "$depfile" sed '1,2d' "$tmpdepfile" | tr ' ' ' ' | \ ## Some versions of the HPUX 10.20 sed can't process this invocation ## correctly. Breaking it into two sed invocations is a workaround. sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile" rm -f "$tmpdepfile" "$tmpdepfile".bak ;; cpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi # Remove `-o $object'. IFS=" " for arg do case $arg in -o) shift ;; $object) shift ;; *) set fnord "$@" "$arg" shift # fnord shift # $arg ;; esac done "$@" -E | sed -n -e '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' \ -e '/^#line [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' | sed '$ s: \\$::' > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" cat < "$tmpdepfile" >> "$depfile" sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile" rm -f "$tmpdepfile" ;; msvisualcpp) # Important note: in order to support this mode, a compiler *must* # always write the preprocessed file to stdout. "$@" || exit $? # Remove the call to Libtool. if test "$libtool" = yes; then while test "X$1" != 'X--mode=compile'; do shift done shift fi IFS=" " for arg do case "$arg" in -o) shift ;; $object) shift ;; "-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI") set fnord "$@" shift shift ;; *) set fnord "$@" "$arg" shift shift ;; esac done "$@" -E 2>/dev/null | sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::\1:p' | $cygpath_u | sort -u > "$tmpdepfile" rm -f "$depfile" echo "$object : \\" > "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s:: \1 \\:p' >> "$depfile" echo " " >> "$depfile" sed < "$tmpdepfile" -n -e 's% %\\ %g' -e '/^\(.*\)$/ s::\1\::p' >> "$depfile" rm -f "$tmpdepfile" ;; msvcmsys) # This case exists only to let depend.m4 do its work. It works by # looking at the text of this script. This case will never be run, # since it is checked for above. 
exit 1 ;; none) exec "$@" ;; *) echo "Unknown depmode $depmode" 1>&2 exit 1 ;; esac exit 0 # Local Variables: # mode: shell-script # sh-indentation: 2 # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/install-sh000077500000000000000000000332561346026536000222020ustar00rootroot00000000000000#!/bin/sh # install - install a program, script, or datafile scriptversion=2011-01-19.21; # UTC # This originates from X11R5 (mit/util/scripts/install.sh), which was # later released in X11R6 (xc/config/util/install.sh) with the # following copyright and license. # # Copyright (C) 1994 X Consortium # # Permission is hereby granted, free of charge, to any person obtaining a copy # of this software and associated documentation files (the "Software"), to # deal in the Software without restriction, including without limitation the # rights to use, copy, modify, merge, publish, distribute, sublicense, and/or # sell copies of the Software, and to permit persons to whom the Software is # furnished to do so, subject to the following conditions: # # The above copyright notice and this permission notice shall be included in # all copies or substantial portions of the Software. # # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE # X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN # AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC- # TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. # # Except as contained in this notice, the name of the X Consortium shall not # be used in advertising or otherwise to promote the sale, use or other deal- # ings in this Software without prior written authorization from the X Consor- # tium. # # # FSF changes to this file are in the public domain. # # Calling this script install-sh is preferred over install.sh, to prevent # `make' implicit rules from creating a file called install from it # when there is no Makefile. # # This script is compatible with the BSD install script, but was written # from scratch. nl=' ' IFS=" "" $nl" # set DOITPROG to echo to test this script # Don't use :- since 4.3BSD and earlier shells don't like it. doit=${DOITPROG-} if test -z "$doit"; then doit_exec=exec else doit_exec=$doit fi # Put in absolute file names if you don't have them in your path; # or use environment vars. chgrpprog=${CHGRPPROG-chgrp} chmodprog=${CHMODPROG-chmod} chownprog=${CHOWNPROG-chown} cmpprog=${CMPPROG-cmp} cpprog=${CPPROG-cp} mkdirprog=${MKDIRPROG-mkdir} mvprog=${MVPROG-mv} rmprog=${RMPROG-rm} stripprog=${STRIPPROG-strip} posix_glob='?' initialize_posix_glob=' test "$posix_glob" != "?" || { if (set -f) 2>/dev/null; then posix_glob= else posix_glob=: fi } ' posix_mkdir= # Desired mode of installed file. mode=0755 chgrpcmd= chmodcmd=$chmodprog chowncmd= mvcmd=$mvprog rmcmd="$rmprog -f" stripcmd= src= dst= dir_arg= dst_arg= copy_on_change=false no_target_directory= usage="\ Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE or: $0 [OPTION]... SRCFILES... DIRECTORY or: $0 [OPTION]... -t DIRECTORY SRCFILES... or: $0 [OPTION]... -d DIRECTORIES... In the 1st form, copy SRCFILE to DSTFILE. In the 2nd and 3rd, copy all SRCFILES to DIRECTORY. 
In the 4th, create DIRECTORIES. Options: --help display this help and exit. --version display version info and exit. -c (ignored) -C install only if different (preserve the last data modification time) -d create directories instead of installing files. -g GROUP $chgrpprog installed files to GROUP. -m MODE $chmodprog installed files to MODE. -o USER $chownprog installed files to USER. -s $stripprog installed files. -t DIRECTORY install into DIRECTORY. -T report an error if DSTFILE is a directory. Environment variables override the default commands: CHGRPPROG CHMODPROG CHOWNPROG CMPPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG " while test $# -ne 0; do case $1 in -c) ;; -C) copy_on_change=true;; -d) dir_arg=true;; -g) chgrpcmd="$chgrpprog $2" shift;; --help) echo "$usage"; exit $?;; -m) mode=$2 case $mode in *' '* | *' '* | *' '* | *'*'* | *'?'* | *'['*) echo "$0: invalid mode: $mode" >&2 exit 1;; esac shift;; -o) chowncmd="$chownprog $2" shift;; -s) stripcmd=$stripprog;; -t) dst_arg=$2 # Protect names problematic for `test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac shift;; -T) no_target_directory=true;; --version) echo "$0 $scriptversion"; exit $?;; --) shift break;; -*) echo "$0: invalid option: $1" >&2 exit 1;; *) break;; esac shift done if test $# -ne 0 && test -z "$dir_arg$dst_arg"; then # When -d is used, all remaining arguments are directories to create. # When -t is used, the destination is already specified. # Otherwise, the last argument is the destination. Remove it from $@. for arg do if test -n "$dst_arg"; then # $@ is not empty: it contains at least $arg. set fnord "$@" "$dst_arg" shift # fnord fi shift # arg dst_arg=$arg # Protect names problematic for `test' and other utilities. case $dst_arg in -* | [=\(\)!]) dst_arg=./$dst_arg;; esac done fi if test $# -eq 0; then if test -z "$dir_arg"; then echo "$0: no input file specified." >&2 exit 1 fi # It's OK to call `install-sh -d' without argument. # This can happen when creating conditional directories. exit 0 fi if test -z "$dir_arg"; then do_exit='(exit $ret); exit $ret' trap "ret=129; $do_exit" 1 trap "ret=130; $do_exit" 2 trap "ret=141; $do_exit" 13 trap "ret=143; $do_exit" 15 # Set umask so as not to create temps with too-generous modes. # However, 'strip' requires both read and write access to temps. case $mode in # Optimize common cases. *644) cp_umask=133;; *755) cp_umask=22;; *[0-7]) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw='% 200' fi cp_umask=`expr '(' 777 - $mode % 1000 ')' $u_plus_rw`;; *) if test -z "$stripcmd"; then u_plus_rw= else u_plus_rw=,u+rw fi cp_umask=$mode$u_plus_rw;; esac fi for src do # Protect names problematic for `test' and other utilities. case $src in -* | [=\(\)!]) src=./$src;; esac if test -n "$dir_arg"; then dst=$src dstdir=$dst test -d "$dstdir" dstdir_status=$? else # Waiting for this to be detected by the "$cpprog $src $dsttmp" command # might cause directories to be created, which would be especially bad # if $src (and thus $dsttmp) contains '*'. if test ! -f "$src" && test ! -d "$src"; then echo "$0: $src does not exist." >&2 exit 1 fi if test -z "$dst_arg"; then echo "$0: no destination specified." >&2 exit 1 fi dst=$dst_arg # If destination is a directory, append the input filename; won't work # if double slashes aren't ignored. 
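# Typical invocations of this script, for reference (paths and modes are
# examples only):
#
#   ./install-sh -d /usr/local/lib                       # 4th form: create directories
#   ./install-sh -c -m 644 foo.h /usr/local/include      # copy a file with mode 0644
#   ./install-sh -c -s -m 755 prog /usr/local/bin/prog   # install and strip a program
#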
if test -d "$dst"; then if test -n "$no_target_directory"; then echo "$0: $dst_arg: Is a directory" >&2 exit 1 fi dstdir=$dst dst=$dstdir/`basename "$src"` dstdir_status=0 else # Prefer dirname, but fall back on a substitute if dirname fails. dstdir=` (dirname "$dst") 2>/dev/null || expr X"$dst" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$dst" : 'X\(//\)[^/]' \| \ X"$dst" : 'X\(//\)$' \| \ X"$dst" : 'X\(/\)' \| . 2>/dev/null || echo X"$dst" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q' ` test -d "$dstdir" dstdir_status=$? fi fi obsolete_mkdir_used=false if test $dstdir_status != 0; then case $posix_mkdir in '') # Create intermediate dirs using mode 755 as modified by the umask. # This is like FreeBSD 'install' as of 1997-10-28. umask=`umask` case $stripcmd.$umask in # Optimize common cases. *[2367][2367]) mkdir_umask=$umask;; .*0[02][02] | .[02][02] | .[02]) mkdir_umask=22;; *[0-7]) mkdir_umask=`expr $umask + 22 \ - $umask % 100 % 40 + $umask % 20 \ - $umask % 10 % 4 + $umask % 2 `;; *) mkdir_umask=$umask,go-w;; esac # With -d, create the new directory with the user-specified mode. # Otherwise, rely on $mkdir_umask. if test -n "$dir_arg"; then mkdir_mode=-m$mode else mkdir_mode= fi posix_mkdir=false case $umask in *[123567][0-7][0-7]) # POSIX mkdir -p sets u+wx bits regardless of umask, which # is incompatible with FreeBSD 'install' when (umask & 300) != 0. ;; *) tmpdir=${TMPDIR-/tmp}/ins$RANDOM-$$ trap 'ret=$?; rmdir "$tmpdir/d" "$tmpdir" 2>/dev/null; exit $ret' 0 if (umask $mkdir_umask && exec $mkdirprog $mkdir_mode -p -- "$tmpdir/d") >/dev/null 2>&1 then if test -z "$dir_arg" || { # Check for POSIX incompatibilities with -m. # HP-UX 11.23 and IRIX 6.5 mkdir -m -p sets group- or # other-writeable bit of parent directory when it shouldn't. # FreeBSD 6.1 mkdir -m -p sets mode of existing directory. ls_ld_tmpdir=`ls -ld "$tmpdir"` case $ls_ld_tmpdir in d????-?r-*) different_mode=700;; d????-?--*) different_mode=755;; *) false;; esac && $mkdirprog -m$different_mode -p -- "$tmpdir" && { ls_ld_tmpdir_1=`ls -ld "$tmpdir"` test "$ls_ld_tmpdir" = "$ls_ld_tmpdir_1" } } then posix_mkdir=: fi rmdir "$tmpdir/d" "$tmpdir" else # Remove any dirs left behind by ancient mkdir implementations. rmdir ./$mkdir_mode ./-p ./-- 2>/dev/null fi trap '' 0;; esac;; esac if $posix_mkdir && ( umask $mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir" ) then : else # The umask is ridiculous, or mkdir does not conform to POSIX, # or it failed possibly due to a race condition. Create the # directory the slow way, step by step, checking for races as we go. case $dstdir in /*) prefix='/';; [-=\(\)!]*) prefix='./';; *) prefix='';; esac eval "$initialize_posix_glob" oIFS=$IFS IFS=/ $posix_glob set -f set fnord $dstdir shift $posix_glob set +f IFS=$oIFS prefixes= for d do test X"$d" = X && continue prefix=$prefix$d if test -d "$prefix"; then prefixes= else if $posix_mkdir; then (umask=$mkdir_umask && $doit_exec $mkdirprog $mkdir_mode -p -- "$dstdir") && break # Don't fail if two instances are running concurrently. test -d "$prefix" || exit 1 else case $prefix in *\'*) qprefix=`echo "$prefix" | sed "s/'/'\\\\\\\\''/g"`;; *) qprefix=$prefix;; esac prefixes="$prefixes '$qprefix'" fi fi prefix=$prefix/ done if test -n "$prefixes"; then # Don't fail if two instances are running concurrently. 
(umask $mkdir_umask && eval "\$doit_exec \$mkdirprog $prefixes") || test -d "$dstdir" || exit 1 obsolete_mkdir_used=true fi fi fi if test -n "$dir_arg"; then { test -z "$chowncmd" || $doit $chowncmd "$dst"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } && { test "$obsolete_mkdir_used$chowncmd$chgrpcmd" = false || test -z "$chmodcmd" || $doit $chmodcmd $mode "$dst"; } || exit 1 else # Make a couple of temp file names in the proper directory. dsttmp=$dstdir/_inst.$$_ rmtmp=$dstdir/_rm.$$_ # Trap to clean up those temp files at exit. trap 'ret=$?; rm -f "$dsttmp" "$rmtmp" && exit $ret' 0 # Copy the file name to the temp name. (umask $cp_umask && $doit_exec $cpprog "$src" "$dsttmp") && # and set any options; do chmod last to preserve setuid bits. # # If any of these fail, we abort the whole thing. If we want to # ignore errors from any of these, just make sure not to ignore # errors from the above "$doit $cpprog $src $dsttmp" command. # { test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } && { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } && { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } && { test -z "$chmodcmd" || $doit $chmodcmd $mode "$dsttmp"; } && # If -C, don't bother to copy if it wouldn't change the file. if $copy_on_change && old=`LC_ALL=C ls -dlL "$dst" 2>/dev/null` && new=`LC_ALL=C ls -dlL "$dsttmp" 2>/dev/null` && eval "$initialize_posix_glob" && $posix_glob set -f && set X $old && old=:$2:$4:$5:$6 && set X $new && new=:$2:$4:$5:$6 && $posix_glob set +f && test "$old" = "$new" && $cmpprog "$dst" "$dsttmp" >/dev/null 2>&1 then rm -f "$dsttmp" else # Rename the file to the real destination. $doit $mvcmd -f "$dsttmp" "$dst" 2>/dev/null || # The rename failed, perhaps because mv can't rename something else # to itself, or perhaps because mv is so ancient that it does not # support -f. { # Now remove or move aside any old file at destination location. # We try this two ways since rm can't unlink itself on some # systems and the destination file might be busy for other # reasons. In this case, the final cleanup might fail but the new # file should still install successfully. { test ! -f "$dst" || $doit $rmcmd -f "$dst" 2>/dev/null || { $doit $mvcmd -f "$dst" "$rmtmp" 2>/dev/null && { $doit $rmcmd -f "$rmtmp" 2>/dev/null; :; } } || { echo "$0: cannot unlink or rename $dst" >&2 (exit 1); exit 1 } } && # Now rename the file to the real destination. $doit $mvcmd "$dsttmp" "$dst" } fi || exit 1 trap '' 0 fi done # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/ltmain.sh000066400000000000000000010520401346026536000220070ustar00rootroot00000000000000 # libtool (GNU libtool) 2.4.2 # Written by Gordon Matzigkeit , 1996 # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006, # 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. # This is free software; see the source for copying conditions. There is NO # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. # GNU Libtool is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2 of the License, or # (at your option) any later version. 
# # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, but # WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU # General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, # or obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # Usage: $progname [OPTION]... [MODE-ARG]... # # Provide generalized library-building support services. # # --config show all configuration variables # --debug enable verbose shell tracing # -n, --dry-run display commands without modifying any files # --features display basic configuration information and exit # --mode=MODE use operation mode MODE # --preserve-dup-deps don't remove duplicate dependency libraries # --quiet, --silent don't print informational messages # --no-quiet, --no-silent # print informational messages (default) # --no-warn don't display warning messages # --tag=TAG use configuration variables from tag TAG # -v, --verbose print more informational messages than default # --no-verbose don't print the extra informational messages # --version print version information # -h, --help, --help-all print short, long, or detailed help message # # MODE must be one of the following: # # clean remove files from the build directory # compile compile a source file into a libtool object # execute automatically set library path, then run a program # finish complete the installation of libtool libraries # install install libraries or executables # link create a library or an executable # uninstall remove libraries from an installed directory # # MODE-ARGS vary depending on the MODE. When passed as first option, # `--mode=MODE' may be abbreviated as `MODE' or a unique abbreviation of that. # Try `$progname --help --mode=MODE' for a more detailed description of MODE. # # When reporting a bug, please describe a test case to reproduce it and # include the following information: # # host-triplet: $host # shell: $SHELL # compiler: $LTCC # compiler flags: $LTCFLAGS # linker: $LD (gnu? $with_gnu_ld) # $progname: (GNU libtool) 2.4.2 Debian-2.4.2-1ubuntu1 # automake: $automake_version # autoconf: $autoconf_version # # Report bugs to . # GNU libtool home page: . # General help using GNU software: . PROGRAM=libtool PACKAGE=libtool VERSION="2.4.2 Debian-2.4.2-1ubuntu1" TIMESTAMP="" package_revision=1.3337 # Be Bourne compatible if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } # NLS nuisances: We save the old values to restore during execute mode. 
lt_user_locale= lt_safe_locale= for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${$lt_var+set}\" = set; then save_$lt_var=\$$lt_var $lt_var=C export $lt_var lt_user_locale=\"$lt_var=\\\$save_\$lt_var; \$lt_user_locale\" lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\" fi" done LC_ALL=C LANGUAGE=C export LANGUAGE LC_ALL $lt_unset CDPATH # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh # is ksh but when the shell is invoked as "sh" and the current value of # the _XPG environment variable is not equal to 1 (one), the special # positional parameter $0, within a function call, is the name of the # function. progpath="$0" : ${CP="cp -f"} test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'} : ${MAKE="make"} : ${MKDIR="mkdir"} : ${MV="mv -f"} : ${RM="rm -f"} : ${SHELL="${CONFIG_SHELL-/bin/sh}"} : ${Xsed="$SED -e 1s/^X//"} # Global variables: EXIT_SUCCESS=0 EXIT_FAILURE=1 EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing. EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake. exit_status=$EXIT_SUCCESS # Make sure IFS has a sensible default lt_nl=' ' IFS=" $lt_nl" dirname="s,/[^/]*$,," basename="s,^.*/,," # func_dirname file append nondir_replacement # Compute the dirname of FILE. If nonempty, add APPEND to the result, # otherwise set result to NONDIR_REPLACEMENT. func_dirname () { func_dirname_result=`$ECHO "${1}" | $SED "$dirname"` if test "X$func_dirname_result" = "X${1}"; then func_dirname_result="${3}" else func_dirname_result="$func_dirname_result${2}" fi } # func_dirname may be replaced by extended shell implementation # func_basename file func_basename () { func_basename_result=`$ECHO "${1}" | $SED "$basename"` } # func_basename may be replaced by extended shell implementation # func_dirname_and_basename file append nondir_replacement # perform func_basename and func_dirname in a single function # call: # dirname: Compute the dirname of FILE. If nonempty, # add APPEND to the result, otherwise set result # to NONDIR_REPLACEMENT. # value returned in "$func_dirname_result" # basename: Compute filename of FILE. # value retuned in "$func_basename_result" # Implementation must be kept synchronized with func_dirname # and func_basename. For efficiency, we do not delegate to # those functions but instead duplicate the functionality here. func_dirname_and_basename () { # Extract subdirectory from the argument. func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"` if test "X$func_dirname_result" = "X${1}"; then func_dirname_result="${3}" else func_dirname_result="$func_dirname_result${2}" fi func_basename_result=`$ECHO "${1}" | $SED -e "$basename"` } # func_dirname_and_basename may be replaced by extended shell implementation # func_stripname prefix suffix name # strip PREFIX and SUFFIX off of NAME. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). # func_strip_suffix prefix name func_stripname () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname may be replaced by extended shell implementation # These SED scripts presuppose an absolute path with a trailing slash. 
pathcar='s,^/\([^/]*\).*$,\1,' pathcdr='s,^/[^/]*,,' removedotparts=':dotsl s@/\./@/@g t dotsl s,/\.$,/,' collapseslashes='s@/\{1,\}@/@g' finalslash='s,/*$,/,' # func_normal_abspath PATH # Remove doubled-up and trailing slashes, "." path components, # and cancel out any ".." path components in PATH after making # it an absolute path. # value returned in "$func_normal_abspath_result" func_normal_abspath () { # Start from root dir and reassemble the path. func_normal_abspath_result= func_normal_abspath_tpath=$1 func_normal_abspath_altnamespace= case $func_normal_abspath_tpath in "") # Empty path, that just means $cwd. func_stripname '' '/' "`pwd`" func_normal_abspath_result=$func_stripname_result return ;; # The next three entries are used to spot a run of precisely # two leading slashes without using negated character classes; # we take advantage of case's first-match behaviour. ///*) # Unusual form of absolute path, do nothing. ;; //*) # Not necessarily an ordinary path; POSIX reserves leading '//' # and for example Cygwin uses it to access remote file shares # over CIFS/SMB, so we conserve a leading double slash if found. func_normal_abspath_altnamespace=/ ;; /*) # Absolute path, do nothing. ;; *) # Relative path, prepend $cwd. func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath ;; esac # Cancel out all the simple stuff to save iterations. We also want # the path to end with a slash for ease of parsing, so make sure # there is one (and only one) here. func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$removedotparts" -e "$collapseslashes" -e "$finalslash"` while :; do # Processed it all yet? if test "$func_normal_abspath_tpath" = / ; then # If we ascended to the root using ".." the result may be empty now. if test -z "$func_normal_abspath_result" ; then func_normal_abspath_result=/ fi break fi func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$pathcar"` func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ -e "$pathcdr"` # Figure out what to do with it case $func_normal_abspath_tcomponent in "") # Trailing empty path component, ignore it. ;; ..) # Parent dir; strip last assembled component from result. func_dirname "$func_normal_abspath_result" func_normal_abspath_result=$func_dirname_result ;; *) # Actual path component, append it. func_normal_abspath_result=$func_normal_abspath_result/$func_normal_abspath_tcomponent ;; esac done # Restore leading double-slash if one was found on entry. func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result } # func_relative_path SRCDIR DSTDIR # generates a relative path from SRCDIR to DSTDIR, with a trailing # slash if non-empty, suitable for immediately appending a filename # without needing to append a separator. 
# value returned in "$func_relative_path_result" func_relative_path () { func_relative_path_result= func_normal_abspath "$1" func_relative_path_tlibdir=$func_normal_abspath_result func_normal_abspath "$2" func_relative_path_tbindir=$func_normal_abspath_result # Ascend the tree starting from libdir while :; do # check if we have found a prefix of bindir case $func_relative_path_tbindir in $func_relative_path_tlibdir) # found an exact match func_relative_path_tcancelled= break ;; $func_relative_path_tlibdir*) # found a matching prefix func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir" func_relative_path_tcancelled=$func_stripname_result if test -z "$func_relative_path_result"; then func_relative_path_result=. fi break ;; *) func_dirname $func_relative_path_tlibdir func_relative_path_tlibdir=${func_dirname_result} if test "x$func_relative_path_tlibdir" = x ; then # Have to descend all the way to the root! func_relative_path_result=../$func_relative_path_result func_relative_path_tcancelled=$func_relative_path_tbindir break fi func_relative_path_result=../$func_relative_path_result ;; esac done # Now calculate path; take care to avoid doubling-up slashes. func_stripname '' '/' "$func_relative_path_result" func_relative_path_result=$func_stripname_result func_stripname '/' '/' "$func_relative_path_tcancelled" if test "x$func_stripname_result" != x ; then func_relative_path_result=${func_relative_path_result}/${func_stripname_result} fi # Normalisation. If bindir is libdir, return empty string, # else relative path ending with a slash; either way, target # file name can be directly appended. if test ! -z "$func_relative_path_result"; then func_stripname './' '' "$func_relative_path_result/" func_relative_path_result=$func_stripname_result fi } # The name of this program: func_dirname_and_basename "$progpath" progname=$func_basename_result # Make sure we have an absolute path for reexecution: case $progpath in [\\/]*|[A-Za-z]:\\*) ;; *[\\/]*) progdir=$func_dirname_result progdir=`cd "$progdir" && pwd` progpath="$progdir/$progname" ;; *) save_IFS="$IFS" IFS=${PATH_SEPARATOR-:} for progdir in $PATH; do IFS="$save_IFS" test -x "$progdir/$progname" && break done IFS="$save_IFS" test -n "$progdir" || progdir=`pwd` progpath="$progdir/$progname" ;; esac # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. Xsed="${SED}"' -e 1s/^X//' sed_quote_subst='s/\([`"$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution that turns a string into a regex matching for the # string literally. sed_make_literal_regex='s,[].[^$\\*\/],\\&,g' # Sed substitution that converts a w32 file name or path # which contains forward slashes, into one that contains # (escaped) backslashes. A very naive implementation. lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' # Re-`\' parameter expansions in output of double_quote_subst that were # `\'-ed in input to the same. If an odd number of `\' preceded a '$' # in input to double_quote_subst, that '$' was protected from expansion. # Since each input `\' is now two `\'s, look for any number of runs of # four `\'s followed by two `\'s and then a '$'. `\' that '$'. 
bs='\\' bs2='\\\\' bs4='\\\\\\\\' dollar='\$' sed_double_backslash="\ s/$bs4/&\\ /g s/^$bs2$dollar/$bs&/ s/\\([^$bs]\\)$bs2$dollar/\\1$bs2$bs$dollar/g s/\n//g" # Standard options: opt_dry_run=false opt_help=false opt_quiet=false opt_verbose=false opt_warning=: # func_echo arg... # Echo program name prefixed message, along with the current mode # name if it has been set yet. func_echo () { $ECHO "$progname: ${opt_mode+$opt_mode: }$*" } # func_verbose arg... # Echo program name prefixed message in verbose mode only. func_verbose () { $opt_verbose && func_echo ${1+"$@"} # A bug in bash halts the script if the last line of a function # fails when set -e is in force, so we need another command to # work around that: : } # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "$*" } # func_error arg... # Echo program name prefixed message to standard error. func_error () { $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2 } # func_warning arg... # Echo program name prefixed warning message to standard error. func_warning () { $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2 # bash bug again: : } # func_fatal_error arg... # Echo program name prefixed message to standard error, and exit. func_fatal_error () { func_error ${1+"$@"} exit $EXIT_FAILURE } # func_fatal_help arg... # Echo program name prefixed message to standard error, followed by # a help hint, and exit. func_fatal_help () { func_error ${1+"$@"} func_fatal_error "$help" } help="Try \`$progname --help' for more information." ## default # func_grep expression filename # Check whether EXPRESSION matches any line of FILENAME, without output. func_grep () { $GREP "$1" "$2" >/dev/null 2>&1 } # func_mkdir_p directory-path # Make sure the entire path to DIRECTORY-PATH is available. func_mkdir_p () { my_directory_path="$1" my_dir_list= if test -n "$my_directory_path" && test "$opt_dry_run" != ":"; then # Protect directory names starting with `-' case $my_directory_path in -*) my_directory_path="./$my_directory_path" ;; esac # While some portion of DIR does not yet exist... while test ! -d "$my_directory_path"; do # ...make a list in topmost first order. Use a colon delimited # list incase some portion of path contains whitespace. my_dir_list="$my_directory_path:$my_dir_list" # If the last portion added has no slash in it, the list is done case $my_directory_path in */*) ;; *) break ;; esac # ...otherwise throw away the child directory and loop my_directory_path=`$ECHO "$my_directory_path" | $SED -e "$dirname"` done my_dir_list=`$ECHO "$my_dir_list" | $SED 's,:*$,,'` save_mkdir_p_IFS="$IFS"; IFS=':' for my_dir in $my_dir_list; do IFS="$save_mkdir_p_IFS" # mkdir can fail with a `File exist' error if two processes # try to create one of the directories concurrently. Don't # stop in that case! $MKDIR "$my_dir" 2>/dev/null || : done IFS="$save_mkdir_p_IFS" # Bail out if we (or some other process) failed to create a directory. test -d "$my_directory_path" || \ func_fatal_error "Failed to create \`$1'" fi } # func_mktempdir [string] # Make a temporary directory that won't clash with other running # libtool processes, and avoids race conditions if possible. If # given, STRING is the basename for that directory. 
func_mktempdir () { my_template="${TMPDIR-/tmp}/${1-$progname}" if test "$opt_dry_run" = ":"; then # Return a directory name, but don't create it in dry-run mode my_tmpdir="${my_template}-$$" else # If mktemp works, use that first and foremost my_tmpdir=`mktemp -d "${my_template}-XXXXXXXX" 2>/dev/null` if test ! -d "$my_tmpdir"; then # Failing that, at least try and use $RANDOM to avoid a race my_tmpdir="${my_template}-${RANDOM-0}$$" save_mktempdir_umask=`umask` umask 0077 $MKDIR "$my_tmpdir" umask $save_mktempdir_umask fi # If we're not in dry-run mode, bomb out on failure test -d "$my_tmpdir" || \ func_fatal_error "cannot create temporary directory \`$my_tmpdir'" fi $ECHO "$my_tmpdir" } # func_quote_for_eval arg # Aesthetically quote ARG to be evaled later. # This function returns two values: FUNC_QUOTE_FOR_EVAL_RESULT # is double-quoted, suitable for a subsequent eval, whereas # FUNC_QUOTE_FOR_EVAL_UNQUOTED_RESULT has merely all characters # which are still active within double quotes backslashified. func_quote_for_eval () { case $1 in *[\\\`\"\$]*) func_quote_for_eval_unquoted_result=`$ECHO "$1" | $SED "$sed_quote_subst"` ;; *) func_quote_for_eval_unquoted_result="$1" ;; esac case $func_quote_for_eval_unquoted_result in # Double-quote args containing shell metacharacters to delay # word splitting, command substitution and and variable # expansion for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") func_quote_for_eval_result="\"$func_quote_for_eval_unquoted_result\"" ;; *) func_quote_for_eval_result="$func_quote_for_eval_unquoted_result" esac } # func_quote_for_expand arg # Aesthetically quote ARG to be evaled later; same as above, # but do not quote variable references. func_quote_for_expand () { case $1 in *[\\\`\"]*) my_arg=`$ECHO "$1" | $SED \ -e "$double_quote_subst" -e "$sed_double_backslash"` ;; *) my_arg="$1" ;; esac case $my_arg in # Double-quote args containing shell metacharacters to delay # word splitting and command substitution for a subsequent eval. # Many Bourne shells cannot handle close brackets correctly # in scan sets, so we specify it separately. *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") my_arg="\"$my_arg\"" ;; esac func_quote_for_expand_result="$my_arg" } # func_show_eval cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. func_show_eval () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$my_cmd" my_status=$? if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_show_eval_locale cmd [fail_exp] # Unless opt_silent is true, then output CMD. Then, if opt_dryrun is # not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP # is given, then evaluate it. Use the saved locale for evaluation. func_show_eval_locale () { my_cmd="$1" my_fail_exp="${2-:}" ${opt_silent-false} || { func_quote_for_expand "$my_cmd" eval "func_echo $func_quote_for_expand_result" } if ${opt_dry_run-false}; then :; else eval "$lt_user_locale $my_cmd" my_status=$? eval "$lt_safe_locale" if test "$my_status" -eq 0; then :; else eval "(exit $my_status); $my_fail_exp" fi fi } # func_tr_sh # Turn $1 into a string suitable for a shell variable name. 
# Result is stored in $func_tr_sh_result. All characters # not in the set a-zA-Z0-9_ are replaced with '_'. Further, # if $1 begins with a digit, a '_' is prepended as well. func_tr_sh () { case $1 in [0-9]* | *[!a-zA-Z0-9_]*) func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'` ;; * ) func_tr_sh_result=$1 ;; esac } # func_version # Echo version message to standard output and exit. func_version () { $opt_debug $SED -n '/(C)/!b go :more /\./!{ N s/\n# / / b more } :go /^# '$PROGRAM' (GNU /,/# warranty; / { s/^# // s/^# *$// s/\((C)\)[ 0-9,-]*\( [1-9][0-9]*\)/\1\2/ p }' < "$progpath" exit $? } # func_usage # Echo short help message to standard output and exit. func_usage () { $opt_debug $SED -n '/^# Usage:/,/^# *.*--help/ { s/^# // s/^# *$// s/\$progname/'$progname'/ p }' < "$progpath" echo $ECHO "run \`$progname --help | more' for full usage" exit $? } # func_help [NOEXIT] # Echo long help message to standard output and exit, # unless 'noexit' is passed as argument. func_help () { $opt_debug $SED -n '/^# Usage:/,/# Report bugs to/ { :print s/^# // s/^# *$// s*\$progname*'$progname'* s*\$host*'"$host"'* s*\$SHELL*'"$SHELL"'* s*\$LTCC*'"$LTCC"'* s*\$LTCFLAGS*'"$LTCFLAGS"'* s*\$LD*'"$LD"'* s/\$with_gnu_ld/'"$with_gnu_ld"'/ s/\$automake_version/'"`(${AUTOMAKE-automake} --version) 2>/dev/null |$SED 1q`"'/ s/\$autoconf_version/'"`(${AUTOCONF-autoconf} --version) 2>/dev/null |$SED 1q`"'/ p d } /^# .* home page:/b print /^# General help using/b print ' < "$progpath" ret=$? if test -z "$1"; then exit $ret fi } # func_missing_arg argname # Echo program name prefixed message to standard error and set global # exit_cmd. func_missing_arg () { $opt_debug func_error "missing argument for $1." exit_cmd=exit } # func_split_short_opt shortopt # Set func_split_short_opt_name and func_split_short_opt_arg shell # variables after splitting SHORTOPT after the 2nd character. func_split_short_opt () { my_sed_short_opt='1s/^\(..\).*$/\1/;q' my_sed_short_rest='1s/^..\(.*\)$/\1/;q' func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"` func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"` } # func_split_short_opt may be replaced by extended shell implementation # func_split_long_opt longopt # Set func_split_long_opt_name and func_split_long_opt_arg shell # variables after splitting LONGOPT at the `=' sign. func_split_long_opt () { my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q' my_sed_long_arg='1s/^--[^=]*=//' func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"` func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"` } # func_split_long_opt may be replaced by extended shell implementation exit_cmd=: magic="%%%MAGIC variable%%%" magic_exe="%%%MAGIC EXE variable%%%" # Global variables. nonopt= preserve_args= lo2o="s/\\.lo\$/.${objext}/" o2lo="s/\\.${objext}\$/.lo/" extracted_archives= extracted_serial=0 # If this variable is set in any of the actions, the command in it # will be execed at the end. This prevents here-documents from being # left over by shells. exec_cmd= # func_append var value # Append VALUE to the end of shell variable VAR. func_append () { eval "${1}=\$${1}\${2}" } # func_append may be replaced by extended shell implementation # func_append_quoted var value # Quote VALUE and append to the end of shell variable VAR, separated # by a space. func_append_quoted () { func_quote_for_eval "${2}" eval "${1}=\$${1}\\ \$func_quote_for_eval_result" } # func_append_quoted may be replaced by extended shell implementation # func_arith arithmetic-term... 
func_arith () { func_arith_result=`expr "${@}"` } # func_arith may be replaced by extended shell implementation # func_len string # STRING may not start with a hyphen. func_len () { func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len` } # func_len may be replaced by extended shell implementation # func_lo2o object func_lo2o () { func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"` } # func_lo2o may be replaced by extended shell implementation # func_xform libobj-or-source func_xform () { func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'` } # func_xform may be replaced by extended shell implementation # func_fatal_configuration arg... # Echo program name prefixed message to standard error, followed by # a configuration failure hint, and exit. func_fatal_configuration () { func_error ${1+"$@"} func_error "See the $PACKAGE documentation for more information." func_fatal_error "Fatal configuration error." } # func_config # Display the configuration for all the tags in this script. func_config () { re_begincf='^# ### BEGIN LIBTOOL' re_endcf='^# ### END LIBTOOL' # Default configuration. $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" # Now print the configurations for the tags. for tagname in $taglist; do $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" done exit $? } # func_features # Display the features supported by this script. func_features () { echo "host: $host" if test "$build_libtool_libs" = yes; then echo "enable shared libraries" else echo "disable shared libraries" fi if test "$build_old_libs" = yes; then echo "enable static libraries" else echo "disable static libraries" fi exit $? } # func_enable_tag tagname # Verify that TAGNAME is valid, and either flag an error and exit, or # enable the TAGNAME tag. We also add TAGNAME to the global $taglist # variable here. func_enable_tag () { # Global variable: tagname="$1" re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" sed_extractcf="/$re_begincf/,/$re_endcf/p" # Validate tagname. case $tagname in *[!-_A-Za-z0-9,/]*) func_fatal_error "invalid tag name: $tagname" ;; esac # Don't test for the "default" C tag, as we know it's # there but not specially marked. case $tagname in CC) ;; *) if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then taglist="$taglist $tagname" # Evaluate the configuration. Be careful to quote the path # and the sed script, to avoid splitting on whitespace, but # also don't use non-portable quotes within backquotes within # quotes we have to do it in 2 steps: extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` eval "$extractedcf" else func_error "ignoring unknown tag $tagname" fi ;; esac } # func_check_version_match # Ensure that we are using m4 macros, and libtool script from the same # release of libtool. func_check_version_match () { if test "$package_revision" != "$macro_revision"; then if test "$VERSION" != "$macro_version"; then if test -z "$macro_version"; then cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from an older release. $progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, but the $progname: definition of this LT_INIT comes from $PACKAGE $macro_version. 
$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION $progname: and run autoconf again. _LT_EOF fi else cat >&2 <<_LT_EOF $progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, $progname: but the definition of this LT_INIT comes from revision $macro_revision. $progname: You should recreate aclocal.m4 with macros from revision $package_revision $progname: of $PACKAGE $VERSION and run autoconf again. _LT_EOF fi exit $EXIT_MISMATCH fi } # Shorthand for --mode=foo, only valid as the first argument case $1 in clean|clea|cle|cl) shift; set dummy --mode clean ${1+"$@"}; shift ;; compile|compil|compi|comp|com|co|c) shift; set dummy --mode compile ${1+"$@"}; shift ;; execute|execut|execu|exec|exe|ex|e) shift; set dummy --mode execute ${1+"$@"}; shift ;; finish|finis|fini|fin|fi|f) shift; set dummy --mode finish ${1+"$@"}; shift ;; install|instal|insta|inst|ins|in|i) shift; set dummy --mode install ${1+"$@"}; shift ;; link|lin|li|l) shift; set dummy --mode link ${1+"$@"}; shift ;; uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) shift; set dummy --mode uninstall ${1+"$@"}; shift ;; esac # Option defaults: opt_debug=: opt_dry_run=false opt_config=false opt_preserve_dup_deps=false opt_features=false opt_finish=false opt_help=false opt_help_all=false opt_silent=: opt_warning=: opt_verbose=: opt_silent=false opt_verbose=false # Parse options once, thoroughly. This comes as soon as possible in the # script to make things like `--version' happen as quickly as we can. { # this just eases exit handling while test $# -gt 0; do opt="$1" shift case $opt in --debug|-x) opt_debug='set -x' func_echo "enabling shell trace mode" $opt_debug ;; --dry-run|--dryrun|-n) opt_dry_run=: ;; --config) opt_config=: func_config ;; --dlopen|-dlopen) optarg="$1" opt_dlopen="${opt_dlopen+$opt_dlopen }$optarg" shift ;; --preserve-dup-deps) opt_preserve_dup_deps=: ;; --features) opt_features=: func_features ;; --finish) opt_finish=: set dummy --mode finish ${1+"$@"}; shift ;; --help) opt_help=: ;; --help-all) opt_help_all=: opt_help=': help-all' ;; --mode) test $# = 0 && func_missing_arg $opt && break optarg="$1" opt_mode="$optarg" case $optarg in # Valid mode arguments: clean|compile|execute|finish|install|link|relink|uninstall) ;; # Catch anything else as an error *) func_error "invalid argument for $opt" exit_cmd=exit break ;; esac shift ;; --no-silent|--no-quiet) opt_silent=false func_append preserve_args " $opt" ;; --no-warning|--no-warn) opt_warning=false func_append preserve_args " $opt" ;; --no-verbose) opt_verbose=false func_append preserve_args " $opt" ;; --silent|--quiet) opt_silent=: func_append preserve_args " $opt" opt_verbose=false ;; --verbose|-v) opt_verbose=: func_append preserve_args " $opt" opt_silent=false ;; --tag) test $# = 0 && func_missing_arg $opt && break optarg="$1" opt_tag="$optarg" func_append preserve_args " $opt $optarg" func_enable_tag "$optarg" shift ;; -\?|-h) func_usage ;; --help) func_help ;; --version) func_version ;; # Separate optargs to long options: --*=*) func_split_long_opt "$opt" set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"} shift ;; # Separate non-argument short options: -\?*|-h*|-n*|-v*) func_split_short_opt "$opt" set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"} shift ;; --) break ;; -*) func_fatal_help "unrecognized option \`$opt'" ;; *) set dummy "$opt" ${1+"$@"}; shift; break ;; esac done # Validate options: # save first non-option argument if test "$#" -gt 0; 
then nonopt="$opt" shift fi # preserve --debug test "$opt_debug" = : || func_append preserve_args " --debug" case $host in *cygwin* | *mingw* | *pw32* | *cegcc*) # don't eliminate duplications in $postdeps and $predeps opt_duplicate_compiler_generated_deps=: ;; *) opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps ;; esac $opt_help || { # Sanity checks first: func_check_version_match if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then func_fatal_configuration "not configured to build any kind of library" fi # Darwin sucks eval std_shrext=\"$shrext_cmds\" # Only execute mode is allowed to have -dlopen flags. if test -n "$opt_dlopen" && test "$opt_mode" != execute; then func_error "unrecognized option \`-dlopen'" $ECHO "$help" 1>&2 exit $EXIT_FAILURE fi # Change the help message to a mode-specific one. generic_help="$help" help="Try \`$progname --help --mode=$opt_mode' for more information." } # Bail if the options were screwed $exit_cmd $EXIT_FAILURE } ## ----------- ## ## Main. ## ## ----------- ## # func_lalib_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_lalib_p () { test -f "$1" && $SED -e 4q "$1" 2>/dev/null \ | $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 } # func_lalib_unsafe_p file # True iff FILE is a libtool `.la' library or `.lo' object file. # This function implements the same check as func_lalib_p without # resorting to external programs. To this end, it redirects stdin and # closes it afterwards, without saving the original file descriptor. # As a safety measure, use it only where a negative result would be # fatal anyway. Works if `file' does not exist. func_lalib_unsafe_p () { lalib_p=no if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then for lalib_p_l in 1 2 3 4 do read lalib_p_line case "$lalib_p_line" in \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; esac done exec 0<&5 5<&- fi test "$lalib_p" = yes } # func_ltwrapper_script_p file # True iff FILE is a libtool wrapper script # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_script_p () { func_lalib_p "$1" } # func_ltwrapper_executable_p file # True iff FILE is a libtool wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_executable_p () { func_ltwrapper_exec_suffix= case $1 in *.exe) ;; *) func_ltwrapper_exec_suffix=.exe ;; esac $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 } # func_ltwrapper_scriptname file # Assumes file is an ltwrapper_executable # uses $file to determine the appropriate filename for a # temporary ltwrapper_script. func_ltwrapper_scriptname () { func_dirname_and_basename "$1" "" "." func_stripname '' '.exe' "$func_basename_result" func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper" } # func_ltwrapper_p file # True iff FILE is a libtool wrapper script or wrapper executable # This function is only a basic sanity check; it will hardly flush out # determined imposters. func_ltwrapper_p () { func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" } # func_execute_cmds commands fail_cmd # Execute tilde-delimited COMMANDS. # If FAIL_CMD is given, eval that upon failure. # FAIL_CMD may read-access the current command in variable CMD! 
func_execute_cmds () { $opt_debug save_ifs=$IFS; IFS='~' for cmd in $1; do IFS=$save_ifs eval cmd=\"$cmd\" func_show_eval "$cmd" "${2-:}" done IFS=$save_ifs } # func_source file # Source FILE, adding directory component if necessary. # Note that it is not necessary on cygwin/mingw to append a dot to # FILE even if both FILE and FILE.exe exist: automatic-append-.exe # behavior happens only for exec(3), not for open(2)! Also, sourcing # `FILE.' does not work on cygwin managed mounts. func_source () { $opt_debug case $1 in */* | *\\*) . "$1" ;; *) . "./$1" ;; esac } # func_resolve_sysroot PATH # Replace a leading = in PATH with a sysroot. Store the result into # func_resolve_sysroot_result func_resolve_sysroot () { func_resolve_sysroot_result=$1 case $func_resolve_sysroot_result in =*) func_stripname '=' '' "$func_resolve_sysroot_result" func_resolve_sysroot_result=$lt_sysroot$func_stripname_result ;; esac } # func_replace_sysroot PATH # If PATH begins with the sysroot, replace it with = and # store the result into func_replace_sysroot_result. func_replace_sysroot () { case "$lt_sysroot:$1" in ?*:"$lt_sysroot"*) func_stripname "$lt_sysroot" '' "$1" func_replace_sysroot_result="=$func_stripname_result" ;; *) # Including no sysroot. func_replace_sysroot_result=$1 ;; esac } # func_infer_tag arg # Infer tagged configuration to use if any are available and # if one wasn't chosen via the "--tag" command line option. # Only attempt this if the compiler in the base compile # command doesn't match the default compiler. # arg is usually of the form 'gcc ...' func_infer_tag () { $opt_debug if test -n "$available_tags" && test -z "$tagname"; then CC_quoted= for arg in $CC; do func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case $@ in # Blanks in the command may have been stripped by the calling shell, # but not from the CC environment variable when configure was run. " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;; # Blanks at the start of $base_compile will cause this to fail # if we don't check for them as well. *) for z in $available_tags; do if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then # Evaluate the configuration. eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`" CC_quoted= for arg in $CC; do # Double-quote args containing other shell metacharacters. func_append_quoted CC_quoted "$arg" done CC_expanded=`func_echo_all $CC` CC_quoted_expanded=`func_echo_all $CC_quoted` case "$@ " in " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) # The compiler in the base compile command matches # the one in the tagged configuration. # Assume this is the tagged configuration we want. tagname=$z break ;; esac fi done # If $tagname still isn't set, then no tagged configuration # was found and let the user know that the "--tag" command # line option must be used. if test -z "$tagname"; then func_echo "unable to infer tagged configuration" func_fatal_error "specify a tag with \`--tag'" # else # func_verbose "using $tagname tagged configuration" fi ;; esac fi } # func_write_libtool_object output_name pic_name nonpic_name # Create a libtool object file (analogous to a ".la" file), # but don't create it if we're doing a dry run. 
func_write_libtool_object () { write_libobj=${1} if test "$build_libtool_libs" = yes; then write_lobj=\'${2}\' else write_lobj=none fi if test "$build_old_libs" = yes; then write_oldobj=\'${3}\' else write_oldobj=none fi $opt_dry_run || { cat >${write_libobj}T </dev/null` if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" | $SED -e "$lt_sed_naive_backslashify"` else func_convert_core_file_wine_to_w32_result= fi fi } # end: func_convert_core_file_wine_to_w32 # func_convert_core_path_wine_to_w32 ARG # Helper function used by path conversion functions when $build is *nix, and # $host is mingw, cygwin, or some other w32 environment. Relies on a correctly # configured wine environment available, with the winepath program in $build's # $PATH. Assumes ARG has no leading or trailing path separator characters. # # ARG is path to be converted from $build format to win32. # Result is available in $func_convert_core_path_wine_to_w32_result. # Unconvertible file (directory) names in ARG are skipped; if no directory names # are convertible, then the result may be empty. func_convert_core_path_wine_to_w32 () { $opt_debug # unfortunately, winepath doesn't convert paths, only file names func_convert_core_path_wine_to_w32_result="" if test -n "$1"; then oldIFS=$IFS IFS=: for func_convert_core_path_wine_to_w32_f in $1; do IFS=$oldIFS func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f" if test -n "$func_convert_core_file_wine_to_w32_result" ; then if test -z "$func_convert_core_path_wine_to_w32_result"; then func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result" else func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result" fi fi done IFS=$oldIFS fi } # end: func_convert_core_path_wine_to_w32 # func_cygpath ARGS... # Wrapper around calling the cygpath program via LT_CYGPATH. This is used when # when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2) # $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or # (2), returns the Cygwin file name or path in func_cygpath_result (input # file name or path is assumed to be in w32 format, as previously converted # from $build's *nix or MSYS format). In case (3), returns the w32 file name # or path in func_cygpath_result (input file name or path is assumed to be in # Cygwin format). Returns an empty string on error. # # ARGS are passed to cygpath, with the last one being the file name or path to # be converted. # # Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH # environment variable; do not put it in $PATH. func_cygpath () { $opt_debug if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null` if test "$?" -ne 0; then # on failure, ensure result is empty func_cygpath_result= fi else func_cygpath_result= func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'" fi } #end: func_cygpath # func_convert_core_msys_to_w32 ARG # Convert file name or path ARG from MSYS format to w32 format. Return # result in func_convert_core_msys_to_w32_result. 
func_convert_core_msys_to_w32 () { $opt_debug # awkward: cmd appends spaces to result func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null | $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"` } #end: func_convert_core_msys_to_w32 # func_convert_file_check ARG1 ARG2 # Verify that ARG1 (a file name in $build format) was converted to $host # format in ARG2. Otherwise, emit an error message, but continue (resetting # func_to_host_file_result to ARG1). func_convert_file_check () { $opt_debug if test -z "$2" && test -n "$1" ; then func_error "Could not determine host file name corresponding to" func_error " \`$1'" func_error "Continuing, but uninstalled executables may not work." # Fallback: func_to_host_file_result="$1" fi } # end func_convert_file_check # func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH # Verify that FROM_PATH (a path in $build format) was converted to $host # format in TO_PATH. Otherwise, emit an error message, but continue, resetting # func_to_host_file_result to a simplistic fallback value (see below). func_convert_path_check () { $opt_debug if test -z "$4" && test -n "$3"; then func_error "Could not determine the host path corresponding to" func_error " \`$3'" func_error "Continuing, but uninstalled executables may not work." # Fallback. This is a deliberately simplistic "conversion" and # should not be "improved". See libtool.info. if test "x$1" != "x$2"; then lt_replace_pathsep_chars="s|$1|$2|g" func_to_host_path_result=`echo "$3" | $SED -e "$lt_replace_pathsep_chars"` else func_to_host_path_result="$3" fi fi } # end func_convert_path_check # func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG # Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT # and appending REPL if ORIG matches BACKPAT. func_convert_path_front_back_pathsep () { $opt_debug case $4 in $1 ) func_to_host_path_result="$3$func_to_host_path_result" ;; esac case $4 in $2 ) func_append func_to_host_path_result "$3" ;; esac } # end func_convert_path_front_back_pathsep ################################################## # $build to $host FILE NAME CONVERSION FUNCTIONS # ################################################## # invoked via `$to_host_file_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # Result will be available in $func_to_host_file_result. # func_to_host_file ARG # Converts the file name ARG from $build format to $host format. Return result # in func_to_host_file_result. func_to_host_file () { $opt_debug $to_host_file_cmd "$1" } # end func_to_host_file # func_to_tool_file ARG LAZY # converts the file name ARG from $build format to toolchain format. Return # result in func_to_tool_file_result. If the conversion in use is listed # in (the comma separated) LAZY, no conversion takes place. func_to_tool_file () { $opt_debug case ,$2, in *,"$to_tool_file_cmd",*) func_to_tool_file_result=$1 ;; *) $to_tool_file_cmd "$1" func_to_tool_file_result=$func_to_host_file_result ;; esac } # end func_to_tool_file # func_convert_file_noop ARG # Copy ARG to func_to_host_file_result. func_convert_file_noop () { func_to_host_file_result="$1" } # end func_convert_file_noop # func_convert_file_msys_to_w32 ARG # Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_file_result. 
func_convert_file_msys_to_w32 () { $opt_debug func_to_host_file_result="$1" if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_to_host_file_result="$func_convert_core_msys_to_w32_result" fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_w32 # func_convert_file_cygwin_to_w32 ARG # Convert file name ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_file_cygwin_to_w32 () { $opt_debug func_to_host_file_result="$1" if test -n "$1"; then # because $build is cygwin, we call "the" cygpath in $PATH; no need to use # LT_CYGPATH in this case. func_to_host_file_result=`cygpath -m "$1"` fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_cygwin_to_w32 # func_convert_file_nix_to_w32 ARG # Convert file name ARG from *nix to w32 format. Requires a wine environment # and a working winepath. Returns result in func_to_host_file_result. func_convert_file_nix_to_w32 () { $opt_debug func_to_host_file_result="$1" if test -n "$1"; then func_convert_core_file_wine_to_w32 "$1" func_to_host_file_result="$func_convert_core_file_wine_to_w32_result" fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_w32 # func_convert_file_msys_to_cygwin ARG # Convert file name ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. func_convert_file_msys_to_cygwin () { $opt_debug func_to_host_file_result="$1" if test -n "$1"; then func_convert_core_msys_to_w32 "$1" func_cygpath -u "$func_convert_core_msys_to_w32_result" func_to_host_file_result="$func_cygpath_result" fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_msys_to_cygwin # func_convert_file_nix_to_cygwin ARG # Convert file name ARG from *nix to Cygwin format. Requires Cygwin installed # in a wine environment, working winepath, and LT_CYGPATH set. Returns result # in func_to_host_file_result. func_convert_file_nix_to_cygwin () { $opt_debug func_to_host_file_result="$1" if test -n "$1"; then # convert from *nix to w32, then use cygpath to convert from w32 to cygwin. func_convert_core_file_wine_to_w32 "$1" func_cygpath -u "$func_convert_core_file_wine_to_w32_result" func_to_host_file_result="$func_cygpath_result" fi func_convert_file_check "$1" "$func_to_host_file_result" } # end func_convert_file_nix_to_cygwin ############################################# # $build to $host PATH CONVERSION FUNCTIONS # ############################################# # invoked via `$to_host_path_cmd ARG' # # In each case, ARG is the path to be converted from $build to $host format. # The result will be available in $func_to_host_path_result. # # Path separators are also converted from $build format to $host format. If # ARG begins or ends with a path separator character, it is preserved (but # converted to $host format) on output. # # All path conversion functions are named using the following convention: # file name conversion function : func_convert_file_X_to_Y () # path conversion function : func_convert_path_X_to_Y () # where, for any given $build/$host combination the 'X_to_Y' value is the # same. If conversion functions are added for new $build/$host combinations, # the two new functions must follow this pattern, or func_init_to_host_path_cmd # will break. # func_init_to_host_path_cmd # Ensures that function "pointer" variable $to_host_path_cmd is set to the # appropriate value, based on the value of $to_host_file_cmd. 
to_host_path_cmd= func_init_to_host_path_cmd () { $opt_debug if test -z "$to_host_path_cmd"; then func_stripname 'func_convert_file_' '' "$to_host_file_cmd" to_host_path_cmd="func_convert_path_${func_stripname_result}" fi } # func_to_host_path ARG # Converts the path ARG from $build format to $host format. Return result # in func_to_host_path_result. func_to_host_path () { $opt_debug func_init_to_host_path_cmd $to_host_path_cmd "$1" } # end func_to_host_path # func_convert_path_noop ARG # Copy ARG to func_to_host_path_result. func_convert_path_noop () { func_to_host_path_result="$1" } # end func_convert_path_noop # func_convert_path_msys_to_w32 ARG # Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic # conversion to w32 is not available inside the cwrapper. Returns result in # func_to_host_path_result. func_convert_path_msys_to_w32 () { $opt_debug func_to_host_path_result="$1" if test -n "$1"; then # Remove leading and trailing path separator characters from ARG. MSYS # behavior is inconsistent here; cygpath turns them into '.;' and ';.'; # and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result="$func_convert_core_msys_to_w32_result" func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_msys_to_w32 # func_convert_path_cygwin_to_w32 ARG # Convert path ARG from Cygwin to w32 format. Returns result in # func_to_host_file_result. func_convert_path_cygwin_to_w32 () { $opt_debug func_to_host_path_result="$1" if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"` func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_cygwin_to_w32 # func_convert_path_nix_to_w32 ARG # Convert path ARG from *nix to w32 format. Requires a wine environment and # a working winepath. Returns result in func_to_host_file_result. func_convert_path_nix_to_w32 () { $opt_debug func_to_host_path_result="$1" if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_to_host_path_result="$func_convert_core_path_wine_to_w32_result" func_convert_path_check : ";" \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" fi } # end func_convert_path_nix_to_w32 # func_convert_path_msys_to_cygwin ARG # Convert path ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. # Returns result in func_to_host_file_result. 
func_convert_path_msys_to_cygwin () { $opt_debug func_to_host_path_result="$1" if test -n "$1"; then # See func_convert_path_msys_to_w32: func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_msys_to_w32_result" func_to_host_path_result="$func_cygpath_result" func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_msys_to_cygwin # func_convert_path_nix_to_cygwin ARG # Convert path ARG from *nix to Cygwin format. Requires Cygwin installed in a # a wine environment, working winepath, and LT_CYGPATH set. Returns result in # func_to_host_file_result. func_convert_path_nix_to_cygwin () { $opt_debug func_to_host_path_result="$1" if test -n "$1"; then # Remove leading and trailing path separator characters from # ARG. msys behavior is inconsistent here, cygpath turns them # into '.;' and ';.', and winepath ignores them completely. func_stripname : : "$1" func_to_host_path_tmp1=$func_stripname_result func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result" func_to_host_path_result="$func_cygpath_result" func_convert_path_check : : \ "$func_to_host_path_tmp1" "$func_to_host_path_result" func_convert_path_front_back_pathsep ":*" "*:" : "$1" fi } # end func_convert_path_nix_to_cygwin # func_mode_compile arg... func_mode_compile () { $opt_debug # Get the compilation command and the source file. base_compile= srcfile="$nonopt" # always keep a non-empty value in "srcfile" suppress_opt=yes suppress_output= arg_mode=normal libobj= later= pie_flag= for arg do case $arg_mode in arg ) # do not "continue". Instead, add this to base_compile lastarg="$arg" arg_mode=normal ;; target ) libobj="$arg" arg_mode=normal continue ;; normal ) # Accept any command-line options. case $arg in -o) test -n "$libobj" && \ func_fatal_error "you cannot specify \`-o' more than once" arg_mode=target continue ;; -pie | -fpie | -fPIE) func_append pie_flag " $arg" continue ;; -shared | -static | -prefer-pic | -prefer-non-pic) func_append later " $arg" continue ;; -no-suppress) suppress_opt=no continue ;; -Xcompiler) arg_mode=arg # the next one goes into the "base_compile" arg list continue # The current "srcfile" will either be retained or ;; # replaced later. I would guess that would be a bug. -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result lastarg= save_ifs="$IFS"; IFS=',' for arg in $args; do IFS="$save_ifs" func_append_quoted lastarg "$arg" done IFS="$save_ifs" func_stripname ' ' '' "$lastarg" lastarg=$func_stripname_result # Add the arguments to base_compile. func_append base_compile " $lastarg" continue ;; *) # Accept the current argument as the source file. # The previous "srcfile" becomes the current argument. # lastarg="$srcfile" srcfile="$arg" ;; esac # case $arg ;; esac # case $arg_mode # Aesthetically quote the previous argument. func_append_quoted base_compile "$lastarg" done # for arg case $arg_mode in arg) func_fatal_error "you must specify an argument for -Xcompile" ;; target) func_fatal_error "you must specify a target with \`-o'" ;; *) # Get the name of the library object. test -z "$libobj" && { func_basename "$srcfile" libobj="$func_basename_result" } ;; esac # Recognize several different file suffixes. 
# If the user specifies -o file.o, it is replaced with file.lo case $libobj in *.[cCFSifmso] | \ *.ada | *.adb | *.ads | *.asm | \ *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \ *.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup) func_xform "$libobj" libobj=$func_xform_result ;; esac case $libobj in *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;; *) func_fatal_error "cannot determine name of library object from \`$libobj'" ;; esac func_infer_tag $base_compile for arg in $later; do case $arg in -shared) test "$build_libtool_libs" != yes && \ func_fatal_configuration "can not build a shared library" build_old_libs=no continue ;; -static) build_libtool_libs=no build_old_libs=yes continue ;; -prefer-pic) pic_mode=yes continue ;; -prefer-non-pic) pic_mode=no continue ;; esac done func_quote_for_eval "$libobj" test "X$libobj" != "X$func_quote_for_eval_result" \ && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \ && func_warning "libobj name \`$libobj' may not contain shell special characters." func_dirname_and_basename "$obj" "/" "" objname="$func_basename_result" xdir="$func_dirname_result" lobj=${xdir}$objdir/$objname test -z "$base_compile" && \ func_fatal_help "you must specify a compilation command" # Delete any leftover library objects. if test "$build_old_libs" = yes; then removelist="$obj $lobj $libobj ${libobj}T" else removelist="$lobj $libobj ${libobj}T" fi # On Cygwin there's no "real" PIC flag so we must build both object types case $host_os in cygwin* | mingw* | pw32* | os2* | cegcc*) pic_mode=default ;; esac if test "$pic_mode" = no && test "$deplibs_check_method" != pass_all; then # non-PIC code in shared libraries is not supported pic_mode=default fi # Calculate the filename of the output object if compiler does # not support -o with -c if test "$compiler_c_o" = no; then output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.${objext} lockfile="$output_obj.lock" else output_obj= need_locks=no lockfile= fi # Lock this critical section if it is needed # We use this script file to make the link, it avoids creating a new file if test "$need_locks" = yes; then until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done elif test "$need_locks" = warn; then if test -f "$lockfile"; then $ECHO "\ *** ERROR, $lockfile exists and contains: `cat $lockfile 2>/dev/null` This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi func_append removelist " $output_obj" $ECHO "$srcfile" > "$lockfile" fi $opt_dry_run || $RM $removelist func_append removelist " $lockfile" trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15 func_to_tool_file "$srcfile" func_convert_file_msys_to_w32 srcfile=$func_to_tool_file_result func_quote_for_eval "$srcfile" qsrcfile=$func_quote_for_eval_result # Only build a PIC object if we are building libtool libraries. if test "$build_libtool_libs" = yes; then # Without this assignment, base_compile gets emptied. 
fbsd_hideous_sh_bug=$base_compile if test "$pic_mode" != no; then command="$base_compile $qsrcfile $pic_flag" else # Don't build PIC code command="$base_compile $qsrcfile" fi func_mkdir_p "$xdir$objdir" if test -z "$output_obj"; then # Place PIC objects in $objdir func_append command " -o $lobj" fi func_show_eval_locale "$command" \ 'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE' if test "$need_locks" = warn && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed, then go on to compile the next one if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then func_show_eval '$MV "$output_obj" "$lobj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi # Allow error messages only from the first compilation. if test "$suppress_opt" = yes; then suppress_output=' >/dev/null 2>&1' fi fi # Only build a position-dependent object if we build old libraries. if test "$build_old_libs" = yes; then if test "$pic_mode" != yes; then # Don't build PIC code command="$base_compile $qsrcfile$pie_flag" else command="$base_compile $qsrcfile $pic_flag" fi if test "$compiler_c_o" = yes; then func_append command " -o $obj" fi # Suppress compiler output if we already did a PIC compilation. func_append command "$suppress_output" func_show_eval_locale "$command" \ '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' if test "$need_locks" = warn && test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then $ECHO "\ *** ERROR, $lockfile contains: `cat $lockfile 2>/dev/null` but it should contain: $srcfile This indicates that another process is trying to use the same temporary object file, and libtool could not work around it because your compiler does not support \`-c' and \`-o' together. If you repeat this compilation, it may succeed, by chance, but you had better avoid parallel builds (make -j) in this platform, or get a better compiler." $opt_dry_run || $RM $removelist exit $EXIT_FAILURE fi # Just move the object if needed if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then func_show_eval '$MV "$output_obj" "$obj"' \ 'error=$?; $opt_dry_run || $RM $removelist; exit $error' fi fi $opt_dry_run || { func_write_libtool_object "$libobj" "$objdir/$objname" "$objname" # Unlock the critical section if it was locked if test "$need_locks" != no; then removelist=$lockfile $RM "$lockfile" fi } exit $EXIT_SUCCESS } $opt_help || { test "$opt_mode" = compile && func_mode_compile ${1+"$@"} } func_mode_help () { # We need to display help for each of the modes. case $opt_mode in "") # Generic help is extracted from the usage comments # at the start of this file. func_help ;; clean) $ECHO \ "Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE... Remove files from the build directory. RM is the name of the program to use to delete files associated with each FILE (typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed to RM. If FILE is a libtool library, object or program, all the files associated with it are deleted. 
Otherwise, only FILE itself is deleted using RM." ;; compile) $ECHO \ "Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE Compile a source file into a libtool library object. This mode accepts the following additional options: -o OUTPUT-FILE set the output file name to OUTPUT-FILE -no-suppress do not suppress compiler output for multiple passes -prefer-pic try to build PIC objects only -prefer-non-pic try to build non-PIC objects only -shared do not build a \`.o' file suitable for static linking -static only build a \`.o' file suitable for static linking -Wc,FLAG pass FLAG directly to the compiler COMPILE-COMMAND is a command to be used in creating a \`standard' object file from the given SOURCEFILE. The output file name is determined by removing the directory component from SOURCEFILE, then substituting the C source code suffix \`.c' with the library object suffix, \`.lo'." ;; execute) $ECHO \ "Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]... Automatically set library path, then run a program. This mode accepts the following additional options: -dlopen FILE add the directory containing FILE to the library path This mode sets the library path environment variable according to \`-dlopen' flags. If any of the ARGS are libtool executable wrappers, then they are translated into their corresponding uninstalled binary, and any of their required library directories are added to the library path. Then, COMMAND is executed, with ARGS as arguments." ;; finish) $ECHO \ "Usage: $progname [OPTION]... --mode=finish [LIBDIR]... Complete the installation of libtool libraries. Each LIBDIR is a directory that contains libtool libraries. The commands that this mode executes may require superuser privileges. Use the \`--dry-run' option if you just want to see what would be executed." ;; install) $ECHO \ "Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND... Install executables or libraries. INSTALL-COMMAND is the installation command. The first component should be either the \`install' or \`cp' program. The following components of INSTALL-COMMAND are treated specially: -inst-prefix-dir PREFIX-DIR Use PREFIX-DIR as a staging area for installation The rest of the components are interpreted as arguments to that command (only BSD-compatible install options are recognized)." ;; link) $ECHO \ "Usage: $progname [OPTION]... --mode=link LINK-COMMAND... Link object files or libraries together to form another library, or to create an executable program. LINK-COMMAND is a command using the C compiler that you would use to create a program from several object files. 
The following components of LINK-COMMAND are treated specially: -all-static do not do any dynamic linking at all -avoid-version do not add a version suffix if possible -bindir BINDIR specify path to binaries directory (for systems where libraries must be found in the PATH setting at runtime) -dlopen FILE \`-dlpreopen' FILE if it cannot be dlopened at runtime -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3) -export-symbols SYMFILE try to export only the symbols listed in SYMFILE -export-symbols-regex REGEX try to export only the symbols matching REGEX -LLIBDIR search LIBDIR for required installed libraries -lNAME OUTPUT-FILE requires the installed library libNAME -module build a library that can dlopened -no-fast-install disable the fast-install mode -no-install link a not-installable executable -no-undefined declare that a library does not refer to external symbols -o OUTPUT-FILE create OUTPUT-FILE from the specified objects -objectlist FILE Use a list of object files found in FILE to specify objects -precious-files-regex REGEX don't remove output files matching REGEX -release RELEASE specify package release information -rpath LIBDIR the created library will eventually be installed in LIBDIR -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries -shared only do dynamic linking of libtool libraries -shrext SUFFIX override the standard shared library file extension -static do not do any dynamic linking of uninstalled libtool libraries -static-libtool-libs do not do any dynamic linking of libtool libraries -version-info CURRENT[:REVISION[:AGE]] specify library version info [each variable defaults to 0] -weak LIBNAME declare that the target provides the LIBNAME interface -Wc,FLAG -Xcompiler FLAG pass linker-specific FLAG directly to the compiler -Wl,FLAG -Xlinker FLAG pass linker-specific FLAG directly to the linker -XCClinker FLAG pass link-specific FLAG to the compiler driver (CC) All other options (arguments beginning with \`-') are ignored. Every other argument is treated as a filename. Files ending in \`.la' are treated as uninstalled libtool libraries, other files are standard or library object files. If the OUTPUT-FILE ends in \`.la', then a libtool library is created, only library objects (\`.lo' files) may be specified, and \`-rpath' is required, except when creating a convenience library. If OUTPUT-FILE ends in \`.a' or \`.lib', then a standard library is created using \`ar' and \`ranlib', or on Windows using \`lib'. If OUTPUT-FILE ends in \`.lo' or \`.${objext}', then a reloadable object file is created, otherwise an executable program is created." ;; uninstall) $ECHO \ "Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE... Remove libraries from an installation directory. RM is the name of the program to use to delete files associated with each FILE (typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed to RM. If FILE is a libtool library, all the files associated with it are deleted. Otherwise, only FILE itself is deleted using RM." ;; *) func_fatal_help "invalid operation mode \`$opt_mode'" ;; esac echo $ECHO "Try \`$progname --help' for more information about other modes." 
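# (Illustrative example, not part of the original script.)  Assuming the
# generated `libtool` wrapper and a hypothetical library libfoo, the link
# mode documented above is typically driven like this:
#
#   libtool --mode=link gcc -o libfoo.la foo.lo bar.lo \
#       -rpath /usr/local/lib -version-info 1:0:0
#
# Only .lo objects may go into a .la target, and -rpath is required unless
# a convenience library is being built, exactly as the help text states.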
} # Now that we've collected a possible --mode arg, show help if necessary if $opt_help; then if test "$opt_help" = :; then func_mode_help else { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do func_mode_help done } | sed -n '1p; 2,$s/^Usage:/ or: /p' { func_help noexit for opt_mode in compile link execute install finish uninstall clean; do echo func_mode_help done } | sed '1d /^When reporting/,/^Report/{ H d } $x /information about other modes/d /more detailed .*MODE/d s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/' fi exit $? fi # func_mode_execute arg... func_mode_execute () { $opt_debug # The first argument is the command name. cmd="$nonopt" test -z "$cmd" && \ func_fatal_help "you must specify a COMMAND" # Handle -dlopen flags immediately. for file in $opt_dlopen; do test -f "$file" \ || func_fatal_help "\`$file' is not a file" dir= case $file in *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "\`$lib' is not a valid libtool archive" # Read the libtool library. dlname= library_names= func_source "$file" # Skip this library if it cannot be dlopened. if test -z "$dlname"; then # Warn if it was a shared library. test -n "$library_names" && \ func_warning "\`$file' was not linked with \`-export-dynamic'" continue fi func_dirname "$file" "" "." dir="$func_dirname_result" if test -f "$dir/$objdir/$dlname"; then func_append dir "/$objdir" else if test ! -f "$dir/$dlname"; then func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'" fi fi ;; *.lo) # Just add the directory containing the .lo file. func_dirname "$file" "" "." dir="$func_dirname_result" ;; *) func_warning "\`-dlopen' is ignored for non-libtool libraries and objects" continue ;; esac # Get the absolute pathname. absdir=`cd "$dir" && pwd` test -n "$absdir" && dir="$absdir" # Now add the directory to shlibpath_var. if eval "test -z \"\$$shlibpath_var\""; then eval "$shlibpath_var=\"\$dir\"" else eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" fi done # This variable tells wrapper scripts just to set shlibpath_var # rather than running their programs. libtool_execute_magic="$magic" # Check if any of the arguments is a wrapper script. args= for file do case $file in -* | *.la | *.lo ) ;; *) # Do a test to see if this is really a libtool program. if func_ltwrapper_script_p "$file"; then func_source "$file" # Transform arg to wrapped name. file="$progdir/$program" elif func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" func_source "$func_ltwrapper_scriptname_result" # Transform arg to wrapped name. file="$progdir/$program" fi ;; esac # Quote arguments (to preserve shell metacharacters). func_append_quoted args "$file" done if test "X$opt_dry_run" = Xfalse; then if test -n "$shlibpath_var"; then # Export the shlibpath_var. eval "export $shlibpath_var" fi # Restore saved environment variables for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES do eval "if test \"\${save_$lt_var+set}\" = set; then $lt_var=\$save_$lt_var; export $lt_var else $lt_unset $lt_var fi" done # Now prepare to actually exec the command. exec_cmd="\$cmd$args" else # Display what would be done. if test -n "$shlibpath_var"; then eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" echo "export $shlibpath_var" fi $ECHO "$cmd$args" exit $EXIT_SUCCESS fi } test "$opt_mode" = execute && func_mode_execute ${1+"$@"} # func_mode_finish arg... 
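# (Illustrative note, not part of the original script.)  Execute mode, as
# implemented in func_mode_execute above, is typically used to run or debug
# an uninstalled program against its uninstalled libraries, e.g. for a
# hypothetical ./main linked against libfoo.la:
#
#   libtool --mode=execute -dlopen libfoo.la gdb ./main
#
# The -dlopen argument only adds the directory holding libfoo's shared
# object to $shlibpath_var before the command is run.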
func_mode_finish () { $opt_debug libs= libdirs= admincmds= for opt in "$nonopt" ${1+"$@"} do if test -d "$opt"; then func_append libdirs " $opt" elif test -f "$opt"; then if func_lalib_unsafe_p "$opt"; then func_append libs " $opt" else func_warning "\`$opt' is not a valid libtool archive" fi else func_fatal_error "invalid argument \`$opt'" fi done if test -n "$libs"; then if test -n "$lt_sysroot"; then sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"` sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;" else sysroot_cmd= fi # Remove sysroot references if $opt_dry_run; then for lib in $libs; do echo "removing references to $lt_sysroot and \`=' prefixes from $lib" done else tmpdir=`func_mktempdir` for lib in $libs; do sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \ > $tmpdir/tmp-la mv -f $tmpdir/tmp-la $lib done ${RM}r "$tmpdir" fi fi if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then for libdir in $libdirs; do if test -n "$finish_cmds"; then # Do each command in the finish commands. func_execute_cmds "$finish_cmds" 'admincmds="$admincmds '"$cmd"'"' fi if test -n "$finish_eval"; then # Do the single finish_eval. eval cmds=\"$finish_eval\" $opt_dry_run || eval "$cmds" || func_append admincmds " $cmds" fi done fi # Exit here if they wanted silent mode. $opt_silent && exit $EXIT_SUCCESS if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then echo "----------------------------------------------------------------------" echo "Libraries have been installed in:" for libdir in $libdirs; do $ECHO " $libdir" done echo echo "If you ever happen to want to link against installed libraries" echo "in a given directory, LIBDIR, you must either use libtool, and" echo "specify the full pathname of the library, or use the \`-LLIBDIR'" echo "flag during linking and do at least one of the following:" if test -n "$shlibpath_var"; then echo " - add LIBDIR to the \`$shlibpath_var' environment variable" echo " during execution" fi if test -n "$runpath_var"; then echo " - add LIBDIR to the \`$runpath_var' environment variable" echo " during linking" fi if test -n "$hardcode_libdir_flag_spec"; then libdir=LIBDIR eval flag=\"$hardcode_libdir_flag_spec\" $ECHO " - use the \`$flag' linker flag" fi if test -n "$admincmds"; then $ECHO " - have your system administrator run these commands:$admincmds" fi if test -f /etc/ld.so.conf; then echo " - have your system administrator add LIBDIR to \`/etc/ld.so.conf'" fi echo echo "See any operating system documentation about shared libraries for" case $host in solaris2.[6789]|solaris2.1[0-9]) echo "more information, such as the ld(1), crle(1) and ld.so(8) manual" echo "pages." ;; *) echo "more information, such as the ld(1) and ld.so(8) manual pages." ;; esac echo "----------------------------------------------------------------------" fi exit $EXIT_SUCCESS } test "$opt_mode" = finish && func_mode_finish ${1+"$@"} # func_mode_install arg... func_mode_install () { $opt_debug # There may be an optional sh(1) argument at the beginning of # install_prog (especially on Windows NT). if test "$nonopt" = "$SHELL" || test "$nonopt" = /bin/sh || # Allow the use of GNU shtool's install command. case $nonopt in *shtool*) :;; *) false;; esac; then # Aesthetically quote it. func_quote_for_eval "$nonopt" install_prog="$func_quote_for_eval_result " arg=$1 shift else install_prog= arg=$nonopt fi # The real first argument should be the name of the installation program. # Aesthetically quote it. 
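# (Illustrative note, not part of the original script.)  The finish mode
# defined just above is usually the last step after installing libraries
# into a new directory, e.g.:
#
#   libtool --mode=finish /usr/local/lib
#
# On ELF systems the printed advice typically amounts to adding the
# directory to /etc/ld.so.conf or to $shlibpath_var, as the message block
# above spells out; superuser privileges may be needed for the former.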
func_quote_for_eval "$arg" func_append install_prog "$func_quote_for_eval_result" install_shared_prog=$install_prog case " $install_prog " in *[\\\ /]cp\ *) install_cp=: ;; *) install_cp=false ;; esac # We need to accept at least all the BSD install flags. dest= files= opts= prev= install_type= isdir=no stripme= no_mode=: for arg do arg2= if test -n "$dest"; then func_append files " $dest" dest=$arg continue fi case $arg in -d) isdir=yes ;; -f) if $install_cp; then :; else prev=$arg fi ;; -g | -m | -o) prev=$arg ;; -s) stripme=" -s" continue ;; -*) ;; *) # If the previous option needed an argument, then skip it. if test -n "$prev"; then if test "x$prev" = x-m && test -n "$install_override_mode"; then arg2=$install_override_mode no_mode=false fi prev= else dest=$arg continue fi ;; esac # Aesthetically quote the argument. func_quote_for_eval "$arg" func_append install_prog " $func_quote_for_eval_result" if test -n "$arg2"; then func_quote_for_eval "$arg2" fi func_append install_shared_prog " $func_quote_for_eval_result" done test -z "$install_prog" && \ func_fatal_help "you must specify an install program" test -n "$prev" && \ func_fatal_help "the \`$prev' option requires an argument" if test -n "$install_override_mode" && $no_mode; then if $install_cp; then :; else func_quote_for_eval "$install_override_mode" func_append install_shared_prog " -m $func_quote_for_eval_result" fi fi if test -z "$files"; then if test -z "$dest"; then func_fatal_help "no file or destination specified" else func_fatal_help "you must specify a destination" fi fi # Strip any trailing slash from the destination. func_stripname '' '/' "$dest" dest=$func_stripname_result # Check to see that the destination is a directory. test -d "$dest" && isdir=yes if test "$isdir" = yes; then destdir="$dest" destname= else func_dirname_and_basename "$dest" "" "." destdir="$func_dirname_result" destname="$func_basename_result" # Not a directory, so check to see that there is only one file specified. set dummy $files; shift test "$#" -gt 1 && \ func_fatal_help "\`$dest' is not a directory" fi case $destdir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) for file in $files; do case $file in *.lo) ;; *) func_fatal_help "\`$destdir' must be an absolute directory name" ;; esac done ;; esac # This variable tells wrapper scripts just to set variables rather # than running their programs. libtool_install_magic="$magic" staticlibs= future_libdirs= current_libdirs= for file in $files; do # Do each installation. case $file in *.$libext) # Do the static libraries later. func_append staticlibs " $file" ;; *.la) func_resolve_sysroot "$file" file=$func_resolve_sysroot_result # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$file" \ || func_fatal_help "\`$file' is not a valid libtool archive" library_names= old_library= relink_command= func_source "$file" # Add the libdir to current_libdirs if it is the destination. if test "X$destdir" = "X$libdir"; then case "$current_libdirs " in *" $libdir "*) ;; *) func_append current_libdirs " $libdir" ;; esac else # Note the libdir as a future libdir. case "$future_libdirs " in *" $libdir "*) ;; *) func_append future_libdirs " $libdir" ;; esac fi func_dirname "$file" "/" "" dir="$func_dirname_result" func_append dir "$objdir" if test -n "$relink_command"; then # Determine the prefix the user has applied to our future dir. 
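# (Illustrative note, not part of the original script.)  Install mode
# accepts an ordinary BSD-style install command, as parsed above; for a
# hypothetical libfoo this usually looks like:
#
#   libtool --mode=install /usr/bin/install -c libfoo.la /usr/local/lib/libfoo.la
#
# Flags such as -d, -g, -m, -o and -s are recognised and passed through,
# with -m possibly overridden for shared objects via install_override_mode.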
inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"` # Don't allow the user to place us outside of our expected # location b/c this prevents finding dependent libraries that # are installed to the same prefix. # At present, this check doesn't affect windows .dll's that # are installed into $libdir/../bin (currently, that works fine) # but it's something to keep an eye on. test "$inst_prefix_dir" = "$destdir" && \ func_fatal_error "error: cannot install \`$file' to a directory not ending in $libdir" if test -n "$inst_prefix_dir"; then # Stick the inst_prefix_dir data into the link command. relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` else relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"` fi func_warning "relinking \`$file'" func_show_eval "$relink_command" \ 'func_fatal_error "error: relink \`$file'\'' with the above command before installing it"' fi # See the names of the shared library. set dummy $library_names; shift if test -n "$1"; then realname="$1" shift srcname="$realname" test -n "$relink_command" && srcname="$realname"T # Install the shared library and build the symlinks. func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \ 'exit $?' tstripme="$stripme" case $host_os in cygwin* | mingw* | pw32* | cegcc*) case $realname in *.dll.a) tstripme="" ;; esac ;; esac if test -n "$tstripme" && test -n "$striplib"; then func_show_eval "$striplib $destdir/$realname" 'exit $?' fi if test "$#" -gt 0; then # Delete the old symlinks, and create new ones. # Try `ln -sf' first, because the `ln' binary might depend on # the symlink we replace! Solaris /bin/ln does not understand -f, # so we also need to try rm && ln -s. for linkname do test "$linkname" != "$realname" \ && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" done fi # Do each command in the postinstall commands. lib="$destdir/$realname" func_execute_cmds "$postinstall_cmds" 'exit $?' fi # Install the pseudo-library for information purposes. func_basename "$file" name="$func_basename_result" instname="$dir/$name"i func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' # Maybe install the static library, too. test -n "$old_library" && func_append staticlibs " $dir/$old_library" ;; *.lo) # Install (i.e. copy) a libtool object. # Figure out destination file name, if it wasn't already specified. if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # Deduce the name of the destination old-style object file. case $destfile in *.lo) func_lo2o "$destfile" staticdest=$func_lo2o_result ;; *.$objext) staticdest="$destfile" destfile= ;; *) func_fatal_help "cannot copy a libtool object to \`$destfile'" ;; esac # Install the libtool object if requested. test -n "$destfile" && \ func_show_eval "$install_prog $file $destfile" 'exit $?' # Install the old object if enabled. if test "$build_old_libs" = yes; then # Deduce the name of the old-style object file. func_lo2o "$file" staticobj=$func_lo2o_result func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' fi exit $EXIT_SUCCESS ;; *) # Figure out destination file name, if it wasn't already specified. 
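# (Illustrative note, not part of the original script; the DESTDIR name is
# an assumption borrowed from common automake usage.)  The relink logic
# above is what makes staged installs work: when the destination is a
# staging prefix plus the recorded $libdir, e.g.
#
#   libtool --mode=install install -c libfoo.la "$DESTDIR/usr/local/lib/libfoo.la"
#
# the @inst_prefix_dir@ placeholder in relink_command is replaced with
# "-inst-prefix-dir $DESTDIR" so dependent libraries still resolve against
# the staging area, and the usual libfoo.so -> libfoo.so.X.Y.Z symlinks are
# recreated in the destination directory.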
if test -n "$destname"; then destfile="$destdir/$destname" else func_basename "$file" destfile="$func_basename_result" destfile="$destdir/$destfile" fi # If the file is missing, and there is a .exe on the end, strip it # because it is most likely a libtool script we actually want to # install stripped_ext="" case $file in *.exe) if test ! -f "$file"; then func_stripname '' '.exe' "$file" file=$func_stripname_result stripped_ext=".exe" fi ;; esac # Do a test to see if this is really a libtool program. case $host in *cygwin* | *mingw*) if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" wrapper=$func_ltwrapper_scriptname_result else func_stripname '' '.exe' "$file" wrapper=$func_stripname_result fi ;; *) wrapper=$file ;; esac if func_ltwrapper_script_p "$wrapper"; then notinst_deplibs= relink_command= func_source "$wrapper" # Check the variables that should have been set. test -z "$generated_by_libtool_version" && \ func_fatal_error "invalid libtool wrapper script \`$wrapper'" finalize=yes for lib in $notinst_deplibs; do # Check to see that each library is installed. libdir= if test -f "$lib"; then func_source "$lib" fi libfile="$libdir/"`$ECHO "$lib" | $SED 's%^.*/%%g'` ### testsuite: skip nested quoting test if test -n "$libdir" && test ! -f "$libfile"; then func_warning "\`$lib' has not been installed in \`$libdir'" finalize=no fi done relink_command= func_source "$wrapper" outputname= if test "$fast_install" = no && test -n "$relink_command"; then $opt_dry_run || { if test "$finalize" = yes; then tmpdir=`func_mktempdir` func_basename "$file$stripped_ext" file="$func_basename_result" outputname="$tmpdir/$file" # Replace the output file specification. relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'` $opt_silent || { func_quote_for_expand "$relink_command" eval "func_echo $func_quote_for_expand_result" } if eval "$relink_command"; then : else func_error "error: relink \`$file' with the above command before installing it" $opt_dry_run || ${RM}r "$tmpdir" continue fi file="$outputname" else func_warning "cannot relink \`$file'" fi } else # Install the binary that we compiled earlier. file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"` fi fi # remove .exe since cygwin /usr/bin/install will append another # one anyway case $install_prog,$host in */usr/bin/install*,*cygwin*) case $file:$destfile in *.exe:*.exe) # this is ok ;; *.exe:*) destfile=$destfile.exe ;; *:*.exe) func_stripname '' '.exe' "$destfile" destfile=$func_stripname_result ;; esac ;; esac func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' $opt_dry_run || if test -n "$outputname"; then ${RM}r "$tmpdir" fi ;; esac done for file in $staticlibs; do func_basename "$file" name="$func_basename_result" # Set up the ranlib parameters. oldlib="$destdir/$name" func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result func_show_eval "$install_prog \$file \$oldlib" 'exit $?' if test -n "$stripme" && test -n "$old_striplib"; then func_show_eval "$old_striplib $tool_oldlib" 'exit $?' fi # Do each command in the postinstall commands. func_execute_cmds "$old_postinstall_cmds" 'exit $?' done test -n "$future_libdirs" && \ func_warning "remember to run \`$progname --finish$future_libdirs'" if test -n "$current_libdirs"; then # Maybe just do a dry run. 
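# (Illustrative note, not part of the original script.)  For programs, the
# file named on the install command line is usually the libtool wrapper
# script, not the real binary: the code above sources the wrapper, checks
# notinst_deplibs, and then installs (or, when fast_install is off, relinks
# and installs) the actual executable kept under $objdir, e.g.
#
#   libtool --mode=install install -c main /usr/local/bin/main
#
# ends up copying something like .libs/main rather than the wrapper itself.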
$opt_dry_run && current_libdirs=" -n$current_libdirs" exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs' else exit $EXIT_SUCCESS fi } test "$opt_mode" = install && func_mode_install ${1+"$@"} # func_generate_dlsyms outputname originator pic_p # Extract symbols from dlprefiles and create ${outputname}S.o with # a dlpreopen symbol table. func_generate_dlsyms () { $opt_debug my_outputname="$1" my_originator="$2" my_pic_p="${3-no}" my_prefix=`$ECHO "$my_originator" | sed 's%[^a-zA-Z0-9]%_%g'` my_dlsyms= if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then if test -n "$NM" && test -n "$global_symbol_pipe"; then my_dlsyms="${my_outputname}S.c" else func_error "not configured to extract global symbols from dlpreopened files" fi fi if test -n "$my_dlsyms"; then case $my_dlsyms in "") ;; *.c) # Discover the nlist of each of the dlfiles. nlist="$output_objdir/${my_outputname}.nm" func_show_eval "$RM $nlist ${nlist}S ${nlist}T" # Parse the name list into a source file. func_verbose "creating $output_objdir/$my_dlsyms" $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\ /* $my_dlsyms - symbol resolution table for \`$my_outputname' dlsym emulation. */ /* Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION */ #ifdef __cplusplus extern \"C\" { #endif #if defined(__GNUC__) && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4)) #pragma GCC diagnostic ignored \"-Wstrict-prototypes\" #endif /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 con't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif /* External symbol declarations for the compiler. */\ " if test "$dlself" = yes; then func_verbose "generating symbol list for \`$output'" $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist" # Add our own program objects to the symbol list. 
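# (Illustrative note, not part of the original script.)  func_generate_dlsyms
# above only comes into play when something was named with -dlopen or
# -dlpreopen at link time, for instance (hypothetical module name):
#
#   libtool --mode=link gcc -o main main.o -dlpreopen libplugin.la
#
# in which case a ${outputname}S.c file is generated, compiled, and linked
# in so the program can look the module's symbols up through
# lt_preloaded_symbols instead of a real dlopen().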
progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP` for progfile in $progfiles; do func_to_tool_file "$progfile" func_convert_file_msys_to_w32 func_verbose "extracting global C symbols from \`$func_to_tool_file_result'" $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'" done if test -n "$exclude_expsyms"; then $opt_dry_run || { eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi if test -n "$export_symbols_regex"; then $opt_dry_run || { eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' } fi # Prepare the list of exported symbols if test -z "$export_symbols"; then export_symbols="$output_objdir/$outputname.exp" $opt_dry_run || { $RM $export_symbols eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"' ;; esac } else $opt_dry_run || { eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"' eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T' eval '$MV "$nlist"T "$nlist"' case $host in *cygwin* | *mingw* | *cegcc* ) eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' eval 'cat "$nlist" >> "$output_objdir/$outputname.def"' ;; esac } fi fi for dlprefile in $dlprefiles; do func_verbose "extracting global C symbols from \`$dlprefile'" func_basename "$dlprefile" name="$func_basename_result" case $host in *cygwin* | *mingw* | *cegcc* ) # if an import library, we need to obtain dlname if func_win32_import_lib_p "$dlprefile"; then func_tr_sh "$dlprefile" eval "curr_lafile=\$libfile_$func_tr_sh_result" dlprefile_dlbasename="" if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then # Use subshell, to avoid clobbering current variable values dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"` if test -n "$dlprefile_dlname" ; then func_basename "$dlprefile_dlname" dlprefile_dlbasename="$func_basename_result" else # no lafile. user explicitly requested -dlpreopen . $sharedlib_from_linklib_cmd "$dlprefile" dlprefile_dlbasename=$sharedlib_from_linklib_result fi fi $opt_dry_run || { if test -n "$dlprefile_dlbasename" ; then eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"' else func_warning "Could not compute DLL name from $name" eval '$ECHO ": $name " >> "$nlist"' fi func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe | $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'" } else # not an import lib $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } fi ;; *) $opt_dry_run || { eval '$ECHO ": $name " >> "$nlist"' func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" } ;; esac done $opt_dry_run || { # Make sure we have at least an empty file. test -f "$nlist" || : > "$nlist" if test -n "$exclude_expsyms"; then $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T $MV "$nlist"T "$nlist" fi # Try sorting and uniquifying the output. 
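# (Illustrative sketch, not part of the original script; file names are
# hypothetical and the redirections are simplified.)  The symbol list in
# $nlist is assembled roughly like this:
#
#   $NM .libs/main.o | $global_symbol_pipe >> "$nlist"      # collect symbols
#   $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T    # drop excluded ones
#   $EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T   # keep matches only
#
# before being sorted, uniquified and turned into C declarations below.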
if $GREP -v "^: " < "$nlist" | if sort -k 3 /dev/null 2>&1; then sort -k 3 else sort +2 fi | uniq > "$nlist"S; then : else $GREP -v "^: " < "$nlist" > "$nlist"S fi if test -f "$nlist"S; then eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"' else echo '/* NONE */' >> "$output_objdir/$my_dlsyms" fi echo >> "$output_objdir/$my_dlsyms" "\ /* The mapping between symbol names and symbols. */ typedef struct { const char *name; void *address; } lt_dlsymlist; extern LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[]; LT_DLSYM_CONST lt_dlsymlist lt_${my_prefix}_LTX_preloaded_symbols[] = {\ { \"$my_originator\", (void *) 0 }," case $need_lib_prefix in no) eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; *) eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms" ;; esac echo >> "$output_objdir/$my_dlsyms" "\ {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt_${my_prefix}_LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif\ " } # !$opt_dry_run pic_flag_for_symtable= case "$compile_command " in *" -static "*) ;; *) case $host in # compiling the symbol table file with pic_flag works around # a FreeBSD bug that causes programs to crash when -lm is # linked before any other PIC object. But we must not use # pic_flag when linking with -static. The problem exists in # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1. *-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*) pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;; *-*-hpux*) pic_flag_for_symtable=" $pic_flag" ;; *) if test "X$my_pic_p" != Xno; then pic_flag_for_symtable=" $pic_flag" fi ;; esac ;; esac symtab_cflags= for arg in $LTCFLAGS; do case $arg in -pie | -fpie | -fPIE) ;; *) func_append symtab_cflags " $arg" ;; esac done # Now compile the dynamic symbol file. func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?' # Clean up the generated files. func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T"' # Transform the symbol file into the correct name. symfileobj="$output_objdir/${my_outputname}S.$objext" case $host in *cygwin* | *mingw* | *cegcc* ) if test -f "$output_objdir/$my_outputname.def"; then compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` else compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` fi ;; *) compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` ;; esac ;; *) func_fatal_error "unknown suffix for \`$my_dlsyms'" ;; esac else # We keep going just in case the user didn't refer to # lt_preloaded_symbols. The linker will fail if global_symbol_pipe # really was required. # Nullify the symbol file. 
compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"` finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"` fi } # func_win32_libid arg # return the library type of file 'arg' # # Need a lot of goo to handle *both* DLLs and import libs # Has to be a shell function in order to 'eat' the argument # that is supplied when $file_magic_command is called. # Despite the name, also deal with 64 bit binaries. func_win32_libid () { $opt_debug win32_libid_type="unknown" win32_fileres=`file -L $1 2>/dev/null` case $win32_fileres in *ar\ archive\ import\ library*) # definitely import win32_libid_type="x86 archive import" ;; *ar\ archive*) # could be an import, or static # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD. if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then func_to_tool_file "$1" func_convert_file_msys_to_w32 win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" | $SED -n -e ' 1,100{ / I /{ s,.*,import, p q } }'` case $win32_nmres in import*) win32_libid_type="x86 archive import";; *) win32_libid_type="x86 archive static";; esac fi ;; *DLL*) win32_libid_type="x86 DLL" ;; *executable*) # but shell scripts are "executable" too... case $win32_fileres in *MS\ Windows\ PE\ Intel*) win32_libid_type="x86 DLL" ;; esac ;; esac $ECHO "$win32_libid_type" } # func_cygming_dll_for_implib ARG # # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib () { $opt_debug sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"` } # func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs # # The is the core of a fallback implementation of a # platform-specific function to extract the name of the # DLL associated with the specified import library LIBNAME. # # SECTION_NAME is either .idata$6 or .idata$7, depending # on the platform and compiler that created the implib. # # Echos the name of the DLL associated with the # specified import library. func_cygming_dll_for_implib_fallback_core () { $opt_debug match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"` $OBJDUMP -s --section "$1" "$2" 2>/dev/null | $SED '/^Contents of section '"$match_literal"':/{ # Place marker at beginning of archive member dllname section s/.*/====MARK====/ p d } # These lines can sometimes be longer than 43 characters, but # are always uninteresting /:[ ]*file format pe[i]\{,1\}-/d /^In archive [^:]*:/d # Ensure marker is printed /^====MARK====/p # Remove all lines with less than 43 characters /^.\{43\}/!d # From remaining lines, remove first 43 characters s/^.\{43\}//' | $SED -n ' # Join marker and all lines until next marker into a single line /^====MARK====/ b para H $ b para b :para x s/\n//g # Remove the marker s/^====MARK====// # Remove trailing dots and whitespace s/[\. \t]*$// # Print /./p' | # we now have a list, one entry per line, of the stringified # contents of the appropriate section of all members of the # archive which possess that section. Heuristic: eliminate # all those which have a first or second character that is # a '.' (that is, objdump's representation of an unprintable # character.) 
This should work for all archives with less than # 0x302f exports -- but will fail for DLLs whose name actually # begins with a literal '.' or a single character followed by # a '.'. # # Of those that remain, print the first one. $SED -e '/^\./d;/^.\./d;q' } # func_cygming_gnu_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is a GNU/binutils-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_gnu_implib_p () { $opt_debug func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'` test -n "$func_cygming_gnu_implib_tmp" } # func_cygming_ms_implib_p ARG # This predicate returns with zero status (TRUE) if # ARG is an MS-style import library. Returns # with nonzero status (FALSE) otherwise. func_cygming_ms_implib_p () { $opt_debug func_to_tool_file "$1" func_convert_file_msys_to_w32 func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'` test -n "$func_cygming_ms_implib_tmp" } # func_cygming_dll_for_implib_fallback ARG # Platform-specific function to extract the # name of the DLL associated with the specified # import library ARG. # # This fallback implementation is for use when $DLLTOOL # does not support the --identify-strict option. # Invoked by eval'ing the libtool variable # $sharedlib_from_linklib_cmd # Result is available in the variable # $sharedlib_from_linklib_result func_cygming_dll_for_implib_fallback () { $opt_debug if func_cygming_gnu_implib_p "$1" ; then # binutils import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"` elif func_cygming_ms_implib_p "$1" ; then # ms-generated import library sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"` else # unknown sharedlib_from_linklib_result="" fi } # func_extract_an_archive dir oldlib func_extract_an_archive () { $opt_debug f_ex_an_ar_dir="$1"; shift f_ex_an_ar_oldlib="$1" if test "$lock_old_archive_extraction" = yes; then lockfile=$f_ex_an_ar_oldlib.lock until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do func_echo "Waiting for $lockfile to be removed" sleep 2 done fi func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \ 'stat=$?; rm -f "$lockfile"; exit $stat' if test "$lock_old_archive_extraction" = yes; then $opt_dry_run || rm -f "$lockfile" fi if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then : else func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" fi } # func_extract_archives gentop oldlib ... func_extract_archives () { $opt_debug my_gentop="$1"; shift my_oldlibs=${1+"$@"} my_oldobjs="" my_xlib="" my_xabs="" my_xdir="" for my_xlib in $my_oldlibs; do # Extract the objects. 
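# (Illustrative note, not part of the original script; the directory name is
# hypothetical.)  func_extract_an_archive above unpacks one static archive
# into a scratch directory, e.g.
#
#   func_extract_an_archive "$my_gentop/libdeps" libdeps.a
#
# using "$AR x", optionally serialised through a lockfile; the trailing
# "$AR t ... | sort | sort -uc" check exists because extraction silently
# overwrites members that share a basename, which would corrupt the result.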
case $my_xlib in [\\/]* | [A-Za-z]:[\\/]*) my_xabs="$my_xlib" ;; *) my_xabs=`pwd`"/$my_xlib" ;; esac func_basename "$my_xlib" my_xlib="$func_basename_result" my_xlib_u=$my_xlib while :; do case " $extracted_archives " in *" $my_xlib_u "*) func_arith $extracted_serial + 1 extracted_serial=$func_arith_result my_xlib_u=lt$extracted_serial-$my_xlib ;; *) break ;; esac done extracted_archives="$extracted_archives $my_xlib_u" my_xdir="$my_gentop/$my_xlib_u" func_mkdir_p "$my_xdir" case $host in *-darwin*) func_verbose "Extracting $my_xabs" # Do not bother doing anything if just a dry run $opt_dry_run || { darwin_orig_dir=`pwd` cd $my_xdir || exit $? darwin_archive=$my_xabs darwin_curdir=`pwd` darwin_base_archive=`basename "$darwin_archive"` darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` if test -n "$darwin_arches"; then darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` darwin_arch= func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" for darwin_arch in $darwin_arches ; do func_mkdir_p "unfat-$$/${darwin_base_archive}-${darwin_arch}" $LIPO -thin $darwin_arch -output "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" "${darwin_archive}" cd "unfat-$$/${darwin_base_archive}-${darwin_arch}" func_extract_an_archive "`pwd`" "${darwin_base_archive}" cd "$darwin_curdir" $RM "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" done # $darwin_arches ## Okay now we've a bunch of thin objects, gotta fatten them up :) darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$basename" | sort -u` darwin_file= darwin_files= for darwin_file in $darwin_filelist; do darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP` $LIPO -create -output "$darwin_file" $darwin_files done # $darwin_filelist $RM -rf unfat-$$ cd "$darwin_orig_dir" else cd $darwin_orig_dir func_extract_an_archive "$my_xdir" "$my_xabs" fi # $darwin_arches } # !$opt_dry_run ;; *) func_extract_an_archive "$my_xdir" "$my_xabs" ;; esac my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP` done func_extract_archives_result="$my_oldobjs" } # func_emit_wrapper [arg=no] # # Emit a libtool wrapper script on stdout. # Don't directly open a file because we may want to # incorporate the script contents within a cygwin/mingw # wrapper executable. Must ONLY be called from within # func_mode_link because it depends on a number of variables # set therein. # # ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR # variable will take. If 'yes', then the emitted script # will assume that the directory in which it is stored is # the $objdir directory. This is a cygwin/mingw-specific # behavior. func_emit_wrapper () { func_emit_wrapper_arg1=${1-no} $ECHO "\ #! $SHELL # $output - temporary wrapper script for $objdir/$outputname # Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION # # The $output program cannot be directly executed until all the libtool # libraries that it depends on are installed. # # This wrapper script should never be moved out of the build directory. # If it is, it will not operate correctly. # Sed substitution that helps us do robust quoting. It backslashifies # metacharacters that are still active within double-quoted strings. 
sed_quote_subst='$sed_quote_subst' # Be Bourne compatible if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then emulate sh NULLCMD=: # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac fi BIN_SH=xpg4; export BIN_SH # for Tru64 DUALCASE=1; export DUALCASE # for MKS sh # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH relink_command=\"$relink_command\" # This environment variable determines our operation mode. if test \"\$libtool_install_magic\" = \"$magic\"; then # install mode needs the following variables: generated_by_libtool_version='$macro_version' notinst_deplibs='$notinst_deplibs' else # When we are sourced in execute mode, \$file and \$ECHO are already set. if test \"\$libtool_execute_magic\" != \"$magic\"; then file=\"\$0\"" qECHO=`$ECHO "$ECHO" | $SED "$sed_quote_subst"` $ECHO "\ # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } ECHO=\"$qECHO\" fi # Very basic option parsing. These options are (a) specific to # the libtool wrapper, (b) are identical between the wrapper # /script/ and the wrapper /executable/ which is used only on # windows platforms, and (c) all begin with the string "--lt-" # (application programs are unlikely to have options which match # this pattern). # # There are only two supported options: --lt-debug and # --lt-dump-script. There is, deliberately, no --lt-help. # # The first argument to this parsing function should be the # script's $0 value, followed by "$@". lt_option_debug= func_parse_lt_options () { lt_script_arg0=\$0 shift for lt_opt do case \"\$lt_opt\" in --lt-debug) lt_option_debug=1 ;; --lt-dump-script) lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\` test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=. lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\` cat \"\$lt_dump_D/\$lt_dump_F\" exit 0 ;; --lt-*) \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2 exit 1 ;; esac done # Print the debug banner immediately: if test -n \"\$lt_option_debug\"; then echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2 fi } # Used when --lt-debug. 
Prints its arguments to stdout # (redirection is the responsibility of the caller) func_lt_dump_args () { lt_dump_args_N=1; for lt_arg do \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\" lt_dump_args_N=\`expr \$lt_dump_args_N + 1\` done } # Core function for launching the target application func_exec_program_core () { " case $host in # Backslashes separate directories on plain windows *-*-mingw | *-*-os2* | *-cegcc*) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} " ;; *) $ECHO "\ if test -n \"\$lt_option_debug\"; then \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2 func_lt_dump_args \${1+\"\$@\"} 1>&2 fi exec \"\$progdir/\$program\" \${1+\"\$@\"} " ;; esac $ECHO "\ \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 exit 1 } # A function to encapsulate launching the target application # Strips options in the --lt-* namespace from \$@ and # launches target application with the remaining arguments. func_exec_program () { case \" \$* \" in *\\ --lt-*) for lt_wr_arg do case \$lt_wr_arg in --lt-*) ;; *) set x \"\$@\" \"\$lt_wr_arg\"; shift;; esac shift done ;; esac func_exec_program_core \${1+\"\$@\"} } # Parse options func_parse_lt_options \"\$0\" \${1+\"\$@\"} # Find the directory that this script lives in. thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\` test \"x\$thisdir\" = \"x\$file\" && thisdir=. # Follow symbolic links until we get to the real thisdir. file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\` while test -n \"\$file\"; do destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\` # If there was a directory component, then change thisdir. if test \"x\$destdir\" != \"x\$file\"; then case \"\$destdir\" in [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; *) thisdir=\"\$thisdir/\$destdir\" ;; esac fi file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\` file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\` done # Usually 'no', except on cygwin/mingw when embedded into # the cwrapper. WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1 if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then # special case for '.' if test \"\$thisdir\" = \".\"; then thisdir=\`pwd\` fi # remove .libs from thisdir case \"\$thisdir\" in *[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;; $objdir ) thisdir=. ;; esac fi # Try to get the absolute directory name. absdir=\`cd \"\$thisdir\" && pwd\` test -n \"\$absdir\" && thisdir=\"\$absdir\" " if test "$fast_install" = yes; then $ECHO "\ program=lt-'$outputname'$exeext progdir=\"\$thisdir/$objdir\" if test ! -f \"\$progdir/\$program\" || { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | ${SED} 1q\`; \\ test \"X\$file\" != \"X\$progdir/\$program\"; }; then file=\"\$\$-\$program\" if test ! 
-d \"\$progdir\"; then $MKDIR \"\$progdir\" else $RM \"\$progdir/\$file\" fi" $ECHO "\ # relink executable if necessary if test -n \"\$relink_command\"; then if relink_command_output=\`eval \$relink_command 2>&1\`; then : else $ECHO \"\$relink_command_output\" >&2 $RM \"\$progdir/\$file\" exit 1 fi fi $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || { $RM \"\$progdir/\$program\"; $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } $RM \"\$progdir/\$file\" fi" else $ECHO "\ program='$outputname' progdir=\"\$thisdir/$objdir\" " fi $ECHO "\ if test -f \"\$progdir/\$program\"; then" # fixup the dll searchpath if we need to. # # Fix the DLL searchpath if we need to. Do this before prepending # to shlibpath, because on Windows, both are PATH and uninstalled # libraries must come first. if test -n "$dllsearchpath"; then $ECHO "\ # Add the dll search path components to the executable PATH PATH=$dllsearchpath:\$PATH " fi # Export our shlibpath_var if we have one. if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then $ECHO "\ # Add our own library path to $shlibpath_var $shlibpath_var=\"$temp_rpath\$$shlibpath_var\" # Some systems cannot cope with colon-terminated $shlibpath_var # The second colon is a workaround for a bug in BeOS R4 sed $shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\` export $shlibpath_var " fi $ECHO "\ if test \"\$libtool_execute_magic\" != \"$magic\"; then # Run the actual program with our arguments. func_exec_program \${1+\"\$@\"} fi else # The program doesn't exist. \$ECHO \"\$0: error: \\\`\$progdir/\$program' does not exist\" 1>&2 \$ECHO \"This script is just a wrapper for \$program.\" 1>&2 \$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2 exit 1 fi fi\ " } # func_emit_cwrapperexe_src # emit the source code for a wrapper executable on stdout # Must ONLY be called from within func_mode_link because # it depends on a number of variable set therein. func_emit_cwrapperexe_src () { cat < #include #ifdef _MSC_VER # include # include # include #else # include # include # ifdef __CYGWIN__ # include # endif #endif #include #include #include #include #include #include #include #include /* declarations of non-ANSI functions */ #if defined(__MINGW32__) # ifdef __STRICT_ANSI__ int _putenv (const char *); # endif #elif defined(__CYGWIN__) # ifdef __STRICT_ANSI__ char *realpath (const char *, char *); int putenv (char *); int setenv (const char *, const char *, int); # endif /* #elif defined (other platforms) ... */ #endif /* portability defines, excluding path handling macros */ #if defined(_MSC_VER) # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv # define S_IXUSR _S_IEXEC # ifndef _INTPTR_T_DEFINED # define _INTPTR_T_DEFINED # define intptr_t int # endif #elif defined(__MINGW32__) # define setmode _setmode # define stat _stat # define chmod _chmod # define getcwd _getcwd # define putenv _putenv #elif defined(__CYGWIN__) # define HAVE_SETENV # define FOPEN_WB "wb" /* #elif defined (other platforms) ... 
*/ #endif #if defined(PATH_MAX) # define LT_PATHMAX PATH_MAX #elif defined(MAXPATHLEN) # define LT_PATHMAX MAXPATHLEN #else # define LT_PATHMAX 1024 #endif #ifndef S_IXOTH # define S_IXOTH 0 #endif #ifndef S_IXGRP # define S_IXGRP 0 #endif /* path handling portability macros */ #ifndef DIR_SEPARATOR # define DIR_SEPARATOR '/' # define PATH_SEPARATOR ':' #endif #if defined (_WIN32) || defined (__MSDOS__) || defined (__DJGPP__) || \ defined (__OS2__) # define HAVE_DOS_BASED_FILE_SYSTEM # define FOPEN_WB "wb" # ifndef DIR_SEPARATOR_2 # define DIR_SEPARATOR_2 '\\' # endif # ifndef PATH_SEPARATOR_2 # define PATH_SEPARATOR_2 ';' # endif #endif #ifndef DIR_SEPARATOR_2 # define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) #else /* DIR_SEPARATOR_2 */ # define IS_DIR_SEPARATOR(ch) \ (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) #endif /* DIR_SEPARATOR_2 */ #ifndef PATH_SEPARATOR_2 # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) #else /* PATH_SEPARATOR_2 */ # define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) #endif /* PATH_SEPARATOR_2 */ #ifndef FOPEN_WB # define FOPEN_WB "w" #endif #ifndef _O_BINARY # define _O_BINARY 0 #endif #define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) #define XFREE(stale) do { \ if (stale) { free ((void *) stale); stale = 0; } \ } while (0) #if defined(LT_DEBUGWRAPPER) static int lt_debug = 1; #else static int lt_debug = 0; #endif const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */ void *xmalloc (size_t num); char *xstrdup (const char *string); const char *base_name (const char *name); char *find_executable (const char *wrapper); char *chase_symlinks (const char *pathspec); int make_executable (const char *path); int check_executable (const char *path); char *strendzap (char *str, const char *pat); void lt_debugprintf (const char *file, int line, const char *fmt, ...); void lt_fatal (const char *file, int line, const char *message, ...); static const char *nonnull (const char *s); static const char *nonempty (const char *s); void lt_setenv (const char *name, const char *value); char *lt_extend_str (const char *orig_value, const char *add, int to_end); void lt_update_exe_path (const char *name, const char *value); void lt_update_lib_path (const char *name, const char *value); char **prepare_spawn (char **argv); void lt_dump_script (FILE *f); EOF cat <= 0) && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) return 1; else return 0; } int make_executable (const char *path) { int rval = 0; struct stat st; lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n", nonempty (path)); if ((!path) || (!*path)) return 0; if (stat (path, &st) >= 0) { rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR); } return rval; } /* Searches for the full path of the wrapper. Returns newly allocated full path name if found, NULL otherwise Does not chase symlinks, even on platforms that support them. */ char * find_executable (const char *wrapper) { int has_slash = 0; const char *p; const char *p_next; /* static buffer for getcwd */ char tmp[LT_PATHMAX + 1]; int tmp_len; char *concat_name; lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n", nonempty (wrapper)); if ((wrapper == NULL) || (*wrapper == '\0')) return NULL; /* Absolute path? 
*/ #if defined (HAVE_DOS_BASED_FILE_SYSTEM) if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } else { #endif if (IS_DIR_SEPARATOR (wrapper[0])) { concat_name = xstrdup (wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } #if defined (HAVE_DOS_BASED_FILE_SYSTEM) } #endif for (p = wrapper; *p; p++) if (*p == '/') { has_slash = 1; break; } if (!has_slash) { /* no slashes; search PATH */ const char *path = getenv ("PATH"); if (path != NULL) { for (p = path; *p; p = p_next) { const char *q; size_t p_len; for (q = p; *q; q++) if (IS_PATH_SEPARATOR (*q)) break; p_len = q - p; p_next = (*q == '\0' ? q : q + 1); if (p_len == 0) { /* empty path: current directory */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); } else { concat_name = XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, p, p_len); concat_name[p_len] = '/'; strcpy (concat_name + p_len + 1, wrapper); } if (check_executable (concat_name)) return concat_name; XFREE (concat_name); } } /* not found in PATH; assume curdir */ } /* Relative path | not found in path: prepend cwd */ if (getcwd (tmp, LT_PATHMAX) == NULL) lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", nonnull (strerror (errno))); tmp_len = strlen (tmp); concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); memcpy (concat_name, tmp, tmp_len); concat_name[tmp_len] = '/'; strcpy (concat_name + tmp_len + 1, wrapper); if (check_executable (concat_name)) return concat_name; XFREE (concat_name); return NULL; } char * chase_symlinks (const char *pathspec) { #ifndef S_ISLNK return xstrdup (pathspec); #else char buf[LT_PATHMAX]; struct stat s; char *tmp_pathspec = xstrdup (pathspec); char *p; int has_symlinks = 0; while (strlen (tmp_pathspec) && !has_symlinks) { lt_debugprintf (__FILE__, __LINE__, "checking path component for symlinks: %s\n", tmp_pathspec); if (lstat (tmp_pathspec, &s) == 0) { if (S_ISLNK (s.st_mode) != 0) { has_symlinks = 1; break; } /* search backwards for last DIR_SEPARATOR */ p = tmp_pathspec + strlen (tmp_pathspec) - 1; while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) p--; if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) { /* no more DIR_SEPARATORS left */ break; } *p = '\0'; } else { lt_fatal (__FILE__, __LINE__, "error accessing file \"%s\": %s", tmp_pathspec, nonnull (strerror (errno))); } } XFREE (tmp_pathspec); if (!has_symlinks) { return xstrdup (pathspec); } tmp_pathspec = realpath (pathspec, buf); if (tmp_pathspec == 0) { lt_fatal (__FILE__, __LINE__, "could not follow symlinks for %s", pathspec); } return xstrdup (tmp_pathspec); #endif } char * strendzap (char *str, const char *pat) { size_t len, patlen; assert (str != NULL); assert (pat != NULL); len = strlen (str); patlen = strlen (pat); if (patlen <= len) { str += len - patlen; if (strcmp (str, pat) == 0) *str = '\0'; } return str; } void lt_debugprintf (const char *file, int line, const char *fmt, ...) 
{ va_list args; if (lt_debug) { (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line); va_start (args, fmt); (void) vfprintf (stderr, fmt, args); va_end (args); } } static void lt_error_core (int exit_status, const char *file, int line, const char *mode, const char *message, va_list ap) { fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode); vfprintf (stderr, message, ap); fprintf (stderr, ".\n"); if (exit_status >= 0) exit (exit_status); } void lt_fatal (const char *file, int line, const char *message, ...) { va_list ap; va_start (ap, message); lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap); va_end (ap); } static const char * nonnull (const char *s) { return s ? s : "(null)"; } static const char * nonempty (const char *s) { return (s && !*s) ? "(empty)" : nonnull (s); } void lt_setenv (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_setenv) setting '%s' to '%s'\n", nonnull (name), nonnull (value)); { #ifdef HAVE_SETENV /* always make a copy, for consistency with !HAVE_SETENV */ char *str = xstrdup (value); setenv (name, str, 1); #else int len = strlen (name) + 1 + strlen (value) + 1; char *str = XMALLOC (char, len); sprintf (str, "%s=%s", name, value); if (putenv (str) != EXIT_SUCCESS) { XFREE (str); } #endif } } char * lt_extend_str (const char *orig_value, const char *add, int to_end) { char *new_value; if (orig_value && *orig_value) { int orig_value_len = strlen (orig_value); int add_len = strlen (add); new_value = XMALLOC (char, add_len + orig_value_len + 1); if (to_end) { strcpy (new_value, orig_value); strcpy (new_value + orig_value_len, add); } else { strcpy (new_value, add); strcpy (new_value + add_len, orig_value); } } else { new_value = xstrdup (add); } return new_value; } void lt_update_exe_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_exe_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); /* some systems can't cope with a ':'-terminated path #' */ int len = strlen (new_value); while (((len = strlen (new_value)) > 0) && IS_PATH_SEPARATOR (new_value[len-1])) { new_value[len-1] = '\0'; } lt_setenv (name, new_value); XFREE (new_value); } } void lt_update_lib_path (const char *name, const char *value) { lt_debugprintf (__FILE__, __LINE__, "(lt_update_lib_path) modifying '%s' by prepending '%s'\n", nonnull (name), nonnull (value)); if (name && *name && value && *value) { char *new_value = lt_extend_str (getenv (name), value, 0); lt_setenv (name, new_value); XFREE (new_value); } } EOF case $host_os in mingw*) cat <<"EOF" /* Prepares an argument vector before calling spawn(). Note that spawn() does not by itself call the command interpreter (getenv ("COMSPEC") != NULL ? getenv ("COMSPEC") : ({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); GetVersionEx(&v); v.dwPlatformId == VER_PLATFORM_WIN32_NT; }) ? "cmd.exe" : "command.com"). Instead it simply concatenates the arguments, separated by ' ', and calls CreateProcess(). We must quote the arguments since Win32 CreateProcess() interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a special way: - Space and tab are interpreted as delimiters. They are not treated as delimiters if they are surrounded by double quotes: "...". - Unescaped double quotes are removed from the input. 
Their only effect is that within double quotes, space and tab are treated like normal characters. - Backslashes not followed by double quotes are not special. - But 2*n+1 backslashes followed by a double quote become n backslashes followed by a double quote (n >= 0): \" -> " \\\" -> \" \\\\\" -> \\" */ #define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" #define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" char ** prepare_spawn (char **argv) { size_t argc; char **new_argv; size_t i; /* Count number of arguments. */ for (argc = 0; argv[argc] != NULL; argc++) ; /* Allocate new argument vector. */ new_argv = XMALLOC (char *, argc + 1); /* Put quoted arguments into the new argument vector. */ for (i = 0; i < argc; i++) { const char *string = argv[i]; if (string[0] == '\0') new_argv[i] = xstrdup ("\"\""); else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL) { int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL); size_t length; unsigned int backslashes; const char *s; char *quoted_string; char *p; length = 0; backslashes = 0; if (quote_around) length++; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') length += backslashes + 1; length++; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) length += backslashes + 1; quoted_string = XMALLOC (char, length + 1); p = quoted_string; backslashes = 0; if (quote_around) *p++ = '"'; for (s = string; *s != '\0'; s++) { char c = *s; if (c == '"') { unsigned int j; for (j = backslashes + 1; j > 0; j--) *p++ = '\\'; } *p++ = c; if (c == '\\') backslashes++; else backslashes = 0; } if (quote_around) { unsigned int j; for (j = backslashes; j > 0; j--) *p++ = '\\'; *p++ = '"'; } *p = '\0'; new_argv[i] = quoted_string; } else new_argv[i] = (char *) string; } new_argv[argc] = NULL; return new_argv; } EOF ;; esac cat <<"EOF" void lt_dump_script (FILE* f) { EOF func_emit_wrapper yes | $SED -n -e ' s/^\(.\{79\}\)\(..*\)/\1\ \2/ h s/\([\\"]\)/\\\1/g s/$/\\n/ s/\([^\n]*\).*/ fputs ("\1", f);/p g D' cat <<"EOF" } EOF } # end: func_emit_cwrapperexe_src # func_win32_import_lib_p ARG # True if ARG is an import lib, as indicated by $file_magic_cmd func_win32_import_lib_p () { $opt_debug case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in *import*) : ;; *) false ;; esac } # func_mode_link arg... func_mode_link () { $opt_debug case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) # It is impossible to link a dll without this setting, and # we shouldn't force the makefile maintainer to figure out # which system we are compiling for in order to pass an extra # flag for every libtool invocation. # allow_undefined=no # FIXME: Unfortunately, there are problems with the above when trying # to make a dll which has undefined symbols, in which case not # even a static library is built. For now, we need to specify # -no-undefined on the libtool link line when we can be certain # that all symbols are satisfied, otherwise we get a static library. 
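	# Illustrative example (not taken from this tree; "libfoo" is a
	# hypothetical name): a link line that promises all symbols are
	# resolved, and therefore lets a DLL actually be built on these
	# hosts, typically looks like
	#
	#   libtool --mode=link $CC -no-undefined -o libfoo.la foo.lo \
	#     -rpath /usr/local/lib
	#
	# Without -no-undefined only the static archive is produced here.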
allow_undefined=yes ;; *) allow_undefined=yes ;; esac libtool_args=$nonopt base_compile="$nonopt $@" compile_command=$nonopt finalize_command=$nonopt compile_rpath= finalize_rpath= compile_shlibpath= finalize_shlibpath= convenience= old_convenience= deplibs= old_deplibs= compiler_flags= linker_flags= dllsearchpath= lib_search_path=`pwd` inst_prefix_dir= new_inherited_linker_flags= avoid_version=no bindir= dlfiles= dlprefiles= dlself=no export_dynamic=no export_symbols= export_symbols_regex= generated= libobjs= ltlibs= module=no no_install=no objs= non_pic_objects= precious_files_regex= prefer_static_libs=no preload=no prev= prevarg= release= rpath= xrpath= perm_rpath= temp_rpath= thread_safe=no vinfo= vinfo_number=no weak_libs= single_module="${wl}-single_module" func_infer_tag $base_compile # We need to know -static, to get the right output filenames. for arg do case $arg in -shared) test "$build_libtool_libs" != yes && \ func_fatal_configuration "can not build a shared library" build_old_libs=no break ;; -all-static | -static | -static-libtool-libs) case $arg in -all-static) if test "$build_libtool_libs" = yes && test -z "$link_static_flag"; then func_warning "complete static linking is impossible in this configuration" fi if test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; -static) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=built ;; -static-libtool-libs) if test -z "$pic_flag" && test -n "$link_static_flag"; then dlopen_self=$dlopen_self_static fi prefer_static_libs=yes ;; esac build_libtool_libs=no build_old_libs=yes break ;; esac done # See if our shared archives depend on static archives. test -n "$old_archive_from_new_cmds" && build_old_libs=yes # Go through the arguments, transforming them on the way. while test "$#" -gt 0; do arg="$1" shift func_quote_for_eval "$arg" qarg=$func_quote_for_eval_unquoted_result func_append libtool_args " $func_quote_for_eval_result" # If the previous option needs an argument, assign it. if test -n "$prev"; then case $prev in output) func_append compile_command " @OUTPUT@" func_append finalize_command " @OUTPUT@" ;; esac case $prev in bindir) bindir="$arg" prev= continue ;; dlfiles|dlprefiles) if test "$preload" = no; then # Add the symbol object into the linking commands. func_append compile_command " @SYMFILE@" func_append finalize_command " @SYMFILE@" preload=yes fi case $arg in *.la | *.lo) ;; # We handle these cases below. force) if test "$dlself" = no; then dlself=needless export_dynamic=yes fi prev= continue ;; self) if test "$prev" = dlprefiles; then dlself=yes elif test "$prev" = dlfiles && test "$dlopen_self" != yes; then dlself=yes else dlself=needless export_dynamic=yes fi prev= continue ;; *) if test "$prev" = dlfiles; then func_append dlfiles " $arg" else func_append dlprefiles " $arg" fi prev= continue ;; esac ;; expsyms) export_symbols="$arg" test -f "$arg" \ || func_fatal_error "symbol file \`$arg' does not exist" prev= continue ;; expsyms_regex) export_symbols_regex="$arg" prev= continue ;; framework) case $host in *-*-darwin*) case "$deplibs " in *" $qarg.ltframework "*) ;; *) func_append deplibs " $qarg.ltframework" # this is fixed later ;; esac ;; esac prev= continue ;; inst_prefix) inst_prefix_dir="$arg" prev= continue ;; objectlist) if test -f "$arg"; then save_arg=$arg moreargs= for fil in `cat "$save_arg"` do # func_append moreargs " $fil" arg=$fil # A libtool-controlled object. 
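	    # For illustration (a sketch of typical contents, not generated by
	    # this script): a .lo file is a small shell fragment that the
	    # func_source call below reads, along the lines of
	    #
	    #   # foo.lo - a libtool object file
	    #   pic_object='.libs/foo.o'
	    #   non_pic_object='foo.o'
	    #
	    # Either entry may instead be the literal string "none".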
# Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test "$pic_object" = none && test "$non_pic_object" = none; then func_fatal_error "cannot find name of object for \`$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" if test "$pic_object" != none; then # Prepend the subdirectory the object is found in. pic_object="$xdir$pic_object" if test "$prev" = dlfiles; then if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test "$prev" = dlprefiles; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg="$pic_object" fi # Non-PIC object. if test "$non_pic_object" != none; then # Prepend the subdirectory the object is found in. non_pic_object="$xdir$non_pic_object" # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test "$pic_object" = none ; then arg="$non_pic_object" fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object="$pic_object" func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "\`$arg' is not a valid libtool object" fi fi done else func_fatal_error "link input file \`$arg' does not exist" fi arg=$save_arg prev= continue ;; precious_regex) precious_files_regex="$arg" prev= continue ;; release) release="-$arg" prev= continue ;; rpath | xrpath) # We need an absolute path. case $arg in [\\/]* | [A-Za-z]:[\\/]*) ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac if test "$prev" = rpath; then case "$rpath " in *" $arg "*) ;; *) func_append rpath " $arg" ;; esac else case "$xrpath " in *" $arg "*) ;; *) func_append xrpath " $arg" ;; esac fi prev= continue ;; shrext) shrext_cmds="$arg" prev= continue ;; weak) func_append weak_libs " $arg" prev= continue ;; xcclinker) func_append linker_flags " $qarg" func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xcompiler) func_append compiler_flags " $qarg" prev= func_append compile_command " $qarg" func_append finalize_command " $qarg" continue ;; xlinker) func_append linker_flags " $qarg" func_append compiler_flags " $wl$qarg" prev= func_append compile_command " $wl$qarg" func_append finalize_command " $wl$qarg" continue ;; *) eval "$prev=\"\$arg\"" prev= continue ;; esac fi # test -n "$prev" prevarg="$arg" case $arg in -all-static) if test -n "$link_static_flag"; then # See comment for -static flag below, for more details. func_append compile_command " $link_static_flag" func_append finalize_command " $link_static_flag" fi continue ;; -allow-undefined) # FIXME: remove this flag sometime in the future. 
func_fatal_error "\`-allow-undefined' must not be used because it is the default" ;; -avoid-version) avoid_version=yes continue ;; -bindir) prev=bindir continue ;; -dlopen) prev=dlfiles continue ;; -dlpreopen) prev=dlprefiles continue ;; -export-dynamic) export_dynamic=yes continue ;; -export-symbols | -export-symbols-regex) if test -n "$export_symbols" || test -n "$export_symbols_regex"; then func_fatal_error "more than one -exported-symbols argument is not allowed" fi if test "X$arg" = "X-export-symbols"; then prev=expsyms else prev=expsyms_regex fi continue ;; -framework) prev=framework continue ;; -inst-prefix-dir) prev=inst_prefix continue ;; # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:* # so, if we see these flags be careful not to treat them like -L -L[A-Z][A-Z]*:*) case $with_gcc/$host in no/*-*-irix* | /*-*-irix*) func_append compile_command " $arg" func_append finalize_command " $arg" ;; esac continue ;; -L*) func_stripname "-L" '' "$arg" if test -z "$func_stripname_result"; then if test "$#" -gt 0; then func_fatal_error "require no space between \`-L' and \`$1'" else func_fatal_error "need path for \`-L' option" fi fi func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; *) absdir=`cd "$dir" && pwd` test -z "$absdir" && \ func_fatal_error "cannot determine absolute directory name of \`$dir'" dir="$absdir" ;; esac case "$deplibs " in *" -L$dir "* | *" $arg "*) # Will only happen for absolute or sysroot arguments ;; *) # Preserve sysroot, but never include relative directories case $dir in [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;; *) func_append deplibs " -L$dir" ;; esac func_append lib_search_path " $dir" ;; esac case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'` case :$dllsearchpath: in *":$dir:"*) ;; ::) dllsearchpath=$dir;; *) func_append dllsearchpath ":$dir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac continue ;; -l*) if test "X$arg" = "X-lc" || test "X$arg" = "X-lm"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*) # These systems don't actually have a C or math library (as such) continue ;; *-*-os2*) # These systems don't actually have a C library (as such) test "X$arg" = "X-lc" && continue ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc due to us having libc/libc_r. test "X$arg" = "X-lc" && continue ;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C and math libraries are in the System framework func_append deplibs " System.ltframework" continue ;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype test "X$arg" = "X-lc" && continue ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work test "X$arg" = "X-lc" && continue ;; esac elif test "X$arg" = "X-lc_r"; then case $host in *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc_r directly, use -pthread flag. continue ;; esac fi func_append deplibs " $arg" continue ;; -module) module=yes continue ;; # Tru64 UNIX uses -model [arg] to determine the layout of C++ # classes, name mangling, and exception handling. # Darwin uses the -arch flag to determine output architecture. 
-model|-arch|-isysroot|--sysroot) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" prev=xcompiler continue ;; -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) func_append compiler_flags " $arg" func_append compile_command " $arg" func_append finalize_command " $arg" case "$new_inherited_linker_flags " in *" $arg "*) ;; * ) func_append new_inherited_linker_flags " $arg" ;; esac continue ;; -multi_module) single_module="${wl}-multi_module" continue ;; -no-fast-install) fast_install=no continue ;; -no-install) case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*) # The PATH hackery in wrapper scripts is required on Windows # and Darwin in order for the loader to find any dlls it needs. func_warning "\`-no-install' is ignored for $host" func_warning "assuming \`-no-fast-install' instead" fast_install=no ;; *) no_install=yes ;; esac continue ;; -no-undefined) allow_undefined=no continue ;; -objectlist) prev=objectlist continue ;; -o) prev=output ;; -precious-files-regex) prev=precious_regex continue ;; -release) prev=release continue ;; -rpath) prev=rpath continue ;; -R) prev=xrpath continue ;; -R*) func_stripname '-R' '' "$arg" dir=$func_stripname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) ;; =*) func_stripname '=' '' "$dir" dir=$lt_sysroot$func_stripname_result ;; *) func_fatal_error "only absolute run-paths are allowed" ;; esac case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac continue ;; -shared) # The effects of -shared are defined in a previous loop. continue ;; -shrext) prev=shrext continue ;; -static | -static-libtool-libs) # The effects of -static are defined in a previous loop. # We used to do the same as -all-static on platforms that # didn't have a PIC flag, but the assumption that the effects # would be equivalent was wrong. It would break on at least # Digital Unix and AIX. 
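	# Roughly, and only as an illustration (hypothetical "prog"/"libfoo"
	# names, not part of this build), the static flags accepted here
	# differ in scope:
	#
	#   libtool --mode=link $CC -o prog main.o libfoo.la -static
	#       avoid shared versions of uninstalled libtool libraries
	#   libtool --mode=link $CC -o prog main.o libfoo.la -static-libtool-libs
	#       link every libtool library statically, system libraries shared
	#   libtool --mode=link $CC -o prog main.o libfoo.la -all-static
	#       avoid shared libraries entirely, where the toolchain permits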
continue ;; -thread-safe) thread_safe=yes continue ;; -version-info) prev=vinfo continue ;; -version-number) prev=vinfo vinfo_number=yes continue ;; -weak) prev=weak continue ;; -Wc,*) func_stripname '-Wc,' '' "$arg" args=$func_stripname_result arg= save_ifs="$IFS"; IFS=',' for flag in $args; do IFS="$save_ifs" func_quote_for_eval "$flag" func_append arg " $func_quote_for_eval_result" func_append compiler_flags " $func_quote_for_eval_result" done IFS="$save_ifs" func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Wl,*) func_stripname '-Wl,' '' "$arg" args=$func_stripname_result arg= save_ifs="$IFS"; IFS=',' for flag in $args; do IFS="$save_ifs" func_quote_for_eval "$flag" func_append arg " $wl$func_quote_for_eval_result" func_append compiler_flags " $wl$func_quote_for_eval_result" func_append linker_flags " $func_quote_for_eval_result" done IFS="$save_ifs" func_stripname ' ' '' "$arg" arg=$func_stripname_result ;; -Xcompiler) prev=xcompiler continue ;; -Xlinker) prev=xlinker continue ;; -XCClinker) prev=xcclinker continue ;; # -msg_* for osf cc -msg_*) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; # Flags to be passed through unchanged, with rationale: # -64, -mips[0-9] enable 64-bit mode for the SGI compiler # -r[0-9][0-9]* specify processor for the SGI compiler # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler # +DA*, +DD* enable 64-bit mode for the HP compiler # -q* compiler args for the IBM compiler # -m*, -t[45]*, -txscale* architecture-specific flags for GCC # -F/path path to uninstalled frameworks, gcc on darwin # -p, -pg, --coverage, -fprofile-* profiling flags for GCC # @file GCC response files # -tp=* Portland pgcc target processor selection # --sysroot=* for sysroot support # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \ -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \ -O*|-flto*|-fwhopr*|-fuse-linker-plugin) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" func_append compile_command " $arg" func_append finalize_command " $arg" func_append compiler_flags " $arg" continue ;; # Some other compiler flag. -* | +*) func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; *.$objext) # A standard object. func_append objs " $arg" ;; *.lo) # A libtool-controlled object. # Check to see that this really is a libtool object. if func_lalib_unsafe_p "$arg"; then pic_object= non_pic_object= # Read the .lo file func_source "$arg" if test -z "$pic_object" || test -z "$non_pic_object" || test "$pic_object" = none && test "$non_pic_object" = none; then func_fatal_error "cannot find name of object for \`$arg'" fi # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" if test "$pic_object" != none; then # Prepend the subdirectory the object is found in. pic_object="$xdir$pic_object" if test "$prev" = dlfiles; then if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then func_append dlfiles " $pic_object" prev= continue else # If libtool objects are unsupported, then we need to preload. prev=dlprefiles fi fi # CHECK ME: I think I busted this. -Ossama if test "$prev" = dlprefiles; then # Preload the old-style object. func_append dlprefiles " $pic_object" prev= fi # A PIC object. func_append libobjs " $pic_object" arg="$pic_object" fi # Non-PIC object. if test "$non_pic_object" != none; then # Prepend the subdirectory the object is found in. 
non_pic_object="$xdir$non_pic_object" # A standard non-PIC object func_append non_pic_objects " $non_pic_object" if test -z "$pic_object" || test "$pic_object" = none ; then arg="$non_pic_object" fi else # If the PIC object exists, use it instead. # $xdir was prepended to $pic_object above. non_pic_object="$pic_object" func_append non_pic_objects " $non_pic_object" fi else # Only an error if not doing a dry-run. if $opt_dry_run; then # Extract subdirectory from the argument. func_dirname "$arg" "/" "" xdir="$func_dirname_result" func_lo2o "$arg" pic_object=$xdir$objdir/$func_lo2o_result non_pic_object=$xdir$func_lo2o_result func_append libobjs " $pic_object" func_append non_pic_objects " $non_pic_object" else func_fatal_error "\`$arg' is not a valid libtool object" fi fi ;; *.$libext) # An archive. func_append deplibs " $arg" func_append old_deplibs " $arg" continue ;; *.la) # A libtool-controlled library. func_resolve_sysroot "$arg" if test "$prev" = dlfiles; then # This library was specified with -dlopen. func_append dlfiles " $func_resolve_sysroot_result" prev= elif test "$prev" = dlprefiles; then # The library was specified with -dlpreopen. func_append dlprefiles " $func_resolve_sysroot_result" prev= else func_append deplibs " $func_resolve_sysroot_result" fi continue ;; # Some other compiler argument. *) # Unknown arguments in both finalize_command and compile_command need # to be aesthetically quoted because they are evaled later. func_quote_for_eval "$arg" arg="$func_quote_for_eval_result" ;; esac # arg # Now actually substitute the argument into the commands. if test -n "$arg"; then func_append compile_command " $arg" func_append finalize_command " $arg" fi done # argument parsing loop test -n "$prev" && \ func_fatal_help "the \`$prevarg' option requires an argument" if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then eval arg=\"$export_dynamic_flag_spec\" func_append compile_command " $arg" func_append finalize_command " $arg" fi oldlibs= # calculate the name of the file, without its directory func_basename "$output" outputname="$func_basename_result" libobjs_save="$libobjs" if test -n "$shlibpath_var"; then # get the directories listed in $shlibpath_var eval shlib_search_path=\`\$ECHO \"\${$shlibpath_var}\" \| \$SED \'s/:/ /g\'\` else shlib_search_path= fi eval sys_lib_search_path=\"$sys_lib_search_path_spec\" eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\" func_dirname "$output" "/" "" output_objdir="$func_dirname_result$objdir" func_to_tool_file "$output_objdir/" tool_output_objdir=$func_to_tool_file_result # Create the object directory. func_mkdir_p "$output_objdir" # Determine the type of output case $output in "") func_fatal_help "you must specify an output file" ;; *.$libext) linkmode=oldlib ;; *.lo | *.$objext) linkmode=obj ;; *.la) linkmode=lib ;; *) linkmode=prog ;; # Anything else should be a program. esac specialdeplibs= libs= # Find all interdependent deplibs by searching for libraries # that are linked more than once (e.g. -la -lb -la) for deplib in $deplibs; do if $opt_preserve_dup_deps ; then case "$libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append libs " $deplib" done if test "$linkmode" = lib; then libs="$predeps $libs $compiler_lib_search_path $postdeps" # Compute libraries that are listed more than once in $predeps # $postdeps and mark them as special (i.e., whose duplicates are # not to be eliminated). 
pre_post_deps= if $opt_duplicate_compiler_generated_deps; then for pre_post_dep in $predeps $postdeps; do case "$pre_post_deps " in *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;; esac func_append pre_post_deps " $pre_post_dep" done fi pre_post_deps= fi deplibs= newdependency_libs= newlib_search_path= need_relink=no # whether we're linking any uninstalled libtool libraries notinst_deplibs= # not-installed libtool libraries notinst_path= # paths that contain not-installed libtool libraries case $linkmode in lib) passes="conv dlpreopen link" for file in $dlfiles $dlprefiles; do case $file in *.la) ;; *) func_fatal_help "libraries can \`-dlopen' only libtool libraries: $file" ;; esac done ;; prog) compile_deplibs= finalize_deplibs= alldeplibs=no newdlfiles= newdlprefiles= passes="conv scan dlopen dlpreopen link" ;; *) passes="conv" ;; esac for pass in $passes; do # The preopen pass in lib mode reverses $deplibs; put it back here # so that -L comes before libs that need it for instance... if test "$linkmode,$pass" = "lib,link"; then ## FIXME: Find the place where the list is rebuilt in the wrong ## order, and fix it there properly tmp_deplibs= for deplib in $deplibs; do tmp_deplibs="$deplib $tmp_deplibs" done deplibs="$tmp_deplibs" fi if test "$linkmode,$pass" = "lib,link" || test "$linkmode,$pass" = "prog,scan"; then libs="$deplibs" deplibs= fi if test "$linkmode" = prog; then case $pass in dlopen) libs="$dlfiles" ;; dlpreopen) libs="$dlprefiles" ;; link) libs="$deplibs %DEPLIBS%" test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs" ;; esac fi if test "$linkmode,$pass" = "lib,dlpreopen"; then # Collect and forward deplibs of preopened libtool libs for lib in $dlprefiles; do # Ignore non-libtool-libs dependency_libs= func_resolve_sysroot "$lib" case $lib in *.la) func_source "$func_resolve_sysroot_result" ;; esac # Collect preopened libtool deplibs, except any this library # has declared as weak libs for deplib in $dependency_libs; do func_basename "$deplib" deplib_base=$func_basename_result case " $weak_libs " in *" $deplib_base "*) ;; *) func_append deplibs " $deplib" ;; esac done done libs="$dlprefiles" fi if test "$pass" = dlopen; then # Collect dlpreopened libraries save_deplibs="$deplibs" deplibs= fi for deplib in $libs; do lib= found=no case $deplib in -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append compiler_flags " $deplib" if test "$linkmode" = lib ; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -l*) if test "$linkmode" != lib && test "$linkmode" != prog; then func_warning "\`-l' is ignored for archives/objects" continue fi func_stripname '-l' '' "$deplib" name=$func_stripname_result if test "$linkmode" = lib; then searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path" else searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path" fi for searchdir in $searchdirs; do for search_ext in .la $std_shrext .so .a; do # Search the libtool library lib="$searchdir/lib${name}${search_ext}" if test -f "$lib"; then if test "$search_ext" = ".la"; then found=yes else found=no fi break 2 fi done done if test "$found" != yes; then # deplib doesn't seem to be a 
libtool library if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" fi continue else # deplib is a libtool library # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib, # We need to do some special things here, and not later. if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then case " $predeps $postdeps " in *" $deplib "*) if func_lalib_p "$lib"; then library_names= old_library= func_source "$lib" for l in $old_library $library_names; do ll="$l" done if test "X$ll" = "X$old_library" ; then # only static version available found=no func_dirname "$lib" "" "." ladir="$func_dirname_result" lib=$ladir/$old_library if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" fi continue fi fi ;; *) ;; esac fi fi ;; # -l *.ltframework) if test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else deplibs="$deplib $deplibs" if test "$linkmode" = lib ; then case "$new_inherited_linker_flags " in *" $deplib "*) ;; * ) func_append new_inherited_linker_flags " $deplib" ;; esac fi fi continue ;; -L*) case $linkmode in lib) deplibs="$deplib $deplibs" test "$pass" = conv && continue newdependency_libs="$deplib $newdependency_libs" func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; prog) if test "$pass" = conv; then deplibs="$deplib $deplibs" continue fi if test "$pass" = scan; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; *) func_warning "\`-L' is ignored for archives/objects" ;; esac # linkmode continue ;; # -L -R*) if test "$pass" = link; then func_stripname '-R' '' "$deplib" func_resolve_sysroot "$func_stripname_result" dir=$func_resolve_sysroot_result # Make sure the xrpath contains only unique directories. case "$xrpath " in *" $dir "*) ;; *) func_append xrpath " $dir" ;; esac fi deplibs="$deplib $deplibs" continue ;; *.la) func_resolve_sysroot "$deplib" lib=$func_resolve_sysroot_result ;; *.$libext) if test "$pass" = conv; then deplibs="$deplib $deplibs" continue fi case $linkmode in lib) # Linking convenience modules into shared libraries is allowed, # but linking other static libraries is non-portable. case " $dlpreconveniencelibs " in *" $deplib "*) ;; *) valid_a_lib=no case $deplibs_check_method in match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \ | $EGREP "$match_pattern_regex" > /dev/null; then valid_a_lib=yes fi ;; pass_all) valid_a_lib=yes ;; esac if test "$valid_a_lib" != yes; then echo $ECHO "*** Warning: Trying to link with static lib archive $deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. 
But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because the file extensions .$libext of this argument makes me believe" echo "*** that it is just a static archive that I should not use here." else echo $ECHO "*** Warning: Linking the shared library $output against the" $ECHO "*** static library $deplib is not portable!" deplibs="$deplib $deplibs" fi ;; esac continue ;; prog) if test "$pass" != link; then deplibs="$deplib $deplibs" else compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" fi continue ;; esac # linkmode ;; # *.$libext *.lo | *.$objext) if test "$pass" = conv; then deplibs="$deplib $deplibs" elif test "$linkmode" = prog; then if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then # If there is no dlopen support or we're linking statically, # we need to preload. func_append newdlprefiles " $deplib" compile_deplibs="$deplib $compile_deplibs" finalize_deplibs="$deplib $finalize_deplibs" else func_append newdlfiles " $deplib" fi fi continue ;; %DEPLIBS%) alldeplibs=yes continue ;; esac # case $deplib if test "$found" = yes || test -f "$lib"; then : else func_fatal_error "cannot find the library \`$lib' or unhandled argument \`$deplib'" fi # Check to see that this really is a libtool archive. func_lalib_unsafe_p "$lib" \ || func_fatal_error "\`$lib' is not a valid libtool archive" func_dirname "$lib" "" "." ladir="$func_dirname_result" dlname= dlopen= dlpreopen= libdir= library_names= old_library= inherited_linker_flags= # If the library was installed with an old release of libtool, # it will not redefine variables installed, or shouldnotlink installed=yes shouldnotlink=no avoidtemprpath= # Read the .la file func_source "$lib" # Convert "-framework foo" to "foo.ltframework" if test -n "$inherited_linker_flags"; then tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'` for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do case " $new_inherited_linker_flags " in *" $tmp_inherited_linker_flag "*) ;; *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";; esac done fi dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` if test "$linkmode,$pass" = "lib,link" || test "$linkmode,$pass" = "prog,scan" || { test "$linkmode" != prog && test "$linkmode" != lib; }; then test -n "$dlopen" && func_append dlfiles " $dlopen" test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen" fi if test "$pass" = conv; then # Only check for convenience libraries deplibs="$lib $deplibs" if test -z "$libdir"; then if test -z "$old_library"; then func_fatal_error "cannot find name of link library for \`$lib'" fi # It is a libtool convenience library, so add in its objects. func_append convenience " $ladir/$objdir/$old_library" func_append old_convenience " $ladir/$objdir/$old_library" tmp_libs= for deplib in $dependency_libs; do deplibs="$deplib $deplibs" if $opt_preserve_dup_deps ; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done elif test "$linkmode" != prog && test "$linkmode" != lib; then func_fatal_error "\`$lib' is not a convenience library" fi continue fi # $pass = conv # Get the name of the library we link against. 
linklib= if test -n "$old_library" && { test "$prefer_static_libs" = yes || test "$prefer_static_libs,$installed" = "built,no"; }; then linklib=$old_library else for l in $old_library $library_names; do linklib="$l" done fi if test -z "$linklib"; then func_fatal_error "cannot find name of link library for \`$lib'" fi # This library was specified with -dlopen. if test "$pass" = dlopen; then if test -z "$libdir"; then func_fatal_error "cannot -dlopen a convenience library: \`$lib'" fi if test -z "$dlname" || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then # If there is no dlname, no dlopen support or we're linking # statically, we need to preload. We also need to preload any # dependent libraries so libltdl's deplib preloader doesn't # bomb out in the load deplibs phase. func_append dlprefiles " $lib $dependency_libs" else func_append newdlfiles " $lib" fi continue fi # $pass = dlopen # We need an absolute path. case $ladir in [\\/]* | [A-Za-z]:[\\/]*) abs_ladir="$ladir" ;; *) abs_ladir=`cd "$ladir" && pwd` if test -z "$abs_ladir"; then func_warning "cannot determine absolute directory name of \`$ladir'" func_warning "passing it literally to the linker, although it might fail" abs_ladir="$ladir" fi ;; esac func_basename "$lib" laname="$func_basename_result" # Find the relevant object directory and library name. if test "X$installed" = Xyes; then if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then func_warning "library \`$lib' was moved." dir="$ladir" absdir="$abs_ladir" libdir="$abs_ladir" else dir="$lt_sysroot$libdir" absdir="$lt_sysroot$libdir" fi test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes else if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then dir="$ladir" absdir="$abs_ladir" # Remove this search path later func_append notinst_path " $abs_ladir" else dir="$ladir/$objdir" absdir="$abs_ladir/$objdir" # Remove this search path later func_append notinst_path " $abs_ladir" fi fi # $installed = yes func_stripname 'lib' '.la' "$laname" name=$func_stripname_result # This library was specified with -dlpreopen. if test "$pass" = dlpreopen; then if test -z "$libdir" && test "$linkmode" = prog; then func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'" fi case "$host" in # special handling for platforms with PE-DLLs. *cygwin* | *mingw* | *cegcc* ) # Linker will automatically link against shared library if both # static and shared are present. Therefore, ensure we extract # symbols from the import library if a shared library is present # (otherwise, the dlopen module name will be incorrect). We do # this by putting the import library name into $newdlprefiles. # We recover the dlopen module name by 'saving' the la file # name in a special purpose variable, and (later) extracting the # dlname from the la file. if test -n "$dlname"; then func_tr_sh "$dir/$linklib" eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname" func_append newdlprefiles " $dir/$linklib" else func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" fi ;; * ) # Prefer using a static library (so that no silly _DYNAMIC symbols # are required to link). if test -n "$old_library"; then func_append newdlprefiles " $dir/$old_library" # Keep a list of preopened convenience libraries to check # that they are being used correctly in the link pass. 
test -z "$libdir" && \ func_append dlpreconveniencelibs " $dir/$old_library" # Otherwise, use the dlname, so that lt_dlopen finds it. elif test -n "$dlname"; then func_append newdlprefiles " $dir/$dlname" else func_append newdlprefiles " $dir/$linklib" fi ;; esac fi # $pass = dlpreopen if test -z "$libdir"; then # Link the convenience library if test "$linkmode" = lib; then deplibs="$dir/$old_library $deplibs" elif test "$linkmode,$pass" = "prog,link"; then compile_deplibs="$dir/$old_library $compile_deplibs" finalize_deplibs="$dir/$old_library $finalize_deplibs" else deplibs="$lib $deplibs" # used for prog,scan pass fi continue fi if test "$linkmode" = prog && test "$pass" != link; then func_append newlib_search_path " $ladir" deplibs="$lib $deplibs" linkalldeplibs=no if test "$link_all_deplibs" != no || test -z "$library_names" || test "$build_libtool_libs" = no; then linkalldeplibs=yes fi tmp_libs= for deplib in $dependency_libs; do case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result" func_append newlib_search_path " $func_resolve_sysroot_result" ;; esac # Need to link against all dependency_libs? if test "$linkalldeplibs" = yes; then deplibs="$deplib $deplibs" else # Need to hardcode shared library paths # or/and link against static libraries newdependency_libs="$deplib $newdependency_libs" fi if $opt_preserve_dup_deps ; then case "$tmp_libs " in *" $deplib "*) func_append specialdeplibs " $deplib" ;; esac fi func_append tmp_libs " $deplib" done # for deplib continue fi # $linkmode = prog... if test "$linkmode,$pass" = "prog,link"; then if test -n "$library_names" && { { test "$prefer_static_libs" = no || test "$prefer_static_libs,$installed" = "built,yes"; } || test -z "$old_library"; }; then # We need to hardcode the library path if test -n "$shlibpath_var" && test -z "$avoidtemprpath" ; then # Make sure the rpath contains only unique directories. case "$temp_rpath:" in *"$absdir:"*) ;; *) func_append temp_rpath "$absdir:" ;; esac fi # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi # $linkmode,$pass = prog,link... if test "$alldeplibs" = yes && { test "$deplibs_check_method" = pass_all || { test "$build_libtool_libs" = yes && test -n "$library_names"; }; }; then # We only need to search for static libraries continue fi fi link_static=no # Whether the deplib will be linked statically use_static_libs=$prefer_static_libs if test "$use_static_libs" = built && test "$installed" = yes; then use_static_libs=no fi if test -n "$library_names" && { test "$use_static_libs" = no || test -z "$old_library"; }; then case $host in *cygwin* | *mingw* | *cegcc*) # No point in relinking DLLs because paths are not encoded func_append notinst_deplibs " $lib" need_relink=no ;; *) if test "$installed" = no; then func_append notinst_deplibs " $lib" need_relink=yes fi ;; esac # This is a shared library # Warn about portability, can't link against -module's on some # systems (darwin). Don't bleat about dlopened modules though! 
dlopenmodule="" for dlpremoduletest in $dlprefiles; do if test "X$dlpremoduletest" = "X$lib"; then dlopenmodule="$dlpremoduletest" break fi done if test -z "$dlopenmodule" && test "$shouldnotlink" = yes && test "$pass" = link; then echo if test "$linkmode" = prog; then $ECHO "*** Warning: Linking the executable $output against the loadable module" else $ECHO "*** Warning: Linking the shared library $output against the loadable module" fi $ECHO "*** $linklib is not portable!" fi if test "$linkmode" = lib && test "$hardcode_into_libs" = yes; then # Hardcode the library path. # Skip directories that are in the system default run-time # search path. case " $sys_lib_dlsearch_path " in *" $absdir "*) ;; *) case "$compile_rpath " in *" $absdir "*) ;; *) func_append compile_rpath " $absdir" ;; esac ;; esac case " $sys_lib_dlsearch_path " in *" $libdir "*) ;; *) case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac ;; esac fi if test -n "$old_archive_from_expsyms_cmds"; then # figure out the soname set dummy $library_names shift realname="$1" shift libname=`eval "\\$ECHO \"$libname_spec\""` # use dlname if we got it. it's perfectly good, no? if test -n "$dlname"; then soname="$dlname" elif test -n "$soname_spec"; then # bleh windows case $host in *cygwin* | mingw* | *cegcc*) func_arith $current - $age major=$func_arith_result versuffix="-$major" ;; esac eval soname=\"$soname_spec\" else soname="$realname" fi # Make a new name for the extract_expsyms_cmds to use soroot="$soname" func_basename "$soroot" soname="$func_basename_result" func_stripname 'lib' '.dll' "$soname" newlib=libimp-$func_stripname_result.a # If the library has no export list, then create one now if test -f "$output_objdir/$soname-def"; then : else func_verbose "extracting exported symbol list from \`$soname'" func_execute_cmds "$extract_expsyms_cmds" 'exit $?' fi # Create $newlib if test -f "$output_objdir/$newlib"; then :; else func_verbose "generating import library for \`$soname'" func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?' 
fi # make sure the library variables are pointing to the new library dir=$output_objdir linklib=$newlib fi # test -n "$old_archive_from_expsyms_cmds" if test "$linkmode" = prog || test "$opt_mode" != relink; then add_shlibpath= add_dir= add= lib_linked=yes case $hardcode_action in immediate | unsupported) if test "$hardcode_direct" = no; then add="$dir/$linklib" case $host in *-*-sco3.2v5.0.[024]*) add_dir="-L$dir" ;; *-*-sysv4*uw2*) add_dir="-L$dir" ;; *-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \ *-*-unixware7*) add_dir="-L$dir" ;; *-*-darwin* ) # if the lib is a (non-dlopened) module then we can not # link against it, someone is ignoring the earlier warnings if /usr/bin/file -L $add 2> /dev/null | $GREP ": [^:]* bundle" >/dev/null ; then if test "X$dlopenmodule" != "X$lib"; then $ECHO "*** Warning: lib $linklib is a module, not a shared library" if test -z "$old_library" ; then echo echo "*** And there doesn't seem to be a static archive available" echo "*** The link will probably fail, sorry" else add="$dir/$old_library" fi elif test -n "$old_library"; then add="$dir/$old_library" fi fi esac elif test "$hardcode_minus_L" = no; then case $host in *-*-sunos*) add_shlibpath="$dir" ;; esac add_dir="-L$dir" add="-l$name" elif test "$hardcode_shlibpath_var" = no; then add_shlibpath="$dir" add="-l$name" else lib_linked=no fi ;; relink) if test "$hardcode_direct" = yes && test "$hardcode_direct_absolute" = no; then add="$dir/$linklib" elif test "$hardcode_minus_L" = yes; then add_dir="-L$absdir" # Try looking first in the location we're being installed to. if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add="-l$name" elif test "$hardcode_shlibpath_var" = yes; then add_shlibpath="$dir" add="-l$name" else lib_linked=no fi ;; *) lib_linked=no ;; esac if test "$lib_linked" != yes; then func_fatal_configuration "unsupported hardcode properties" fi if test -n "$add_shlibpath"; then case :$compile_shlibpath: in *":$add_shlibpath:"*) ;; *) func_append compile_shlibpath "$add_shlibpath:" ;; esac fi if test "$linkmode" = prog; then test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs" test -n "$add" && compile_deplibs="$add $compile_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" if test "$hardcode_direct" != yes && test "$hardcode_minus_L" != yes && test "$hardcode_shlibpath_var" = yes; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac fi fi fi if test "$linkmode" = prog || test "$opt_mode" = relink; then add_shlibpath= add_dir= add= # Finalize command for both is simple: just hardcode it. if test "$hardcode_direct" = yes && test "$hardcode_direct_absolute" = no; then add="$libdir/$linklib" elif test "$hardcode_minus_L" = yes; then add_dir="-L$libdir" add="-l$name" elif test "$hardcode_shlibpath_var" = yes; then case :$finalize_shlibpath: in *":$libdir:"*) ;; *) func_append finalize_shlibpath "$libdir:" ;; esac add="-l$name" elif test "$hardcode_automatic" = yes; then if test -n "$inst_prefix_dir" && test -f "$inst_prefix_dir$libdir/$linklib" ; then add="$inst_prefix_dir$libdir/$linklib" else add="$libdir/$linklib" fi else # We cannot seem to hardcode it, guess we'll fake it. add_dir="-L$libdir" # Try looking first in the location we're being installed to. 
if test -n "$inst_prefix_dir"; then case $libdir in [\\/]*) func_append add_dir " -L$inst_prefix_dir$libdir" ;; esac fi add="-l$name" fi if test "$linkmode" = prog; then test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs" test -n "$add" && finalize_deplibs="$add $finalize_deplibs" else test -n "$add_dir" && deplibs="$add_dir $deplibs" test -n "$add" && deplibs="$add $deplibs" fi fi elif test "$linkmode" = prog; then # Here we assume that one of hardcode_direct or hardcode_minus_L # is not unsupported. This is valid on all known static and # shared platforms. if test "$hardcode_direct" != unsupported; then test -n "$old_library" && linklib="$old_library" compile_deplibs="$dir/$linklib $compile_deplibs" finalize_deplibs="$dir/$linklib $finalize_deplibs" else compile_deplibs="-l$name -L$dir $compile_deplibs" finalize_deplibs="-l$name -L$dir $finalize_deplibs" fi elif test "$build_libtool_libs" = yes; then # Not a shared library if test "$deplibs_check_method" != pass_all; then # We're trying link a shared library against a static one # but the system doesn't support it. # Just print a warning and add the library to dependency_libs so # that the program can be linked against the static library. echo $ECHO "*** Warning: This system can not link to static lib archive $lib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have." if test "$module" = yes; then echo "*** But as you try to build a module library, libtool will still create " echo "*** a static module, that should work as long as the dlopening application" echo "*** is linked with the -dlopen flag to resolve symbols at runtime." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using \`nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** \`nm' from GNU binutils and a full rebuild may help." fi if test "$build_old_libs" = no; then build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi else deplibs="$dir/$old_library $deplibs" link_static=yes fi fi # link shared/static library? if test "$linkmode" = lib; then if test -n "$dependency_libs" && { test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes || test "$link_static" = yes; }; then # Extract -R from dependency_libs temp_deplibs= for libdir in $dependency_libs; do case $libdir in -R*) func_stripname '-R' '' "$libdir" temp_xrpath=$func_stripname_result case " $xrpath " in *" $temp_xrpath "*) ;; *) func_append xrpath " $temp_xrpath";; esac;; *) func_append temp_deplibs " $libdir";; esac done dependency_libs="$temp_deplibs" fi func_append newlib_search_path " $absdir" # Link against this library test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs" # ... 
and its dependency_libs tmp_libs= for deplib in $dependency_libs; do newdependency_libs="$deplib $newdependency_libs" case $deplib in -L*) func_stripname '-L' '' "$deplib" func_resolve_sysroot "$func_stripname_result";; *) func_resolve_sysroot "$deplib" ;; esac if $opt_preserve_dup_deps ; then case "$tmp_libs " in *" $func_resolve_sysroot_result "*) func_append specialdeplibs " $func_resolve_sysroot_result" ;; esac fi func_append tmp_libs " $func_resolve_sysroot_result" done if test "$link_all_deplibs" != no; then # Add the search paths of all dependency libraries for deplib in $dependency_libs; do path= case $deplib in -L*) path="$deplib" ;; *.la) func_resolve_sysroot "$deplib" deplib=$func_resolve_sysroot_result func_dirname "$deplib" "" "." dir=$func_dirname_result # We need an absolute path. case $dir in [\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;; *) absdir=`cd "$dir" && pwd` if test -z "$absdir"; then func_warning "cannot determine absolute directory name of \`$dir'" absdir="$dir" fi ;; esac if $GREP "^installed=no" $deplib > /dev/null; then case $host in *-*-darwin*) depdepl= eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib` if test -n "$deplibrary_names" ; then for tmp in $deplibrary_names ; do depdepl=$tmp done if test -f "$absdir/$objdir/$depdepl" ; then depdepl="$absdir/$objdir/$depdepl" darwin_install_name=`${OTOOL} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` if test -z "$darwin_install_name"; then darwin_install_name=`${OTOOL64} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` fi func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}" func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}" path= fi fi ;; *) path="-L$absdir/$objdir" ;; esac else eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib` test -z "$libdir" && \ func_fatal_error "\`$deplib' is not a valid libtool archive" test "$absdir" != "$libdir" && \ func_warning "\`$deplib' seems to be moved" path="-L$absdir" fi ;; esac case " $deplibs " in *" $path "*) ;; *) deplibs="$path $deplibs" ;; esac done fi # link_all_deplibs != no fi # linkmode = lib done # for deplib in $libs if test "$pass" = link; then if test "$linkmode" = "prog"; then compile_deplibs="$new_inherited_linker_flags $compile_deplibs" finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs" else compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` fi fi dependency_libs="$newdependency_libs" if test "$pass" = dlpreopen; then # Link the dlpreopened libraries before other libraries for deplib in $save_deplibs; do deplibs="$deplib $deplibs" done fi if test "$pass" != dlopen; then if test "$pass" != conv; then # Make sure lib_search_path contains only unique directories. 
lib_search_path= for dir in $newlib_search_path; do case "$lib_search_path " in *" $dir "*) ;; *) func_append lib_search_path " $dir" ;; esac done newlib_search_path= fi if test "$linkmode,$pass" != "prog,link"; then vars="deplibs" else vars="compile_deplibs finalize_deplibs" fi for var in $vars dependency_libs; do # Add libraries to $var in reverse order eval tmp_libs=\"\$$var\" new_libs= for deplib in $tmp_libs; do # FIXME: Pedantically, this is the right thing to do, so # that some nasty dependency loop isn't accidentally # broken: #new_libs="$deplib $new_libs" # Pragmatically, this seems to cause very few problems in # practice: case $deplib in -L*) new_libs="$deplib $new_libs" ;; -R*) ;; *) # And here is the reason: when a library appears more # than once as an explicit dependence of a library, or # is implicitly linked in more than once by the # compiler, it is considered special, and multiple # occurrences thereof are not removed. Compare this # with having the same library being listed as a # dependency of multiple other libraries: in this case, # we know (pedantically, we assume) the library does not # need to be listed more than once, so we keep only the # last copy. This is not always right, but it is rare # enough that we require users that really mean to play # such unportable linking tricks to link the library # using -Wl,-lname, so that libtool does not consider it # for duplicate removal. case " $specialdeplibs " in *" $deplib "*) new_libs="$deplib $new_libs" ;; *) case " $new_libs " in *" $deplib "*) ;; *) new_libs="$deplib $new_libs" ;; esac ;; esac ;; esac done tmp_libs= for deplib in $new_libs; do case $deplib in -L*) case " $tmp_libs " in *" $deplib "*) ;; *) func_append tmp_libs " $deplib" ;; esac ;; *) func_append tmp_libs " $deplib" ;; esac done eval $var=\"$tmp_libs\" done # for var fi # Last step: remove runtime libs from dependency_libs # (they stay in deplibs) tmp_libs= for i in $dependency_libs ; do case " $predeps $postdeps $compiler_lib_search_path " in *" $i "*) i="" ;; esac if test -n "$i" ; then func_append tmp_libs " $i" fi done dependency_libs=$tmp_libs done # for pass if test "$linkmode" = prog; then dlfiles="$newdlfiles" fi if test "$linkmode" = prog || test "$linkmode" = lib; then dlprefiles="$newdlprefiles" fi case $linkmode in oldlib) if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then func_warning "\`-dlopen' is ignored for archives" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "\`-l' and \`-L' are ignored for archives" ;; esac test -n "$rpath" && \ func_warning "\`-rpath' is ignored for archives" test -n "$xrpath" && \ func_warning "\`-R' is ignored for archives" test -n "$vinfo" && \ func_warning "\`-version-info/-version-number' is ignored for archives" test -n "$release" && \ func_warning "\`-release' is ignored for archives" test -n "$export_symbols$export_symbols_regex" && \ func_warning "\`-export-symbols' is ignored for archives" # Now set the variables for building old libraries. build_libtool_libs=no oldlibs="$output" func_append objs "$old_deplibs" ;; lib) # Make sure we only generate libraries of the form `libNAME.la'. 
case $outputname in lib*) func_stripname 'lib' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" ;; *) test "$module" = no && \ func_fatal_help "libtool library \`$output' must begin with \`lib'" if test "$need_lib_prefix" != no; then # Add the "lib" prefix for modules if required func_stripname '' '.la' "$outputname" name=$func_stripname_result eval shared_ext=\"$shrext_cmds\" eval libname=\"$libname_spec\" else func_stripname '' '.la' "$outputname" libname=$func_stripname_result fi ;; esac if test -n "$objs"; then if test "$deplibs_check_method" != pass_all; then func_fatal_error "cannot build libtool library \`$output' from non-libtool objects on this host:$objs" else echo $ECHO "*** Warning: Linking the shared library $output against the non-libtool" $ECHO "*** objects $objs is not portable!" func_append libobjs " $objs" fi fi test "$dlself" != no && \ func_warning "\`-dlopen self' is ignored for libtool libraries" set dummy $rpath shift test "$#" -gt 1 && \ func_warning "ignoring multiple \`-rpath's for a libtool library" install_libdir="$1" oldlibs= if test -z "$rpath"; then if test "$build_libtool_libs" = yes; then # Building a libtool convenience library. # Some compilers have problems with a `.al' extension so # convenience libraries should have the same extension an # archive normally would. oldlibs="$output_objdir/$libname.$libext $oldlibs" build_libtool_libs=convenience build_old_libs=yes fi test -n "$vinfo" && \ func_warning "\`-version-info/-version-number' is ignored for convenience libraries" test -n "$release" && \ func_warning "\`-release' is ignored for convenience libraries" else # Parse the version information argument. save_ifs="$IFS"; IFS=':' set dummy $vinfo 0 0 0 shift IFS="$save_ifs" test -n "$7" && \ func_fatal_help "too many parameters to \`-version-info'" # convert absolute version numbers to libtool ages # this retains compatibility with .la files and attempts # to make the code below a bit more comprehensible case $vinfo_number in yes) number_major="$1" number_minor="$2" number_revision="$3" # # There are really only two kinds -- those that # use the current revision as the major version # and those that subtract age and use age as # a minor version. But, then there is irix # which has an extra 1 added just for fun # case $version_type in # correct linux to gnu/linux during the next big refactor darwin|linux|osf|windows|none) func_arith $number_major + $number_minor current=$func_arith_result age="$number_minor" revision="$number_revision" ;; freebsd-aout|freebsd-elf|qnx|sunos) current="$number_major" revision="$number_minor" age="0" ;; irix|nonstopux) func_arith $number_major + $number_minor current=$func_arith_result age="$number_minor" revision="$number_minor" lt_irix_increment=no ;; *) func_fatal_configuration "$modename: unknown library version type \`$version_type'" ;; esac ;; no) current="$1" revision="$2" age="$3" ;; esac # Check that each of the things are valid numbers. 
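# Worked example (not part of the upstream libtool script): with
# `-version-number 3:2:1' and a gnu/linux $version_type, the conversion above
# gives
#
#   current  = 3 + 2 = 5       (number_major + number_minor)
#   age      = 2               (number_minor)
#   revision = 1               (number_revision)
#
# which is exactly what `-version-info 5:1:2' (current:revision:age) would
# have specified directly.  Together with the `linux' branch further down this
# yields, for a hypothetical libfoo, the file libfoo.so.3.2.1 with soname
# libfoo.so.3.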
case $current in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "CURRENT \`$current' must be a nonnegative integer" func_fatal_error "\`$vinfo' is not valid version information" ;; esac case $revision in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "REVISION \`$revision' must be a nonnegative integer" func_fatal_error "\`$vinfo' is not valid version information" ;; esac case $age in 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; *) func_error "AGE \`$age' must be a nonnegative integer" func_fatal_error "\`$vinfo' is not valid version information" ;; esac if test "$age" -gt "$current"; then func_error "AGE \`$age' is greater than the current interface number \`$current'" func_fatal_error "\`$vinfo' is not valid version information" fi # Calculate the version variables. major= versuffix= verstring= case $version_type in none) ;; darwin) # Like Linux, but with the current version available in # verstring for coding it into the library header func_arith $current - $age major=.$func_arith_result versuffix="$major.$age.$revision" # Darwin ld doesn't like 0 for these options... func_arith $current + 1 minor_current=$func_arith_result xlcverstring="${wl}-compatibility_version ${wl}$minor_current ${wl}-current_version ${wl}$minor_current.$revision" verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" ;; freebsd-aout) major=".$current" versuffix=".$current.$revision"; ;; freebsd-elf) major=".$current" versuffix=".$current" ;; irix | nonstopux) if test "X$lt_irix_increment" = "Xno"; then func_arith $current - $age else func_arith $current - $age + 1 fi major=$func_arith_result case $version_type in nonstopux) verstring_prefix=nonstopux ;; *) verstring_prefix=sgi ;; esac verstring="$verstring_prefix$major.$revision" # Add in all the interfaces that we are compatible with. loop=$revision while test "$loop" -ne 0; do func_arith $revision - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring="$verstring_prefix$major.$iface:$verstring" done # Before this point, $major must not contain `.'. major=.$major versuffix="$major.$revision" ;; linux) # correct to gnu/linux during the next big refactor func_arith $current - $age major=.$func_arith_result versuffix="$major.$age.$revision" ;; osf) func_arith $current - $age major=.$func_arith_result versuffix=".$current.$age.$revision" verstring="$current.$age.$revision" # Add in all the interfaces that we are compatible with. loop=$age while test "$loop" -ne 0; do func_arith $current - $loop iface=$func_arith_result func_arith $loop - 1 loop=$func_arith_result verstring="$verstring:${iface}.0" done # Make executables depend on our current version. func_append verstring ":${current}.0" ;; qnx) major=".$current" versuffix=".$current" ;; sunos) major=".$current" versuffix=".$current.$revision" ;; windows) # Use '-' rather than '.', since we only want one # extension on DOS 8.3 filesystems. func_arith $current - $age major=$func_arith_result versuffix="-$major" ;; *) func_fatal_configuration "unknown library version type \`$version_type'" ;; esac # Clear the version info if we defaulted, and they specified a release. 
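# Worked example (not part of the upstream libtool script): for an interface
# history of current=5, revision=1, age=2 the branches above give, for a
# hypothetical libfoo:
#
#   linux (gnu/linux):  major=.3  versuffix=.3.2.1  -> libfoo.so.3.2.1
#   darwin:             major=.3  versuffix=.3.2.1  plus
#                       -compatibility_version 6 -current_version 6.1
#   freebsd-elf:        major=.5  versuffix=.5      -> libfoo.so.5
#   windows:            major=3   versuffix=-3      -> typically libfoo-3.dll
#                       ('-' instead of '.' for DOS 8.3 file systems)
#
# Only the "current - age" part ends up in the ELF file names, which is what
# allows binaries linked against an older, still-compatible interface to keep
# loading the newer library.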
if test -z "$vinfo" && test -n "$release"; then major= case $version_type in darwin) # we can't check for "0.0" in archive_cmds due to quoting # problems, so we reset it completely verstring= ;; *) verstring="0.0" ;; esac if test "$need_version" = no; then versuffix= else versuffix=".0.0" fi fi # Remove version info from name if versioning should be avoided if test "$avoid_version" = yes && test "$need_version" = no; then major= versuffix= verstring="" fi # Check to see if the archive will have undefined symbols. if test "$allow_undefined" = yes; then if test "$allow_undefined_flag" = unsupported; then func_warning "undefined symbols not allowed in $host shared libraries" build_libtool_libs=no build_old_libs=yes fi else # Don't allow undefined symbols. allow_undefined_flag="$no_undefined_flag" fi fi func_generate_dlsyms "$libname" "$libname" "yes" func_append libobjs " $symfileobj" test "X$libobjs" = "X " && libobjs= if test "$opt_mode" != relink; then # Remove our outputs, but don't remove object files since they # may have been created when compiling PIC objects. removelist= tempremovelist=`$ECHO "$output_objdir/*"` for p in $tempremovelist; do case $p in *.$objext | *.gcno) ;; $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/${libname}${release}.*) if test "X$precious_files_regex" != "X"; then if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1 then continue fi fi func_append removelist " $p" ;; *) ;; esac done test -n "$removelist" && \ func_show_eval "${RM}r \$removelist" fi # Now set the variables for building old libraries. if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then func_append oldlibs " $output_objdir/$libname.$libext" # Transform .lo files to .o files. oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP` fi # Eliminate all temporary directories. #for path in $notinst_path; do # lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"` # deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"` # dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"` #done if test -n "$xrpath"; then # If the user specified any rpath flags, then add them. temp_xrpath= for libdir in $xrpath; do func_replace_sysroot "$libdir" func_append temp_xrpath " -R$func_replace_sysroot_result" case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then dependency_libs="$temp_xrpath $dependency_libs" fi fi # Make sure dlfiles contains only unique files that won't be dlpreopened old_dlfiles="$dlfiles" dlfiles= for lib in $old_dlfiles; do case " $dlprefiles $dlfiles " in *" $lib "*) ;; *) func_append dlfiles " $lib" ;; esac done # Make sure dlprefiles contains only unique files old_dlprefiles="$dlprefiles" dlprefiles= for lib in $old_dlprefiles; do case "$dlprefiles " in *" $lib "*) ;; *) func_append dlprefiles " $lib" ;; esac done if test "$build_libtool_libs" = yes; then if test -n "$rpath"; then case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*) # these systems don't actually have a c library (as such)! ;; *-*-rhapsody* | *-*-darwin1.[012]) # Rhapsody C library is in the System framework func_append deplibs " System.ltframework" ;; *-*-netbsd*) # Don't link with libc until the a.out ld.so is fixed. ;; *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) # Do not include libc due to us having libc/libc_r. 
;; *-*-sco3.2v5* | *-*-sco5v6*) # Causes problems with __ctype ;; *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) # Compiler inserts libc in the correct place for threads to work ;; *) # Add libc to deplibs on all other systems if necessary. if test "$build_libtool_need_lc" = "yes"; then func_append deplibs " -lc" fi ;; esac fi # Transform deplibs into only deplibs that can be linked in shared. name_save=$name libname_save=$libname release_save=$release versuffix_save=$versuffix major_save=$major # I'm not sure if I'm treating the release correctly. I think # release should show up in the -l (ie -lgmp5) so we don't want to # add it in twice. Is that correct? release="" versuffix="" major="" newdeplibs= droppeddeps=no case $deplibs_check_method in pass_all) # Don't check for shared/static. Everything works. # This might be a little naive. We might want to check # whether the library exists or not. But this is on # osf3 & osf4 and I'm not really sure... Just # implementing what was already the behavior. newdeplibs=$deplibs ;; test_compile) # This code stresses the "libraries are programs" paradigm to its # limits. Maybe even breaks it. We compile a program, linking it # against the deplibs as a proxy for the library. Then we can check # whether they linked in statically or dynamically with ldd. $opt_dry_run || $RM conftest.c cat > conftest.c </dev/null` $nocaseglob else potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null` fi for potent_lib in $potential_libs; do # Follow soft links. if ls -lLd "$potent_lib" 2>/dev/null | $GREP " -> " >/dev/null; then continue fi # The statement above tries to avoid entering an # endless loop below, in case of cyclic links. # We might still enter an endless loop, since a link # loop can be closed while we follow links, # but so what? potlib="$potent_lib" while test -h "$potlib" 2>/dev/null; do potliblink=`ls -ld $potlib | ${SED} 's/.* -> //'` case $potliblink in [\\/]* | [A-Za-z]:[\\/]*) potlib="$potliblink";; *) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";; esac done if eval $file_magic_cmd \"\$potlib\" 2>/dev/null | $SED -e 10q | $EGREP "$file_magic_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib="" break 2 fi done done fi if test -n "$a_deplib" ; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib" ; then $ECHO "*** with $libname but no candidates were found. (...for file magic test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a file magic. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. 
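# Illustrative sketch (not part of the upstream libtool script): the
# file_magic branch above boils down to asking $file_magic_cmd whether a
# candidate found on the search path really is a shared object, roughly
#
#   file /usr/lib/libm.so.6 | sed 10q | grep -E 'ELF .* shared object'
#
# Only if some candidate matches is "-lm" kept in $newdeplibs; otherwise the
# dependency is dropped, $droppeddeps is set and the warning above is printed.
# The command, path and regular expression shown here are typical values for
# illustration only -- the real ones come from the libtool configuration
# ($file_magic_cmd, $file_magic_regex).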
;; match_pattern*) set dummy $deplibs_check_method; shift match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` for a_deplib in $deplibs; do case $a_deplib in -l*) func_stripname -l '' "$a_deplib" name=$func_stripname_result if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then case " $predeps $postdeps " in *" $a_deplib "*) func_append newdeplibs " $a_deplib" a_deplib="" ;; esac fi if test -n "$a_deplib" ; then libname=`eval "\\$ECHO \"$libname_spec\""` for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do potential_libs=`ls $i/$libname[.-]* 2>/dev/null` for potent_lib in $potential_libs; do potlib="$potent_lib" # see symlink-check above in file_magic test if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \ $EGREP "$match_pattern_regex" > /dev/null; then func_append newdeplibs " $a_deplib" a_deplib="" break 2 fi done done fi if test -n "$a_deplib" ; then droppeddeps=yes echo $ECHO "*** Warning: linker path does not have real file for library $a_deplib." echo "*** I have the capability to make that library automatically link in when" echo "*** you link to this library. But I can only do this if you have a" echo "*** shared version of the library, which you do not appear to have" echo "*** because I did check the linker path looking for a file starting" if test -z "$potlib" ; then $ECHO "*** with $libname but no candidates were found. (...for regex pattern test)" else $ECHO "*** with $libname and none of the candidates passed a file format test" $ECHO "*** using a regex pattern. Last file checked: $potlib" fi fi ;; *) # Add a -L argument. func_append newdeplibs " $a_deplib" ;; esac done # Gone through all deplibs. ;; none | unknown | *) newdeplibs="" tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'` if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then for i in $predeps $postdeps ; do # can't use Xsed below, because $i might contain '/' tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s,$i,,"` done fi case $tmp_deplibs in *[!\ \ ]*) echo if test "X$deplibs_check_method" = "Xnone"; then echo "*** Warning: inter-library dependencies are not supported in this platform." else echo "*** Warning: inter-library dependencies are not known to be supported." fi echo "*** All declared inter-library dependencies are being dropped." droppeddeps=yes ;; esac ;; esac versuffix=$versuffix_save major=$major_save release=$release_save libname=$libname_save name=$name_save case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library with the System framework newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac if test "$droppeddeps" = yes; then if test "$module" = yes; then echo echo "*** Warning: libtool could not satisfy all declared inter-library" $ECHO "*** dependencies of module $libname. Therefore, libtool will create" echo "*** a static module, that should work as long as the dlopening" echo "*** application is linked with the -dlopen flag." if test -z "$global_symbol_pipe"; then echo echo "*** However, this would only work if libtool was able to extract symbol" echo "*** lists from a program, using \`nm' or equivalent, but libtool could" echo "*** not find such a program. So, this module is probably useless." echo "*** \`nm' from GNU binutils and a full rebuild may help." 
fi if test "$build_old_libs" = no; then oldlibs="$output_objdir/$libname.$libext" build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi else echo "*** The inter-library dependencies that have been dropped here will be" echo "*** automatically added whenever a program is linked with this library" echo "*** or is declared to -dlopen it." if test "$allow_undefined" = no; then echo echo "*** Since this library must not contain undefined symbols," echo "*** because either the platform does not support them or" echo "*** it was explicitly requested with -no-undefined," echo "*** libtool will only create a static version of it." if test "$build_old_libs" = no; then oldlibs="$output_objdir/$libname.$libext" build_libtool_libs=module build_old_libs=yes else build_libtool_libs=no fi fi fi fi # Done checking deplibs! deplibs=$newdeplibs fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" case $host in *-*-darwin*) newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done deplibs="$new_libs" # All the library-specific variables (install_libdir is set above). library_names= old_library= dlname= # Test again, we may have decided not to build it any more if test "$build_libtool_libs" = yes; then # Remove ${wl} instances when linking with ld. # FIXME: should test the right _cmds variable. case $archive_cmds in *\$LD\ *) wl= ;; esac if test "$hardcode_into_libs" = yes; then # Hardcode the library paths hardcode_libdirs= dep_rpath= rpath="$finalize_rpath" test "$opt_mode" != relink && rpath="$compile_rpath$rpath" for libdir in $rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then func_replace_sysroot "$libdir" libdir=$func_replace_sysroot_result if test -z "$hardcode_libdirs"; then hardcode_libdirs="$libdir" else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append dep_rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir="$hardcode_libdirs" eval "dep_rpath=\"$hardcode_libdir_flag_spec\"" fi if test -n "$runpath_var" && test -n "$perm_rpath"; then # We should set the runpath_var. 
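# Illustrative sketch (not part of the upstream libtool script): when the
# platform defines a $hardcode_libdir_separator, the rpath loop above folds
# every directory into one flag rather than emitting one flag per directory.
# Assuming a GNU-ld style configuration such as
#
#   hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
#   hardcode_libdir_separator=:
#
# and the libdirs /opt/foo/lib and /usr/local/lib, $hardcode_libdirs becomes
# "/opt/foo/lib:/usr/local/lib" and the single expanded flag is
#
#   -Wl,-rpath -Wl,/opt/foo/lib:/usr/local/lib
#
# The spec and separator are assumed typical values, not constants of this
# script; hosts without a separator get one flag appended per directory
# instead.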
rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var" fi test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs" fi shlibpath="$finalize_shlibpath" test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath" if test -n "$shlibpath"; then eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var" fi # Get the real and link names of the library. eval shared_ext=\"$shrext_cmds\" eval library_names=\"$library_names_spec\" set dummy $library_names shift realname="$1" shift if test -n "$soname_spec"; then eval soname=\"$soname_spec\" else soname="$realname" fi if test -z "$dlname"; then dlname=$soname fi lib="$output_objdir/$realname" linknames= for link do func_append linknames " $link" done # Use standard objects if they are pic test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP` test "X$libobjs" = "X " && libobjs= delfiles= if test -n "$export_symbols" && test -n "$include_expsyms"; then $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp" export_symbols="$output_objdir/$libname.uexp" func_append delfiles " $export_symbols" fi orig_export_symbols= case $host_os in cygwin* | mingw* | cegcc*) if test -n "$export_symbols" && test -z "$export_symbols_regex"; then # exporting using user supplied symfile if test "x`$SED 1q $export_symbols`" != xEXPORTS; then # and it's NOT already a .def file. Must figure out # which of the given symbols are data symbols and tag # them as such. So, trigger use of export_symbols_cmds. # export_symbols gets reassigned inside the "prepare # the list of exported symbols" if statement, so the # include_expsyms logic still works. orig_export_symbols="$export_symbols" export_symbols= always_export_symbols=yes fi fi ;; esac # Prepare the list of exported symbols if test -z "$export_symbols"; then if test "$always_export_symbols" = yes || test -n "$export_symbols_regex"; then func_verbose "generating symbol list for \`$libname.la'" export_symbols="$output_objdir/$libname.exp" $opt_dry_run || $RM $export_symbols cmds=$export_symbols_cmds save_ifs="$IFS"; IFS='~' for cmd1 in $cmds; do IFS="$save_ifs" # Take the normal branch if the nm_file_list_spec branch # doesn't work or if tool conversion is not needed. case $nm_file_list_spec~$to_tool_file_cmd in *~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*) try_normal_branch=yes eval cmd=\"$cmd1\" func_len " $cmd" len=$func_len_result ;; *) try_normal_branch=no ;; esac if test "$try_normal_branch" = yes \ && { test "$len" -lt "$max_cmd_len" \ || test "$max_cmd_len" -le -1; } then func_show_eval "$cmd" 'exit $?' skipped_export=false elif test -n "$nm_file_list_spec"; then func_basename "$output" output_la=$func_basename_result save_libobjs=$libobjs save_output=$output output=${output_objdir}/${output_la}.nm func_to_tool_file "$output" libobjs=$nm_file_list_spec$func_to_tool_file_result func_append delfiles " $output" func_verbose "creating $NM input file list: $output" for obj in $save_libobjs; do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > "$output" eval cmd=\"$cmd1\" func_show_eval "$cmd" 'exit $?' output=$save_output libobjs=$save_libobjs skipped_export=false else # The command line is too long to execute in one step. func_verbose "using reloadable object file for export list..." skipped_export=: # Break out early, otherwise skipped_export may be # set to false by a later but shorter cmd. 
break fi done IFS="$save_ifs" if test -n "$export_symbols_regex" && test "X$skipped_export" != "X:"; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi fi if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols="$export_symbols" test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols" $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for \`$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi tmp_deplibs= for test_deplib in $deplibs; do case " $convenience " in *" $test_deplib "*) ;; *) func_append tmp_deplibs " $test_deplib" ;; esac done deplibs="$tmp_deplibs" if test -n "$convenience"; then if test -n "$whole_archive_flag_spec" && test "$compiler_needs_object" = yes && test -z "$libobjs"; then # extract the archives, so we have objects to list. # TODO: could optimize this to just extract one archive. whole_archive_flag_spec= fi if test -n "$whole_archive_flag_spec"; then save_libobjs=$libobjs eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= else gentop="$output_objdir/${outputname}x" func_append generated " $gentop" func_extract_archives $gentop $convenience func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi fi if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then eval flag=\"$thread_safe_flag_spec\" func_append linker_flags " $flag" fi # Make a backup of the uninstalled library when relinking if test "$opt_mode" = relink; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $? fi # Do each of the archive commands. if test "$module" = yes && test -n "$module_cmds" ; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then eval test_cmds=\"$module_expsym_cmds\" cmds=$module_expsym_cmds else eval test_cmds=\"$module_cmds\" cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then eval test_cmds=\"$archive_expsym_cmds\" cmds=$archive_expsym_cmds else eval test_cmds=\"$archive_cmds\" cmds=$archive_cmds fi fi if test "X$skipped_export" != "X:" && func_len " $test_cmds" && len=$func_len_result && test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then : else # The command line is too long to link in one step, link piecewise # or, if using GNU ld and skipped_export is not :, use a linker # script. # Save the value of $output and $libobjs because we want to # use them later. 
If we have whole_archive_flag_spec, we # want to use save_libobjs as it was before # whole_archive_flag_spec was expanded, because we can't # assume the linker understands whole_archive_flag_spec. # This may have to be revisited, in case too many # convenience libraries get linked in and end up exceeding # the spec. if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then save_libobjs=$libobjs fi save_output=$output func_basename "$output" output_la=$func_basename_result # Clear the reloadable object creation command queue and # initialize k to one. test_cmds= concat_cmds= objlist= last_robj= k=1 if test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "$with_gnu_ld" = yes; then output=${output_objdir}/${output_la}.lnkscript func_verbose "creating GNU ld script: $output" echo 'INPUT (' > $output for obj in $save_libobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done echo ')' >> $output func_append delfiles " $output" func_to_tool_file "$output" output=$func_to_tool_file_result elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then output=${output_objdir}/${output_la}.lnk func_verbose "creating linker input file list: $output" : > $output set x $save_libobjs shift firstobj= if test "$compiler_needs_object" = yes; then firstobj="$1 " shift fi for obj do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" >> $output done func_append delfiles " $output" func_to_tool_file "$output" output=$firstobj\"$file_list_spec$func_to_tool_file_result\" else if test -n "$save_libobjs"; then func_verbose "creating reloadable object files..." output=$output_objdir/$output_la-${k}.$objext eval test_cmds=\"$reload_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 # Loop over the list of objects to be linked. for obj in $save_libobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result if test "X$objlist" = X || test "$len" -lt "$max_cmd_len"; then func_append objlist " $obj" else # The command $test_cmds is almost too long, add a # command to the queue. if test "$k" -eq 1 ; then # The first file doesn't have a previous command to add. reload_objs=$objlist eval concat_cmds=\"$reload_cmds\" else # All subsequent reloadable object files will link in # the last one created. reload_objs="$objlist $last_robj" eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\" fi last_robj=$output_objdir/$output_la-${k}.$objext func_arith $k + 1 k=$func_arith_result output=$output_objdir/$output_la-${k}.$objext objlist=" $obj" func_len " $last_robj" func_arith $len0 + $func_len_result len=$func_arith_result fi done # Handle the remaining objects by creating one last # reloadable object file. All subsequent reloadable object # files will link in the last one created. test -z "$concat_cmds" || concat_cmds=$concat_cmds~ reload_objs="$objlist $last_robj" eval concat_cmds=\"\${concat_cmds}$reload_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\" fi func_append delfiles " $output" else output= fi if ${skipped_export-false}; then func_verbose "generating symbol list for \`$libname.la'" export_symbols="$output_objdir/$libname.exp" $opt_dry_run || $RM $export_symbols libobjs=$output # Append the command to create the export file. 
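# Illustrative sketch (not part of the upstream libtool script): the fallback
# above works around argument-length limits by linking in pieces.  With many
# objects and a small $max_cmd_len the loop produces a chain of reloadable
# objects, each pulling in the previous one:
#
#   $output_objdir/$output_la-1.$objext   <- first batch of objects
#   $output_objdir/$output_la-2.$objext   <- next batch + ...-1.$objext
#   $output_objdir/$output_la-3.$objext   <- remaining objects + ...-2.$objext
#
# so the final archive/shared-library command only has to name the last
# reloadable object (plus any whole-archive flags).  How many objects go into
# each batch is simply whatever keeps the expanded $reload_cmds under
# $max_cmd_len; the intermediate pieces are removed as the command chain runs
# and the last one is queued in $delfiles.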
test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\" if test -n "$last_robj"; then eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" fi fi test -n "$save_libobjs" && func_verbose "creating a temporary reloadable object file: $output" # Loop through the commands generated above and execute them. save_ifs="$IFS"; IFS='~' for cmd in $concat_cmds; do IFS="$save_ifs" $opt_silent || { func_quote_for_expand "$cmd" eval "func_echo $func_quote_for_expand_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? # Restore the uninstalled library and exit if test "$opt_mode" = relink; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS="$save_ifs" if test -n "$export_symbols_regex" && ${skipped_export-false}; then func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' func_show_eval '$MV "${export_symbols}T" "$export_symbols"' fi fi if ${skipped_export-false}; then if test -n "$export_symbols" && test -n "$include_expsyms"; then tmp_export_symbols="$export_symbols" test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols" $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' fi if test -n "$orig_export_symbols"; then # The given exports_symbols file has to be filtered, so filter it. func_verbose "filter symbol list for \`$libname.la' to tag DATA exports" # FIXME: $output_objdir/$libname.filter potentially contains lots of # 's' commands which not all seds can handle. GNU sed should be fine # though. Also, the filter scales superlinearly with the number of # global variables. join(1) would be nice here, but unfortunately # isn't a blessed tool. $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter func_append delfiles " $export_symbols $output_objdir/$libname.filter" export_symbols=$output_objdir/$libname.def $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols fi fi libobjs=$output # Restore the value of output. output=$save_output if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then eval libobjs=\"\$libobjs $whole_archive_flag_spec\" test "X$libobjs" = "X " && libobjs= fi # Expand the library linking commands again to reset the # value of $libobjs for piecewise linking. # Do each of the archive commands. if test "$module" = yes && test -n "$module_cmds" ; then if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then cmds=$module_expsym_cmds else cmds=$module_cmds fi else if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then cmds=$archive_expsym_cmds else cmds=$archive_cmds fi fi fi if test -n "$delfiles"; then # Append the command to remove temporary files to $cmds. eval cmds=\"\$cmds~\$RM $delfiles\" fi # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop="$output_objdir/${outputname}x" func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append libobjs " $func_extract_archives_result" test "X$libobjs" = "X " && libobjs= fi save_ifs="$IFS"; IFS='~' for cmd in $cmds; do IFS="$save_ifs" eval cmd=\"$cmd\" $opt_silent || { func_quote_for_expand "$cmd" eval "func_echo $func_quote_for_expand_result" } $opt_dry_run || eval "$cmd" || { lt_exit=$? 
# Restore the uninstalled library and exit if test "$opt_mode" = relink; then ( cd "$output_objdir" && \ $RM "${realname}T" && \ $MV "${realname}U" "$realname" ) fi exit $lt_exit } done IFS="$save_ifs" # Restore the uninstalled library and exit if test "$opt_mode" = relink; then $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $? if test -n "$convenience"; then if test -z "$whole_archive_flag_spec"; then func_show_eval '${RM}r "$gentop"' fi fi exit $EXIT_SUCCESS fi # Create links to the real library. for linkname in $linknames; do if test "$realname" != "$linkname"; then func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?' fi done # If -module or -export-dynamic was specified, set the dlname. if test "$module" = yes || test "$export_dynamic" = yes; then # On all known operating systems, these are identical. dlname="$soname" fi fi ;; obj) if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then func_warning "\`-dlopen' is ignored for objects" fi case " $deplibs" in *\ -l* | *\ -L*) func_warning "\`-l' and \`-L' are ignored for objects" ;; esac test -n "$rpath" && \ func_warning "\`-rpath' is ignored for objects" test -n "$xrpath" && \ func_warning "\`-R' is ignored for objects" test -n "$vinfo" && \ func_warning "\`-version-info' is ignored for objects" test -n "$release" && \ func_warning "\`-release' is ignored for objects" case $output in *.lo) test -n "$objs$old_deplibs" && \ func_fatal_error "cannot build library object \`$output' from non-libtool objects" libobj=$output func_lo2o "$libobj" obj=$func_lo2o_result ;; *) libobj= obj="$output" ;; esac # Delete the old objects. $opt_dry_run || $RM $obj $libobj # Objects from convenience libraries. This assumes # single-version convenience libraries. Whenever we create # different ones for PIC/non-PIC, this we'll have to duplicate # the extraction. reload_conv_objs= gentop= # reload_cmds runs $LD directly, so let us get rid of # -Wl from whole_archive_flag_spec and hope we can get by with # turning comma into space.. wl= if test -n "$convenience"; then if test -n "$whole_archive_flag_spec"; then eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\" reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'` else gentop="$output_objdir/${obj}x" func_append generated " $gentop" func_extract_archives $gentop $convenience reload_conv_objs="$reload_objs $func_extract_archives_result" fi fi # If we're not building shared, we need to use non_pic_objs test "$build_libtool_libs" != yes && libobjs="$non_pic_objects" # Create the old-style object. reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test output="$obj" func_execute_cmds "$reload_cmds" 'exit $?' # Exit if we aren't doing a library object file. if test -z "$libobj"; then if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS fi if test "$build_libtool_libs" != yes; then if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi # Create an invalid libtool object if no PIC, so that we don't # accidentally link it into a program. # $show "echo timestamp > $libobj" # $opt_dry_run || eval "echo timestamp > $libobj" || exit $? exit $EXIT_SUCCESS fi if test -n "$pic_flag" || test "$pic_mode" != default; then # Only do commands if we really have different PIC objects. 
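# Illustrative sketch (not part of the upstream libtool script): a .lo
# libtool object is just a small text file naming the two real objects, for
# example (file and object names made up):
#
#   # foo.lo - a libtool object file
#   pic_object='.libs/foo.o'
#   non_pic_object='foo.o'
#
# so the branch that follows only re-runs $reload_cmds for the PIC variant
# when a distinct PIC object can actually exist ($pic_flag set, or $pic_mode
# not "default"); otherwise the non-PIC object produced just above is all
# there is.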
reload_objs="$libobjs $reload_conv_objs" output="$libobj" func_execute_cmds "$reload_cmds" 'exit $?' fi if test -n "$gentop"; then func_show_eval '${RM}r "$gentop"' fi exit $EXIT_SUCCESS ;; prog) case $host in *cygwin*) func_stripname '' '.exe' "$output" output=$func_stripname_result.exe;; esac test -n "$vinfo" && \ func_warning "\`-version-info' is ignored for programs" test -n "$release" && \ func_warning "\`-release' is ignored for programs" test "$preload" = yes \ && test "$dlopen_support" = unknown \ && test "$dlopen_self" = unknown \ && test "$dlopen_self_static" = unknown && \ func_warning "\`LT_INIT([dlopen])' not used. Assuming no dlopen support." case $host in *-*-rhapsody* | *-*-darwin1.[012]) # On Rhapsody replace the C library is the System framework compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'` ;; esac case $host in *-*-darwin*) # Don't allow lazy linking, it breaks C++ global constructors # But is supposedly fixed on 10.4 or later (yay!). if test "$tagname" = CXX ; then case ${MACOSX_DEPLOYMENT_TARGET-10.0} in 10.[0123]) func_append compile_command " ${wl}-bind_at_load" func_append finalize_command " ${wl}-bind_at_load" ;; esac fi # Time to change all our "foo.ltframework" stuff back to "-framework foo" compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` ;; esac # move library search paths that coincide with paths to not yet # installed libraries to the beginning of the library search list new_libs= for path in $notinst_path; do case " $new_libs " in *" -L$path/$objdir "*) ;; *) case " $compile_deplibs " in *" -L$path/$objdir "*) func_append new_libs " -L$path/$objdir" ;; esac ;; esac done for deplib in $compile_deplibs; do case $deplib in -L*) case " $new_libs " in *" $deplib "*) ;; *) func_append new_libs " $deplib" ;; esac ;; *) func_append new_libs " $deplib" ;; esac done compile_deplibs="$new_libs" func_append compile_command " $compile_deplibs" func_append finalize_command " $finalize_deplibs" if test -n "$rpath$xrpath"; then # If the user specified any rpath flags, then add them. for libdir in $rpath $xrpath; do # This is the magic to use -rpath. case "$finalize_rpath " in *" $libdir "*) ;; *) func_append finalize_rpath " $libdir" ;; esac done fi # Now hardcode the library paths rpath= hardcode_libdirs= for libdir in $compile_rpath $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs="$libdir" else # Just accumulate the unique libdirs. 
case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$perm_rpath " in *" $libdir "*) ;; *) func_append perm_rpath " $libdir" ;; esac fi case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) testbindir=`${ECHO} "$libdir" | ${SED} -e 's*/lib$*/bin*'` case :$dllsearchpath: in *":$libdir:"*) ;; ::) dllsearchpath=$libdir;; *) func_append dllsearchpath ":$libdir";; esac case :$dllsearchpath: in *":$testbindir:"*) ;; ::) dllsearchpath=$testbindir;; *) func_append dllsearchpath ":$testbindir";; esac ;; esac done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir="$hardcode_libdirs" eval rpath=\" $hardcode_libdir_flag_spec\" fi compile_rpath="$rpath" rpath= hardcode_libdirs= for libdir in $finalize_rpath; do if test -n "$hardcode_libdir_flag_spec"; then if test -n "$hardcode_libdir_separator"; then if test -z "$hardcode_libdirs"; then hardcode_libdirs="$libdir" else # Just accumulate the unique libdirs. case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) ;; *) func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" ;; esac fi else eval flag=\"$hardcode_libdir_flag_spec\" func_append rpath " $flag" fi elif test -n "$runpath_var"; then case "$finalize_perm_rpath " in *" $libdir "*) ;; *) func_append finalize_perm_rpath " $libdir" ;; esac fi done # Substitute the hardcoded libdirs into the rpath. if test -n "$hardcode_libdir_separator" && test -n "$hardcode_libdirs"; then libdir="$hardcode_libdirs" eval rpath=\" $hardcode_libdir_flag_spec\" fi finalize_rpath="$rpath" if test -n "$libobjs" && test "$build_old_libs" = yes; then # Transform all the library objects into standard objects. compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP` finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP` fi func_generate_dlsyms "$outputname" "@PROGRAM@" "no" # template prelinking step if test -n "$prelink_cmds"; then func_execute_cmds "$prelink_cmds" 'exit $?' fi wrappers_required=yes case $host in *cegcc* | *mingw32ce*) # Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway. wrappers_required=no ;; *cygwin* | *mingw* ) if test "$build_libtool_libs" != yes; then wrappers_required=no fi ;; *) if test "$need_relink" = no || test "$build_libtool_libs" != yes; then wrappers_required=no fi ;; esac if test "$wrappers_required" = no; then # Replace the output file specification. compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'` link_command="$compile_command$compile_rpath" # We have no uninstalled library dependencies, so finalize right now. exit_status=0 func_show_eval "$link_command" 'exit_status=$?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Delete the generated files. 
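# Illustrative sketch (not part of the upstream libtool script): on PE/DLL
# hosts the cygwin/mingw/cegcc branch of the rpath loop above also primes
# $dllsearchpath with the matching bin directory, because DLLs are installed
# next to executables rather than next to import libraries.  The sed
# expression uses '*' as its delimiter, so
#
#   echo /usr/local/lib | sed -e 's*/lib$*/bin*'    # prints /usr/local/bin
#
# i.e. a trailing "/lib" is rewritten to "/bin"; a directory that does not
# end in /lib comes back unchanged and the duplicate check then skips it.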
if test -f "$output_objdir/${outputname}S.${objext}"; then func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"' fi exit $exit_status fi if test -n "$compile_shlibpath$finalize_shlibpath"; then compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command" fi if test -n "$finalize_shlibpath"; then finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command" fi compile_var= finalize_var= if test -n "$runpath_var"; then if test -n "$perm_rpath"; then # We should set the runpath_var. rpath= for dir in $perm_rpath; do func_append rpath "$dir:" done compile_var="$runpath_var=\"$rpath\$$runpath_var\" " fi if test -n "$finalize_perm_rpath"; then # We should set the runpath_var. rpath= for dir in $finalize_perm_rpath; do func_append rpath "$dir:" done finalize_var="$runpath_var=\"$rpath\$$runpath_var\" " fi fi if test "$no_install" = yes; then # We don't need to create a wrapper script. link_command="$compile_var$compile_command$compile_rpath" # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'` # Delete the old output file. $opt_dry_run || $RM $output # Link the executable and exit func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi exit $EXIT_SUCCESS fi if test "$hardcode_action" = relink; then # Fast installation is not supported link_command="$compile_var$compile_command$compile_rpath" relink_command="$finalize_var$finalize_command$finalize_rpath" func_warning "this platform does not like uninstalled shared libraries" func_warning "\`$output' will be relinked during installation" else if test "$fast_install" != no; then link_command="$finalize_var$compile_command$finalize_rpath" if test "$fast_install" = yes; then relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'` else # fast_install is set to needless relink_command= fi else link_command="$compile_var$compile_command$compile_rpath" relink_command="$finalize_var$finalize_command$finalize_rpath" fi fi # Replace the output file specification. link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'` # Delete the old output files. $opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname func_show_eval "$link_command" 'exit $?' if test -n "$postlink_cmds"; then func_to_tool_file "$output_objdir/$outputname" postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` func_execute_cmds "$postlink_cmds" 'exit $?' fi # Now create the wrapper script. func_verbose "creating $output" # Quote the relink command for shipping. 
if test -n "$relink_command"; then # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_for_eval "$var_value" relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" fi done relink_command="(cd `pwd`; $relink_command)" relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` fi # Only actually do things if not in dry run mode. $opt_dry_run || { # win32 will think the script is a binary if it has # a .exe suffix, so we strip it off here. case $output in *.exe) func_stripname '' '.exe' "$output" output=$func_stripname_result ;; esac # test for cygwin because mv fails w/o .exe extensions case $host in *cygwin*) exeext=.exe func_stripname '' '.exe' "$outputname" outputname=$func_stripname_result ;; *) exeext= ;; esac case $host in *cygwin* | *mingw* ) func_dirname_and_basename "$output" "" "." output_name=$func_basename_result output_path=$func_dirname_result cwrappersource="$output_path/$objdir/lt-$output_name.c" cwrapper="$output_path/$output_name.exe" $RM $cwrappersource $cwrapper trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15 func_emit_cwrapperexe_src > $cwrappersource # The wrapper executable is built using the $host compiler, # because it contains $host paths and files. If cross- # compiling, it, like the target executable, must be # executed on the $host or under an emulation environment. $opt_dry_run || { $LTCC $LTCFLAGS -o $cwrapper $cwrappersource $STRIP $cwrapper } # Now, create the wrapper script for func_source use: func_ltwrapper_scriptname $cwrapper $RM $func_ltwrapper_scriptname_result trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15 $opt_dry_run || { # note: this script will not be executed, so do not chmod. if test "x$build" = "x$host" ; then $cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result else func_emit_wrapper no > $func_ltwrapper_scriptname_result fi } ;; * ) $RM $output trap "$RM $output; exit $EXIT_FAILURE" 1 2 15 func_emit_wrapper no > $output chmod +x $output ;; esac } exit $EXIT_SUCCESS ;; esac # See if we need to build an old-fashioned archive. for oldlib in $oldlibs; do if test "$build_libtool_libs" = convenience; then oldobjs="$libobjs_save $symfileobj" addlibs="$convenience" build_libtool_libs=no else if test "$build_libtool_libs" = module; then oldobjs="$libobjs_save" build_libtool_libs=no else oldobjs="$old_deplibs $non_pic_objects" if test "$preload" = yes && test -f "$symfileobj"; then func_append oldobjs " $symfileobj" fi fi addlibs="$old_convenience" fi if test -n "$addlibs"; then gentop="$output_objdir/${outputname}x" func_append generated " $gentop" func_extract_archives $gentop $addlibs func_append oldobjs " $func_extract_archives_result" fi # Do each command in the archive commands. if test -n "$old_archive_from_new_cmds" && test "$build_libtool_libs" = yes; then cmds=$old_archive_from_new_cmds else # Add any objects from preloaded convenience libraries if test -n "$dlprefiles"; then gentop="$output_objdir/${outputname}x" func_append generated " $gentop" func_extract_archives $gentop $dlprefiles func_append oldobjs " $func_extract_archives_result" fi # POSIX demands no paths to be encoded in archives. 
We have # to avoid creating archives with duplicate basenames if we # might have to extract them afterwards, e.g., when creating a # static archive out of a convenience library, or when linking # the entirety of a libtool archive into another (currently # not supported by libtool). if (for obj in $oldobjs do func_basename "$obj" $ECHO "$func_basename_result" done | sort | sort -uc >/dev/null 2>&1); then : else echo "copying selected object files to avoid basename conflicts..." gentop="$output_objdir/${outputname}x" func_append generated " $gentop" func_mkdir_p "$gentop" save_oldobjs=$oldobjs oldobjs= counter=1 for obj in $save_oldobjs do func_basename "$obj" objbase="$func_basename_result" case " $oldobjs " in " ") oldobjs=$obj ;; *[\ /]"$objbase "*) while :; do # Make sure we don't pick an alternate name that also # overlaps. newobj=lt$counter-$objbase func_arith $counter + 1 counter=$func_arith_result case " $oldobjs " in *[\ /]"$newobj "*) ;; *) if test ! -f "$gentop/$newobj"; then break; fi ;; esac done func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj" func_append oldobjs " $gentop/$newobj" ;; *) func_append oldobjs " $obj" ;; esac done fi func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 tool_oldlib=$func_to_tool_file_result eval cmds=\"$old_archive_cmds\" func_len " $cmds" len=$func_len_result if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then cmds=$old_archive_cmds elif test -n "$archiver_list_spec"; then func_verbose "using command file archive linking..." for obj in $oldobjs do func_to_tool_file "$obj" $ECHO "$func_to_tool_file_result" done > $output_objdir/$libname.libcmd func_to_tool_file "$output_objdir/$libname.libcmd" oldobjs=" $archiver_list_spec$func_to_tool_file_result" cmds=$old_archive_cmds else # the command line is too long to link in one step, link in parts func_verbose "using piecewise archive linking..." save_RANLIB=$RANLIB RANLIB=: objlist= concat_cmds= save_oldobjs=$oldobjs oldobjs= # Is there a better way of finding the last object in the list? for obj in $save_oldobjs do last_oldobj=$obj done eval test_cmds=\"$old_archive_cmds\" func_len " $test_cmds" len0=$func_len_result len=$len0 for obj in $save_oldobjs do func_len " $obj" func_arith $len + $func_len_result len=$func_arith_result func_append objlist " $obj" if test "$len" -lt "$max_cmd_len"; then : else # the above command should be used before it gets too long oldobjs=$objlist if test "$obj" = "$last_oldobj" ; then RANLIB=$save_RANLIB fi test -z "$concat_cmds" || concat_cmds=$concat_cmds~ eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\" objlist= len=$len0 fi done RANLIB=$save_RANLIB oldobjs=$objlist if test "X$oldobjs" = "X" ; then eval cmds=\"\$concat_cmds\" else eval cmds=\"\$concat_cmds~\$old_archive_cmds\" fi fi fi func_execute_cmds "$cmds" 'exit $?' done test -n "$generated" && \ func_show_eval "${RM}r$generated" # Now create the libtool archive. 
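# Illustrative sketch (not part of the upstream libtool script): the basename
# check above guards against `ar' collapsing archive members that share a
# name.  For an object list such as (names made up)
#
#   src/util.o  compat/util.o  src/main.o
#
# the `sort | sort -uc' uniqueness test fails on the duplicated util.o, so the
# second one is linked or copied into $gentop as lt1-util.o before the archive
# commands run, and the static archive ends up containing util.o, lt1-util.o
# and main.o instead of silently losing one of the two.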
case $output in *.la) old_library= test "$build_old_libs" = yes && old_library="$libname.$libext" func_verbose "creating $output" # Preserve any variables that may affect compiler behavior for var in $variables_saved_for_relink; do if eval test -z \"\${$var+set}\"; then relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" elif eval var_value=\$$var; test -z "$var_value"; then relink_command="$var=; export $var; $relink_command" else func_quote_for_eval "$var_value" relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" fi done # Quote the link command for shipping. relink_command="(cd `pwd`; $SHELL $progpath $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)" relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` if test "$hardcode_automatic" = yes ; then relink_command= fi # Only create the output if not a dry run. $opt_dry_run || { for installed in no yes; do if test "$installed" = yes; then if test -z "$install_libdir"; then break fi output="$output_objdir/$outputname"i # Replace all uninstalled libtool libraries with the installed ones newdependency_libs= for deplib in $dependency_libs; do case $deplib in *.la) func_basename "$deplib" name="$func_basename_result" func_resolve_sysroot "$deplib" eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result` test -z "$libdir" && \ func_fatal_error "\`$deplib' is not a valid libtool archive" func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name" ;; -L*) func_stripname -L '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -L$func_replace_sysroot_result" ;; -R*) func_stripname -R '' "$deplib" func_replace_sysroot "$func_stripname_result" func_append newdependency_libs " -R$func_replace_sysroot_result" ;; *) func_append newdependency_libs " $deplib" ;; esac done dependency_libs="$newdependency_libs" newdlfiles= for lib in $dlfiles; do case $lib in *.la) func_basename "$lib" name="$func_basename_result" eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "\`$lib' is not a valid libtool archive" func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name" ;; *) func_append newdlfiles " $lib" ;; esac done dlfiles="$newdlfiles" newdlprefiles= for lib in $dlprefiles; do case $lib in *.la) # Only pass preopened files to the pseudo-archive (for # eventual linking with the app. 
that links it) if we # didn't already link the preopened objects directly into # the library: func_basename "$lib" name="$func_basename_result" eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib` test -z "$libdir" && \ func_fatal_error "\`$lib' is not a valid libtool archive" func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name" ;; esac done dlprefiles="$newdlprefiles" else newdlfiles= for lib in $dlfiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlfiles " $abs" done dlfiles="$newdlfiles" newdlprefiles= for lib in $dlprefiles; do case $lib in [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;; *) abs=`pwd`"/$lib" ;; esac func_append newdlprefiles " $abs" done dlprefiles="$newdlprefiles" fi $RM $output # place dlname in correct position for cygwin # In fact, it would be nice if we could use this code for all target # systems that can't hard-code library paths into their executables # and that have no shared library path variable independent of PATH, # but it turns out we can't easily determine that from inspecting # libtool variables, so we have to hard-code the OSs to which it # applies here; at the moment, that means platforms that use the PE # object format with DLL files. See the long comment at the top of # tests/bindir.at for full details. tdlname=$dlname case $host,$output,$installed,$module,$dlname in *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll) # If a -bindir argument was supplied, place the dll there. if test "x$bindir" != x ; then func_relative_path "$install_libdir" "$bindir" tdlname=$func_relative_path_result$dlname else # Otherwise fall back on heuristic. tdlname=../bin/$dlname fi ;; esac $ECHO > $output "\ # $outputname - a libtool library file # Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION # # Please DO NOT delete this file! # It is necessary for linking the library. # The name that we can dlopen(3). dlname='$tdlname' # Names of this library. library_names='$library_names' # The name of the static archive. old_library='$old_library' # Linker flags that can not go in dependency_libs. inherited_linker_flags='$new_inherited_linker_flags' # Libraries that this one depends upon. dependency_libs='$dependency_libs' # Names of additional weak libraries provided by this library weak_library_names='$weak_libs' # Version information for $libname. current=$current age=$age revision=$revision # Is this an already installed library? installed=$installed # Should we warn about portability when linking against -modules? shouldnotlink=$module # Files to dlopen/dlpreopen dlopen='$dlfiles' dlpreopen='$dlprefiles' # Directory that this library needs to be installed in: libdir='$install_libdir'" if test "$installed" = no && test "$need_relink" = yes; then $ECHO >> $output "\ relink_command=\"$relink_command\"" fi done } # Do a symbolic link so that the libtool archive can be found in # LD_LIBRARY_PATH before the program is installed. func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?' ;; esac exit $EXIT_SUCCESS } { test "$opt_mode" = link || test "$opt_mode" = relink; } && func_mode_link ${1+"$@"} # func_mode_uninstall arg... func_mode_uninstall () { $opt_debug RM="$nonopt" files= rmforce= exit_status=0 # This variable tells wrapper scripts just to set variables rather # than running their programs. 
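# For reference only, not part of ltmain.sh: the here-doc in the chunk above
# is what writes the .la control file.  A generated file typically looks like
# the following; the field names come from that template, and every value
# below is made up:
# libfoo.la - a libtool library file
dlname='libfoo.so.1'
library_names='libfoo.so.1.0.0 libfoo.so.1 libfoo.so'
old_library='libfoo.a'
inherited_linker_flags=''
dependency_libs=' -L/usr/local/lib -lm'
weak_library_names=''
current=1
age=0
revision=0
installed=yes
shouldnotlink=no
dlopen=''
dlpreopen=''
libdir='/usr/local/lib'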
libtool_install_magic="$magic" for arg do case $arg in -f) func_append RM " $arg"; rmforce=yes ;; -*) func_append RM " $arg" ;; *) func_append files " $arg" ;; esac done test -z "$RM" && \ func_fatal_help "you must specify an RM program" rmdirs= for file in $files; do func_dirname "$file" "" "." dir="$func_dirname_result" if test "X$dir" = X.; then odir="$objdir" else odir="$dir/$objdir" fi func_basename "$file" name="$func_basename_result" test "$opt_mode" = uninstall && odir="$dir" # Remember odir for removal later, being careful to avoid duplicates if test "$opt_mode" = clean; then case " $rmdirs " in *" $odir "*) ;; *) func_append rmdirs " $odir" ;; esac fi # Don't error if the file doesn't exist and rm -f was used. if { test -L "$file"; } >/dev/null 2>&1 || { test -h "$file"; } >/dev/null 2>&1 || test -f "$file"; then : elif test -d "$file"; then exit_status=1 continue elif test "$rmforce" = yes; then continue fi rmfiles="$file" case $name in *.la) # Possibly a libtool archive, so verify it. if func_lalib_p "$file"; then func_source $dir/$name # Delete the libtool libraries and symlinks. for n in $library_names; do func_append rmfiles " $odir/$n" done test -n "$old_library" && func_append rmfiles " $odir/$old_library" case "$opt_mode" in clean) case " $library_names " in *" $dlname "*) ;; *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;; esac test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i" ;; uninstall) if test -n "$library_names"; then # Do each command in the postuninstall commands. func_execute_cmds "$postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' fi if test -n "$old_library"; then # Do each command in the old_postuninstall commands. func_execute_cmds "$old_postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' fi # FIXME: should reinstall the best remaining shared library. ;; esac fi ;; *.lo) # Possibly a libtool object, so verify it. if func_lalib_p "$file"; then # Read the .lo file func_source $dir/$name # Add PIC object to the list of files to remove. if test -n "$pic_object" && test "$pic_object" != none; then func_append rmfiles " $dir/$pic_object" fi # Add non-PIC object to the list of files to remove. if test -n "$non_pic_object" && test "$non_pic_object" != none; then func_append rmfiles " $dir/$non_pic_object" fi fi ;; *) if test "$opt_mode" = clean ; then noexename=$name case $file in *.exe) func_stripname '' '.exe' "$file" file=$func_stripname_result func_stripname '' '.exe' "$name" noexename=$func_stripname_result # $file with .exe has already been added to rmfiles, # add $file without .exe func_append rmfiles " $file" ;; esac # Do a test to see if this is a libtool program. 
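# For illustration only, not part of ltmain.sh: func_mode_uninstall above
# serves both the "clean" and "uninstall" modes, taking the RM program from
# the first non-option argument (RM="$nonopt").  Hedged examples of the
# invocations a generated Makefile typically issues; the library names and
# install path are made up:
./libtool --mode=clean rm -f libfoo.lo libfoo.la
./libtool --mode=uninstall rm -f /usr/local/lib/libfoo.la
# Clean mode also removes the per-object droppings under $objdir; uninstall
# mode additionally runs any configured post-uninstall commands.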
if func_ltwrapper_p "$file"; then if func_ltwrapper_executable_p "$file"; then func_ltwrapper_scriptname "$file" relink_command= func_source $func_ltwrapper_scriptname_result func_append rmfiles " $func_ltwrapper_scriptname_result" else relink_command= func_source $dir/$noexename fi # note $name still contains .exe if it was in $file originally # as does the version of $file that was added into $rmfiles func_append rmfiles " $odir/$name $odir/${name}S.${objext}" if test "$fast_install" = yes && test -n "$relink_command"; then func_append rmfiles " $odir/lt-$name" fi if test "X$noexename" != "X$name" ; then func_append rmfiles " $odir/lt-${noexename}.c" fi fi fi ;; esac func_show_eval "$RM $rmfiles" 'exit_status=1' done # Try to remove the ${objdir}s in the directories where we deleted files for dir in $rmdirs; do if test -d "$dir"; then func_show_eval "rmdir $dir >/dev/null 2>&1" fi done exit $exit_status } { test "$opt_mode" = uninstall || test "$opt_mode" = clean; } && func_mode_uninstall ${1+"$@"} test -z "$opt_mode" && { help="$generic_help" func_fatal_help "you must specify a MODE" } test -z "$exec_cmd" && \ func_fatal_help "invalid operation mode \`$opt_mode'" if test -n "$exec_cmd"; then eval exec "$exec_cmd" exit $EXIT_FAILURE fi exit $exit_status # The TAGs below are defined such that we never get into a situation # in which we disable both kinds of libraries. Given conflicting # choices, we go for a static library, that is the most portable, # since we can't tell whether shared libraries were disabled because # the user asked for that or because the platform doesn't support # them. This is particularly important on AIX, because we don't # support having both static and shared libraries enabled at the same # time on that platform, so we default to a shared-only configuration. # If a disable-shared tag is given, we'll fallback to a static-only # configuration. But we'll never go from static-only to shared-only. # ### BEGIN LIBTOOL TAG CONFIG: disable-shared build_libtool_libs=no build_old_libs=yes # ### END LIBTOOL TAG CONFIG: disable-shared # ### BEGIN LIBTOOL TAG CONFIG: disable-static build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac` # ### END LIBTOOL TAG CONFIG: disable-static # Local Variables: # mode:shell-script # sh-indentation:2 # End: # vi:sw=2 ffc-2019.1.0.post0/libs/gtest-1.7.0/build-aux/missing000077500000000000000000000241521346026536000215700ustar00rootroot00000000000000#! /bin/sh # Common stub for a few missing GNU programs while installing. scriptversion=2012-01-06.13; # UTC # Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003, 2004, 2005, 2006, # 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc. # Originally by Fran,cois Pinard , 1996. # This program is free software; you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation; either version 2, or (at your option) # any later version. # This program is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # You should have received a copy of the GNU General Public License # along with this program. If not, see . 
# As a special exception to the GNU General Public License, if you # distribute this file as part of a program that contains a # configuration script generated by Autoconf, you may include it under # the same distribution terms that you use for the rest of that program. if test $# -eq 0; then echo 1>&2 "Try \`$0 --help' for more information" exit 1 fi run=: sed_output='s/.* --output[ =]\([^ ]*\).*/\1/p' sed_minuso='s/.* -o \([^ ]*\).*/\1/p' # In the cases where this matters, `missing' is being run in the # srcdir already. if test -f configure.ac; then configure_ac=configure.ac else configure_ac=configure.in fi msg="missing on your system" case $1 in --run) # Try to run requested program, and just exit if it succeeds. run= shift "$@" && exit 0 # Exit code 63 means version mismatch. This often happens # when the user try to use an ancient version of a tool on # a file that requires a minimum version. In this case we # we should proceed has if the program had been absent, or # if --run hadn't been passed. if test $? = 63; then run=: msg="probably too old" fi ;; -h|--h|--he|--hel|--help) echo "\ $0 [OPTION]... PROGRAM [ARGUMENT]... Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an error status if there is no known handling for PROGRAM. Options: -h, --help display this help and exit -v, --version output version information and exit --run try to run the given command, and emulate it if it fails Supported PROGRAM values: aclocal touch file \`aclocal.m4' autoconf touch file \`configure' autoheader touch file \`config.h.in' autom4te touch the output file, or create a stub one automake touch all \`Makefile.in' files bison create \`y.tab.[ch]', if possible, from existing .[ch] flex create \`lex.yy.c', if possible, from existing .c help2man touch the output file lex create \`lex.yy.c', if possible, from existing .c makeinfo touch the output file yacc create \`y.tab.[ch]', if possible, from existing .[ch] Version suffixes to PROGRAM as well as the prefixes \`gnu-', \`gnu', and \`g' are ignored when checking the name. Send bug reports to ." exit $? ;; -v|--v|--ve|--ver|--vers|--versi|--versio|--version) echo "missing $scriptversion (GNU Automake)" exit $? ;; -*) echo 1>&2 "$0: Unknown \`$1' option" echo 1>&2 "Try \`$0 --help' for more information" exit 1 ;; esac # normalize program name to check for. program=`echo "$1" | sed ' s/^gnu-//; t s/^gnu//; t s/^g//; t'` # Now exit if we have it, but it failed. Also exit now if we # don't have it and --version was passed (most likely to detect # the program). This is about non-GNU programs, so use $1 not # $program. case $1 in lex*|yacc*) # Not GNU programs, they don't have --version. ;; *) if test -z "$run" && ($1 --version) > /dev/null 2>&1; then # We have it, but it failed. exit 1 elif test "x$2" = "x--version" || test "x$2" = "x--help"; then # Could not run --version or --help. This is probably someone # running `$TOOL --version' or `$TOOL --help' to check whether # $TOOL exists and not knowing $TOOL uses missing. exit 1 fi ;; esac # If it does not exist, or fails to run (possibly an outdated version), # try to emulate it. case $program in aclocal*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." touch aclocal.m4 ;; autoconf*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`${configure_ac}'. 
You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." touch configure ;; autoheader*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`acconfig.h' or \`${configure_ac}'. You might want to install the \`Autoconf' and \`GNU m4' packages. Grab them from any GNU archive site." files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}` test -z "$files" && files="config.h" touch_files= for f in $files; do case $f in *:*) touch_files="$touch_files "`echo "$f" | sed -e 's/^[^:]*://' -e 's/:.*//'`;; *) touch_files="$touch_files $f.in";; esac done touch $touch_files ;; automake*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'. You might want to install the \`Automake' and \`Perl' packages. Grab them from any GNU archive site." find . -type f -name Makefile.am -print | sed 's/\.am$/.in/' | while read f; do touch "$f"; done ;; autom4te*) echo 1>&2 "\ WARNING: \`$1' is needed, but is $msg. You might have modified some files without having the proper tools for further handling them. You can get \`$1' as part of \`Autoconf' from any GNU archive site." file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo "#! /bin/sh" echo "# Created by GNU Automake missing as a replacement of" echo "# $ $@" echo "exit 0" chmod +x $file exit 1 fi ;; bison*|yacc*) echo 1>&2 "\ WARNING: \`$1' $msg. You should only need it if you modified a \`.y' file. You may need the \`Bison' package in order for those modifications to take effect. You can get \`Bison' from any GNU archive site." rm -f y.tab.c y.tab.h if test $# -ne 1; then eval LASTARG=\${$#} case $LASTARG in *.y) SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'` if test -f "$SRCFILE"; then cp "$SRCFILE" y.tab.c fi SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'` if test -f "$SRCFILE"; then cp "$SRCFILE" y.tab.h fi ;; esac fi if test ! -f y.tab.h; then echo >y.tab.h fi if test ! -f y.tab.c; then echo 'main() { return 0; }' >y.tab.c fi ;; lex*|flex*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.l' file. You may need the \`Flex' package in order for those modifications to take effect. You can get \`Flex' from any GNU archive site." rm -f lex.yy.c if test $# -ne 1; then eval LASTARG=\${$#} case $LASTARG in *.l) SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'` if test -f "$SRCFILE"; then cp "$SRCFILE" lex.yy.c fi ;; esac fi if test ! -f lex.yy.c; then echo 'main() { return 0; }' >lex.yy.c fi ;; help2man*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a dependency of a manual page. You may need the \`Help2man' package in order for those modifications to take effect. You can get \`Help2man' from any GNU archive site." file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -f "$file"; then touch $file else test -z "$file" || exec >$file echo ".ab help2man is required to generate this page" exit $? fi ;; makeinfo*) echo 1>&2 "\ WARNING: \`$1' is $msg. You should only need it if you modified a \`.texi' or \`.texinfo' file, or any other file indirectly affecting the aspect of the manual. The spurious call might also be the consequence of using a buggy \`make' (AIX, DU, IRIX). You might want to install the \`Texinfo' package or the \`GNU make' package. 
Grab either from any GNU archive site." # The file to touch is that specified with -o ... file=`echo "$*" | sed -n "$sed_output"` test -z "$file" && file=`echo "$*" | sed -n "$sed_minuso"` if test -z "$file"; then # ... or it is the one specified with @setfilename ... infile=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'` file=`sed -n ' /^@setfilename/{ s/.* \([^ ]*\) *$/\1/ p q }' $infile` # ... or it is derived from the source name (dir/f.texi becomes f.info) test -z "$file" && file=`echo "$infile" | sed 's,.*/,,;s,.[^.]*$,,'`.info fi # If the file does not exist, the user really needs makeinfo; # let's fail without touching anything. test -f $file || exit 1 touch $file ;; *) echo 1>&2 "\ WARNING: \`$1' is needed, and is $msg. You might have modified some files without having the proper tools for further handling them. Check the \`README' file, it often tells you about the needed prerequisites for installing this package. You may also peek at any GNU archive site, in case some other package would contain this missing \`$1' program." exit 1 ;; esac exit 0 # Local variables: # eval: (add-hook 'write-file-hooks 'time-stamp) # time-stamp-start: "scriptversion=" # time-stamp-format: "%:y-%02m-%02d.%02H" # time-stamp-time-zone: "UTC" # time-stamp-end: "; # UTC" # End: ffc-2019.1.0.post0/libs/gtest-1.7.0/cmake/000077500000000000000000000000001346026536000173535ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/cmake/internal_utils.cmake000066400000000000000000000225141346026536000234150ustar00rootroot00000000000000# Defines functions and macros useful for building Google Test and # Google Mock. # # Note: # # - This file will be run twice when building Google Mock (once via # Google Test's CMakeLists.txt, and once via Google Mock's). # Therefore it shouldn't have any side effects other than defining # the functions and macros. # # - The functions/macros defined in this file may depend on Google # Test and Google Mock's option() definitions, and thus must be # called *after* the options have been defined. # Tweaks CMake's default compiler/linker settings to suit Google Test's needs. # # This must be a macro(), as inside a function string() can only # update variables in the function scope. macro(fix_default_compiler_settings_) if (MSVC) # For MSVC, CMake sets certain flags to defaults we want to override. # This replacement code is taken from sample in the CMake Wiki at # http://www.cmake.org/Wiki/CMake_FAQ#Dynamic_Replace. foreach (flag_var CMAKE_CXX_FLAGS CMAKE_CXX_FLAGS_DEBUG CMAKE_CXX_FLAGS_RELEASE CMAKE_CXX_FLAGS_MINSIZEREL CMAKE_CXX_FLAGS_RELWITHDEBINFO) if (NOT BUILD_SHARED_LIBS AND NOT gtest_force_shared_crt) # When Google Test is built as a shared library, it should also use # shared runtime libraries. Otherwise, it may end up with multiple # copies of runtime library data in different modules, resulting in # hard-to-find crashes. When it is built as a static library, it is # preferable to use CRT as static libraries, as we don't have to rely # on CRT DLLs being available. CMake always defaults to using shared # CRT libraries, so we override that default here. string(REPLACE "/MD" "-MT" ${flag_var} "${${flag_var}}") endif() # We prefer more strict warning checking for building Google Test. # Replaces /W3 with /W4 in defaults. string(REPLACE "/W3" "-W4" ${flag_var} "${${flag_var}}") endforeach() endif() endmacro() # Defines the compiler/linker flags used to build Google Test and # Google Mock. You can tweak these definitions to suit your need. 
A # variable's value is empty before it's explicitly assigned to. macro(config_compiler_and_linker) if (NOT gtest_disable_pthreads) # Defines CMAKE_USE_PTHREADS_INIT and CMAKE_THREAD_LIBS_INIT. find_package(Threads) endif() fix_default_compiler_settings_() if (MSVC) # Newlines inside flags variables break CMake's NMake generator. # TODO(vladl@google.com): Add -RTCs and -RTCu to debug builds. set(cxx_base_flags "-GS -W4 -WX -wd4127 -wd4251 -wd4275 -nologo -J -Zi") if (MSVC_VERSION LESS 1400) # Suppress spurious warnings MSVC 7.1 sometimes issues. # Forcing value to bool. set(cxx_base_flags "${cxx_base_flags} -wd4800") # Copy constructor and assignment operator could not be generated. set(cxx_base_flags "${cxx_base_flags} -wd4511 -wd4512") # Compatibility warnings not applicable to Google Test. # Resolved overload was found by argument-dependent lookup. set(cxx_base_flags "${cxx_base_flags} -wd4675") endif() set(cxx_base_flags "${cxx_base_flags} -D_UNICODE -DUNICODE -DWIN32 -D_WIN32") set(cxx_base_flags "${cxx_base_flags} -DSTRICT -DWIN32_LEAN_AND_MEAN") set(cxx_exception_flags "-EHsc -D_HAS_EXCEPTIONS=1") set(cxx_no_exception_flags "-D_HAS_EXCEPTIONS=0") set(cxx_no_rtti_flags "-GR-") elseif (CMAKE_COMPILER_IS_GNUCXX) set(cxx_base_flags "-Wall -Wshadow") set(cxx_exception_flags "-fexceptions") set(cxx_no_exception_flags "-fno-exceptions") # Until version 4.3.2, GCC doesn't define a macro to indicate # whether RTTI is enabled. Therefore we define GTEST_HAS_RTTI # explicitly. set(cxx_no_rtti_flags "-fno-rtti -DGTEST_HAS_RTTI=0") set(cxx_strict_flags "-Wextra -Wno-unused-parameter -Wno-missing-field-initializers") elseif (CMAKE_CXX_COMPILER_ID STREQUAL "SunPro") set(cxx_exception_flags "-features=except") # Sun Pro doesn't provide macros to indicate whether exceptions and # RTTI are enabled, so we define GTEST_HAS_* explicitly. set(cxx_no_exception_flags "-features=no%except -DGTEST_HAS_EXCEPTIONS=0") set(cxx_no_rtti_flags "-features=no%rtti -DGTEST_HAS_RTTI=0") elseif (CMAKE_CXX_COMPILER_ID STREQUAL "VisualAge" OR CMAKE_CXX_COMPILER_ID STREQUAL "XL") # CMake 2.8 changes Visual Age's compiler ID to "XL". set(cxx_exception_flags "-qeh") set(cxx_no_exception_flags "-qnoeh") # Until version 9.0, Visual Age doesn't define a macro to indicate # whether RTTI is enabled. Therefore we define GTEST_HAS_RTTI # explicitly. set(cxx_no_rtti_flags "-qnortti -DGTEST_HAS_RTTI=0") elseif (CMAKE_CXX_COMPILER_ID STREQUAL "HP") set(cxx_base_flags "-AA -mt") set(cxx_exception_flags "-DGTEST_HAS_EXCEPTIONS=1") set(cxx_no_exception_flags "+noeh -DGTEST_HAS_EXCEPTIONS=0") # RTTI can not be disabled in HP aCC compiler. set(cxx_no_rtti_flags "") endif() if (CMAKE_USE_PTHREADS_INIT) # The pthreads library is available and allowed. set(cxx_base_flags "${cxx_base_flags} -DGTEST_HAS_PTHREAD=1") else() set(cxx_base_flags "${cxx_base_flags} -DGTEST_HAS_PTHREAD=0") endif() # For building gtest's own tests and samples. set(cxx_exception "${CMAKE_CXX_FLAGS} ${cxx_base_flags} ${cxx_exception_flags}") set(cxx_no_exception "${CMAKE_CXX_FLAGS} ${cxx_base_flags} ${cxx_no_exception_flags}") set(cxx_default "${cxx_exception}") set(cxx_no_rtti "${cxx_default} ${cxx_no_rtti_flags}") set(cxx_use_own_tuple "${cxx_default} -DGTEST_USE_OWN_TR1_TUPLE=1") # For building the gtest libraries. set(cxx_strict "${cxx_default} ${cxx_strict_flags}") endmacro() # Defines the gtest & gtest_main libraries. User tests should link # with one of them. 
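# For illustration only, not part of internal_utils.cmake: the
# config_compiler_and_linker macro above keys its flag choices off a few
# cache options (gtest_disable_pthreads, gtest_force_shared_crt,
# BUILD_SHARED_LIBS).  A hedged sketch of setting them when configuring the
# bundled gtest out of tree; the build directory and source path are only
# examples:
mkdir build && cd build
cmake -DBUILD_SHARED_LIBS=ON -Dgtest_disable_pthreads=ON ../libs/gtest-1.7.0
make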
function(cxx_library_with_type name type cxx_flags) # type can be either STATIC or SHARED to denote a static or shared library. # ARGN refers to additional arguments after 'cxx_flags'. add_library(${name} ${type} ${ARGN}) set_target_properties(${name} PROPERTIES COMPILE_FLAGS "${cxx_flags}") if (BUILD_SHARED_LIBS OR type STREQUAL "SHARED") set_target_properties(${name} PROPERTIES COMPILE_DEFINITIONS "GTEST_CREATE_SHARED_LIBRARY=1") endif() if (CMAKE_USE_PTHREADS_INIT) target_link_libraries(${name} ${CMAKE_THREAD_LIBS_INIT}) endif() endfunction() ######################################################################## # # Helper functions for creating build targets. function(cxx_shared_library name cxx_flags) cxx_library_with_type(${name} SHARED "${cxx_flags}" ${ARGN}) endfunction() function(cxx_library name cxx_flags) cxx_library_with_type(${name} "" "${cxx_flags}" ${ARGN}) endfunction() # cxx_executable_with_flags(name cxx_flags libs srcs...) # # creates a named C++ executable that depends on the given libraries and # is built from the given source files with the given compiler flags. function(cxx_executable_with_flags name cxx_flags libs) add_executable(${name} ${ARGN}) if (cxx_flags) set_target_properties(${name} PROPERTIES COMPILE_FLAGS "${cxx_flags}") endif() if (BUILD_SHARED_LIBS) set_target_properties(${name} PROPERTIES COMPILE_DEFINITIONS "GTEST_LINKED_AS_SHARED_LIBRARY=1") endif() # To support mixing linking in static and dynamic libraries, link each # library in with an extra call to target_link_libraries. foreach (lib "${libs}") target_link_libraries(${name} ${lib}) endforeach() endfunction() # cxx_executable(name dir lib srcs...) # # creates a named target that depends on the given libs and is built # from the given source files. dir/name.cc is implicitly included in # the source file list. function(cxx_executable name dir libs) cxx_executable_with_flags( ${name} "${cxx_default}" "${libs}" "${dir}/${name}.cc" ${ARGN}) endfunction() # Sets PYTHONINTERP_FOUND and PYTHON_EXECUTABLE. find_package(PythonInterp) # cxx_test_with_flags(name cxx_flags libs srcs...) # # creates a named C++ test that depends on the given libs and is built # from the given source files with the given compiler flags. function(cxx_test_with_flags name cxx_flags libs) cxx_executable_with_flags(${name} "${cxx_flags}" "${libs}" ${ARGN}) add_test(${name} ${name}) endfunction() # cxx_test(name libs srcs...) # # creates a named test target that depends on the given libs and is # built from the given source files. Unlike cxx_test_with_flags, # test/name.cc is already implicitly included in the source file list. function(cxx_test name libs) cxx_test_with_flags("${name}" "${cxx_default}" "${libs}" "test/${name}.cc" ${ARGN}) endfunction() # py_test(name) # # creates a Python test with the given name whose main module is in # test/name.py. It does nothing if Python is not installed. function(py_test name) # We are not supporting Python tests on Linux yet as they consider # all Linux environments to be google3 and try to use google3 features. if (PYTHONINTERP_FOUND) # ${CMAKE_BINARY_DIR} is known at configuration time, so we can # directly bind it from cmake. ${CTEST_CONFIGURATION_TYPE} is known # only at ctest runtime (by calling ctest -c ), so # we have to escape $ to delay variable substitution here. 
add_test(${name} ${PYTHON_EXECUTABLE} ${CMAKE_CURRENT_SOURCE_DIR}/test/${name}.py --build_dir=${CMAKE_CURRENT_BINARY_DIR}/\${CTEST_CONFIGURATION_TYPE}) endif() endfunction() ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/000077500000000000000000000000001346026536000200445ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest.cbproj000066400000000000000000000243661346026536000224060ustar00rootroot00000000000000 {bca37a72-5b07-46cf-b44e-89f8e06451a2} Release true true true Base true true Base true lib JPHNE NO_STRICT true true CppStaticLibrary true rtl.bpi;vcl.bpi;bcbie.bpi;vclx.bpi;vclactnband.bpi;xmlrtl.bpi;bcbsmp.bpi;dbrtl.bpi;vcldb.bpi;bdertl.bpi;vcldbx.bpi;dsnap.bpi;dsnapcon.bpi;vclib.bpi;ibxpress.bpi;adortl.bpi;dbxcds.bpi;dbexpress.bpi;DbxCommonDriver.bpi;websnap.bpi;vclie.bpi;webdsnap.bpi;inet.bpi;inetdbbde.bpi;inetdbxpress.bpi;soaprtl.bpi;Rave75VCL.bpi;teeUI.bpi;tee.bpi;teedb.bpi;IndyCore.bpi;IndySystem.bpi;IndyProtocols.bpi;IntrawebDB_90_100.bpi;Intraweb_90_100.bpi;dclZipForged11.bpi;vclZipForged11.bpi;GR32_BDS2006.bpi;GR32_DSGN_BDS2006.bpi;Jcl.bpi;JclVcl.bpi;JvCoreD11R.bpi;JvSystemD11R.bpi;JvStdCtrlsD11R.bpi;JvAppFrmD11R.bpi;JvBandsD11R.bpi;JvDBD11R.bpi;JvDlgsD11R.bpi;JvBDED11R.bpi;JvCmpD11R.bpi;JvCryptD11R.bpi;JvCtrlsD11R.bpi;JvCustomD11R.bpi;JvDockingD11R.bpi;JvDotNetCtrlsD11R.bpi;JvEDID11R.bpi;JvGlobusD11R.bpi;JvHMID11R.bpi;JvInterpreterD11R.bpi;JvJansD11R.bpi;JvManagedThreadsD11R.bpi;JvMMD11R.bpi;JvNetD11R.bpi;JvPageCompsD11R.bpi;JvPluginD11R.bpi;JvPrintPreviewD11R.bpi;JvRuntimeDesignD11R.bpi;JvTimeFrameworkD11R.bpi;JvValidatorsD11R.bpi;JvWizardD11R.bpi;JvXPCtrlsD11R.bpi;VclSmp.bpi;CExceptionExpert11.bpi false $(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;.. rtl.lib;vcl.lib 32 $(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk false false true _DEBUG;$(Defines) true false true None DEBUG true Debug true true true $(BDS)\lib\debug;$(ILINK_LibraryPath) Full true NDEBUG;$(Defines) Release $(BDS)\lib\release;$(ILINK_LibraryPath) None CPlusPlusBuilder.Personality CppStaticLibrary FalseFalse1000FalseFalseFalseFalseFalse103312521.0.0.01.0.0.0FalseFalseFalseTrueFalse CodeGear C++Builder Office 2000 Servers Package CodeGear C++Builder Office XP Servers Package FalseTrueTrue3$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;..$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;..$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\src;..\include1$(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk1NO_STRICT13216 3 4 5 6 7 8 0 1 2 9 10 11 12 14 13 15 16 17 18 Cfg_1 Cfg_2 ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest.groupproj000066400000000000000000000037301346026536000231460ustar00rootroot00000000000000 {c1d923e0-6cba-4332-9b6f-3420acbf5091} Default.Personality ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest_all.cc000066400000000000000000000035111346026536000223310ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. 
// * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: Josh Kelley (joshkel@gmail.com) // // Google C++ Testing Framework (Google Test) // // C++Builder's IDE cannot build a static library from files with hyphens // in their name. See http://qc.codegear.com/wc/qcmain.aspx?d=70977 . // This file serves as a workaround. #include "src/gtest-all.cc" ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest_link.cc000066400000000000000000000037111346026536000225200ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: Josh Kelley (joshkel@gmail.com) // // Google C++ Testing Framework (Google Test) // // Links gtest.lib and gtest_main.lib into the current project in C++Builder. // This means that these libraries can't be renamed, but it's the only way to // ensure that Debug versus Release test builds are linked against the // appropriate Debug or Release build of the libraries. 
#pragma link "gtest.lib" #pragma link "gtest_main.lib" ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest_main.cbproj000066400000000000000000000206041346026536000234010ustar00rootroot00000000000000 {bca37a72-5b07-46cf-b44e-89f8e06451a2} Release true true true Base true true Base true lib JPHNE NO_STRICT true true CppStaticLibrary true rtl.bpi;vcl.bpi;bcbie.bpi;vclx.bpi;vclactnband.bpi;xmlrtl.bpi;bcbsmp.bpi;dbrtl.bpi;vcldb.bpi;bdertl.bpi;vcldbx.bpi;dsnap.bpi;dsnapcon.bpi;vclib.bpi;ibxpress.bpi;adortl.bpi;dbxcds.bpi;dbexpress.bpi;DbxCommonDriver.bpi;websnap.bpi;vclie.bpi;webdsnap.bpi;inet.bpi;inetdbbde.bpi;inetdbxpress.bpi;soaprtl.bpi;Rave75VCL.bpi;teeUI.bpi;tee.bpi;teedb.bpi;IndyCore.bpi;IndySystem.bpi;IndyProtocols.bpi;IntrawebDB_90_100.bpi;Intraweb_90_100.bpi;dclZipForged11.bpi;vclZipForged11.bpi;GR32_BDS2006.bpi;GR32_DSGN_BDS2006.bpi;Jcl.bpi;JclVcl.bpi;JvCoreD11R.bpi;JvSystemD11R.bpi;JvStdCtrlsD11R.bpi;JvAppFrmD11R.bpi;JvBandsD11R.bpi;JvDBD11R.bpi;JvDlgsD11R.bpi;JvBDED11R.bpi;JvCmpD11R.bpi;JvCryptD11R.bpi;JvCtrlsD11R.bpi;JvCustomD11R.bpi;JvDockingD11R.bpi;JvDotNetCtrlsD11R.bpi;JvEDID11R.bpi;JvGlobusD11R.bpi;JvHMID11R.bpi;JvInterpreterD11R.bpi;JvJansD11R.bpi;JvManagedThreadsD11R.bpi;JvMMD11R.bpi;JvNetD11R.bpi;JvPageCompsD11R.bpi;JvPluginD11R.bpi;JvPrintPreviewD11R.bpi;JvRuntimeDesignD11R.bpi;JvTimeFrameworkD11R.bpi;JvValidatorsD11R.bpi;JvWizardD11R.bpi;JvXPCtrlsD11R.bpi;VclSmp.bpi;CExceptionExpert11.bpi false $(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;.. rtl.lib;vcl.lib 32 $(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk false false true _DEBUG;$(Defines) true false true None DEBUG true Debug true true true $(BDS)\lib\debug;$(ILINK_LibraryPath) Full true NDEBUG;$(Defines) Release $(BDS)\lib\release;$(ILINK_LibraryPath) None CPlusPlusBuilder.Personality CppStaticLibrary FalseFalse1000FalseFalseFalseFalseFalse103312521.0.0.01.0.0.0FalseFalseFalseTrueFalse CodeGear C++Builder Office 2000 Servers Package CodeGear C++Builder Office XP Servers Package FalseTrueTrue3$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;..$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\include;..$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\src;..\src;..\include1$(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk1NO_STRICT13216 0 Cfg_1 Cfg_2 ffc-2019.1.0.post0/libs/gtest-1.7.0/codegear/gtest_unittest.cbproj000066400000000000000000000207631346026536000243420ustar00rootroot00000000000000 {eea63393-5ac5-4b9c-8909-d75fef2daa41} Release true true true Base true true Base exe true NO_STRICT JPHNE true ..\test true CppConsoleApplication true true 
rtl.bpi;vcl.bpi;bcbie.bpi;vclx.bpi;vclactnband.bpi;xmlrtl.bpi;bcbsmp.bpi;dbrtl.bpi;vcldb.bpi;bdertl.bpi;vcldbx.bpi;dsnap.bpi;dsnapcon.bpi;vclib.bpi;ibxpress.bpi;adortl.bpi;dbxcds.bpi;dbexpress.bpi;DbxCommonDriver.bpi;websnap.bpi;vclie.bpi;webdsnap.bpi;inet.bpi;inetdbbde.bpi;inetdbxpress.bpi;soaprtl.bpi;Rave75VCL.bpi;teeUI.bpi;tee.bpi;teedb.bpi;IndyCore.bpi;IndySystem.bpi;IndyProtocols.bpi;IntrawebDB_90_100.bpi;Intraweb_90_100.bpi;Jcl.bpi;JclVcl.bpi;JvCoreD11R.bpi;JvSystemD11R.bpi;JvStdCtrlsD11R.bpi;JvAppFrmD11R.bpi;JvBandsD11R.bpi;JvDBD11R.bpi;JvDlgsD11R.bpi;JvBDED11R.bpi;JvCmpD11R.bpi;JvCryptD11R.bpi;JvCtrlsD11R.bpi;JvCustomD11R.bpi;JvDockingD11R.bpi;JvDotNetCtrlsD11R.bpi;JvEDID11R.bpi;JvGlobusD11R.bpi;JvHMID11R.bpi;JvInterpreterD11R.bpi;JvJansD11R.bpi;JvManagedThreadsD11R.bpi;JvMMD11R.bpi;JvNetD11R.bpi;JvPageCompsD11R.bpi;JvPluginD11R.bpi;JvPrintPreviewD11R.bpi;JvRuntimeDesignD11R.bpi;JvTimeFrameworkD11R.bpi;JvValidatorsD11R.bpi;JvWizardD11R.bpi;JvXPCtrlsD11R.bpi;VclSmp.bpi false $(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\include;..\test;.. $(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk;..\test true false false true _DEBUG;$(Defines) true false true None DEBUG true Debug true true true $(BDS)\lib\debug;$(ILINK_LibraryPath) Full true NDEBUG;$(Defines) Release $(BDS)\lib\release;$(ILINK_LibraryPath) None CPlusPlusBuilder.Personality CppConsoleApplication FalseFalse1000FalseFalseFalseFalseFalse103312521.0.0.01.0.0.0FalseFalseFalseTrueFalse CodeGear C++Builder Office 2000 Servers Package CodeGear C++Builder Office XP Servers Package FalseTrueTrue3$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\include;..\test;..$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\include;..\test$(BDS)\include;$(BDS)\include\dinkumware;$(BDS)\include\vcl;..\include1$(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk;..\test$(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk;..\test$(BDS)\lib;$(BDS)\lib\obj;$(BDS)\lib\psdk;$(OUTPUTDIR);..\test2NO_STRICTSTRICT 0 1 Cfg_1 Cfg_2 ffc-2019.1.0.post0/libs/gtest-1.7.0/configure000077500000000000000000021163601346026536000202130ustar00rootroot00000000000000#! /bin/sh # Guess values for system-dependent variables and create Makefiles. # Generated by GNU Autoconf 2.68 for Google C++ Testing Framework 1.7.0. # # Report bugs to . # # # Copyright (C) 1992, 1993, 1994, 1995, 1996, 1998, 1999, 2000, 2001, # 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010 Free Software # Foundation, Inc. # # # This configure script is free software; the Free Software Foundation # gives unlimited permission to copy, distribute and modify it. ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. 
if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH if test "x$CONFIG_SHELL" = x; then as_bourne_compatible="if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on \${1+\"\$@\"}, which # is contrary to our usage. Disable this feature. alias -g '\${1+\"\$@\"}'='\"\$@\"' setopt NO_GLOB_SUBST else case \`(set -o) 2>/dev/null\` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi " as_required="as_fn_return () { (exit \$1); } as_fn_success () { as_fn_return 0; } as_fn_failure () { as_fn_return 1; } as_fn_ret_success () { return 0; } as_fn_ret_failure () { return 1; } exitcode=0 as_fn_success || { exitcode=1; echo as_fn_success failed.; } as_fn_failure && { exitcode=1; echo as_fn_failure succeeded.; } as_fn_ret_success || { exitcode=1; echo as_fn_ret_success failed.; } as_fn_ret_failure && { exitcode=1; echo as_fn_ret_failure succeeded.; } if ( set x; as_fn_ret_success y && test x = \"\$1\" ); then : else exitcode=1; echo positional parameters were not saved. 
fi test x\$exitcode = x0 || exit 1" as_suggested=" as_lineno_1=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_1a=\$LINENO as_lineno_2=";as_suggested=$as_suggested$LINENO;as_suggested=$as_suggested" as_lineno_2a=\$LINENO eval 'test \"x\$as_lineno_1'\$as_run'\" != \"x\$as_lineno_2'\$as_run'\" && test \"x\`expr \$as_lineno_1'\$as_run' + 1\`\" = \"x\$as_lineno_2'\$as_run'\"' || exit 1 test -n \"\${ZSH_VERSION+set}\${BASH_VERSION+set}\" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO ECHO=\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO\$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test \"X\`printf %s \$ECHO\`\" = \"X\$ECHO\" \\ || test \"X\`print -r -- \$ECHO\`\" = \"X\$ECHO\" ) || exit 1 test \$(( 1 + 1 )) = 2 || exit 1" if (eval "$as_required") 2>/dev/null; then : as_have_required=yes else as_have_required=no fi if test x$as_have_required = xyes && (eval "$as_suggested") 2>/dev/null; then : else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR as_found=false for as_dir in /bin$PATH_SEPARATOR/usr/bin$PATH_SEPARATOR$PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. as_found=: case $as_dir in #( /*) for as_base in sh bash ksh sh5; do # Try only shells that exist, to save several forks. as_shell=$as_dir/$as_base if { test -f "$as_shell" || test -f "$as_shell.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$as_shell"; } 2>/dev/null; then : CONFIG_SHELL=$as_shell as_have_required=yes if { $as_echo "$as_bourne_compatible""$as_suggested" | as_run=a "$as_shell"; } 2>/dev/null; then : break 2 fi fi done;; esac as_found=false done $as_found || { if { test -f "$SHELL" || test -f "$SHELL.exe"; } && { $as_echo "$as_bourne_compatible""$as_required" | as_run=a "$SHELL"; } 2>/dev/null; then : CONFIG_SHELL=$SHELL as_have_required=yes fi; } IFS=$as_save_IFS if test "x$CONFIG_SHELL" != x; then : # We cannot yet assume a decent shell, so we have to provide a # neutralization value for shells without unset; and this also # works around shells that cannot unset nonexistent variables. # Preserve -v and -x to the replacement shell. BASH_ENV=/dev/null ENV=/dev/null (unset BASH_ENV) >/dev/null 2>&1 && unset BASH_ENV ENV export CONFIG_SHELL case $- in # (((( *v*x* | *x*v* ) as_opts=-vx ;; *v* ) as_opts=-v ;; *x* ) as_opts=-x ;; * ) as_opts= ;; esac exec "$CONFIG_SHELL" $as_opts "$as_myself" ${1+"$@"} fi if test x$as_have_required = xno; then : $as_echo "$0: This script requires a shell more modern than all" $as_echo "$0: the shells that I found on your system." if test x${ZSH_VERSION+set} = xset ; then $as_echo "$0: In particular, zsh $ZSH_VERSION has bugs and should" $as_echo "$0: be upgraded to zsh 4.3.4 or later." else $as_echo "$0: Please tell bug-autoconf@gnu.org and $0: googletestframework@googlegroups.com about your system, $0: including any error possibly output before this $0: message. Then install a modern shell, or manually run $0: the script under such a shell if you do have one." fi exit 1 fi fi fi SHELL=${CONFIG_SHELL-/bin/sh} export SHELL # Unset more variables known to interfere with behavior of common tools. CLICOLOR_FORCE= GREP_OPTIONS= unset CLICOLOR_FORCE GREP_OPTIONS ## --------------------- ## ## M4sh Shell Functions. ## ## --------------------- ## # as_fn_unset VAR # --------------- # Portably unset VAR. 
as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? -eq 1` } fi # as_fn_arith # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. 
as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits as_lineno_1=$LINENO as_lineno_1a=$LINENO as_lineno_2=$LINENO as_lineno_2a=$LINENO eval 'test "x$as_lineno_1'$as_run'" != "x$as_lineno_2'$as_run'" && test "x`expr $as_lineno_1'$as_run' + 1`" = "x$as_lineno_2'$as_run'"' || { # Blame Lee E. McMahon (1931-1989) for sed's syntax. :-) sed -n ' p /[$]LINENO/= ' <$as_myself | sed ' s/[$]LINENO.*/&-/ t lineno b :lineno N :loop s/[$]LINENO\([^'$as_cr_alnum'_].*\n\)\(.*\)/\2\1\2/ t loop s/-\n.*// ' >$as_me.lineno && chmod +x "$as_me.lineno" || { $as_echo "$as_me: error: cannot create $as_me.lineno; rerun with a POSIX shell" >&2; as_fn_exit 1; } # Don't try to exec as it changes $[0], causing all sort of problems # (the dirname of $[0] is not the place where we might find the # original and so on. Autoconf is especially sensitive to this). . "./$as_me.lineno" # Exit status is that of the last command. exit } ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -p'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -p' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi if test -x / >/dev/null 2>&1; then as_test_x='test -x' else if ls -dL / >/dev/null 2>&1; then as_ls_L_option=L else as_ls_L_option= fi as_test_x=' eval sh -c '\'' if test -d "$1"; then test -d "$1/."; else case $1 in #( -*)set "./$1";; esac; case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( ???[sx]*):;;*)false;;esac;fi '\'' sh ' fi as_executable_p=$as_test_x # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" SHELL=${CONFIG_SHELL-/bin/sh} test -n "$DJDIR" || exec 7<&0 &1 # Name of the host. # hostname on some systems (SVR3.2, old GNU/Linux) returns a bogus exit status, # so uname gets run too. ac_hostname=`(hostname || uname -n) 2>/dev/null | sed 1q` # # Initializations. # ac_default_prefix=/usr/local ac_clean_files= ac_config_libobj_dir=. LIBOBJS= cross_compiling=no subdirs= MFLAGS= MAKEFLAGS= # Identity of this package. PACKAGE_NAME='Google C++ Testing Framework' PACKAGE_TARNAME='gtest' PACKAGE_VERSION='1.7.0' PACKAGE_STRING='Google C++ Testing Framework 1.7.0' PACKAGE_BUGREPORT='googletestframework@googlegroups.com' PACKAGE_URL='' ac_unique_file="./LICENSE" # Factoring default headers for most tests. 
ac_includes_default="\ #include #ifdef HAVE_SYS_TYPES_H # include #endif #ifdef HAVE_SYS_STAT_H # include #endif #ifdef STDC_HEADERS # include # include #else # ifdef HAVE_STDLIB_H # include # endif #endif #ifdef HAVE_STRING_H # if !defined STDC_HEADERS && defined HAVE_MEMORY_H # include # endif # include #endif #ifdef HAVE_STRINGS_H # include #endif #ifdef HAVE_INTTYPES_H # include #endif #ifdef HAVE_STDINT_H # include #endif #ifdef HAVE_UNISTD_H # include #endif" ac_subst_vars='am__EXEEXT_FALSE am__EXEEXT_TRUE LTLIBOBJS LIBOBJS HAVE_PTHREADS_FALSE HAVE_PTHREADS_TRUE PTHREAD_CFLAGS PTHREAD_LIBS PTHREAD_CC acx_pthread_config HAVE_PYTHON_FALSE HAVE_PYTHON_TRUE PYTHON CXXCPP CPP OTOOL64 OTOOL LIPO NMEDIT DSYMUTIL MANIFEST_TOOL RANLIB ac_ct_AR AR DLLTOOL OBJDUMP LN_S NM ac_ct_DUMPBIN DUMPBIN LD FGREP EGREP GREP SED host_os host_vendor host_cpu host build_os build_vendor build_cpu build LIBTOOL am__fastdepCXX_FALSE am__fastdepCXX_TRUE CXXDEPMODE ac_ct_CXX CXXFLAGS CXX am__fastdepCC_FALSE am__fastdepCC_TRUE CCDEPMODE am__nodep AMDEPBACKSLASH AMDEP_FALSE AMDEP_TRUE am__quote am__include DEPDIR OBJEXT EXEEXT ac_ct_CC CPPFLAGS LDFLAGS CFLAGS CC am__untar am__tar AMTAR am__leading_dot SET_MAKE AWK mkdir_p MKDIR_P INSTALL_STRIP_PROGRAM STRIP install_sh MAKEINFO AUTOHEADER AUTOMAKE AUTOCONF ACLOCAL VERSION PACKAGE CYGPATH_W am__isrc INSTALL_DATA INSTALL_SCRIPT INSTALL_PROGRAM target_alias host_alias build_alias LIBS ECHO_T ECHO_N ECHO_C DEFS mandir localedir libdir psdir pdfdir dvidir htmldir infodir docdir oldincludedir includedir localstatedir sharedstatedir sysconfdir datadir datarootdir libexecdir sbindir bindir program_transform_name prefix exec_prefix PACKAGE_URL PACKAGE_BUGREPORT PACKAGE_STRING PACKAGE_VERSION PACKAGE_TARNAME PACKAGE_NAME PATH_SEPARATOR SHELL' ac_subst_files='' ac_user_opts=' enable_option_checking enable_dependency_tracking enable_shared enable_static with_pic enable_fast_install with_gnu_ld with_sysroot enable_libtool_lock with_pthreads ' ac_precious_vars='build_alias host_alias target_alias CC CFLAGS LDFLAGS LIBS CPPFLAGS CXX CXXFLAGS CCC CPP CXXCPP' # Initialize some variables set by options. ac_init_help= ac_init_version=false ac_unrecognized_opts= ac_unrecognized_sep= # The variables have the same names as the options, with # dashes changed to underlines. cache_file=/dev/null exec_prefix=NONE no_create= no_recursion= prefix=NONE program_prefix=NONE program_suffix=NONE program_transform_name=s,x,x, silent= site= srcdir= verbose= x_includes=NONE x_libraries=NONE # Installation directory options. # These are left unexpanded so users can "make install exec_prefix=/foo" # and all the variables that are supposed to be based on exec_prefix # by default will actually change. # Use braces instead of parens because sh, perl, etc. also accept them. # (The list follows the same order as the GNU Coding Standards.) bindir='${exec_prefix}/bin' sbindir='${exec_prefix}/sbin' libexecdir='${exec_prefix}/libexec' datarootdir='${prefix}/share' datadir='${datarootdir}' sysconfdir='${prefix}/etc' sharedstatedir='${prefix}/com' localstatedir='${prefix}/var' includedir='${prefix}/include' oldincludedir='/usr/include' docdir='${datarootdir}/doc/${PACKAGE_TARNAME}' infodir='${datarootdir}/info' htmldir='${docdir}' dvidir='${docdir}' pdfdir='${docdir}' psdir='${docdir}' libdir='${exec_prefix}/lib' localedir='${datarootdir}/locale' mandir='${datarootdir}/man' ac_prev= ac_dashdash= for ac_option do # If the previous option needs an argument, assign it. 
if test -n "$ac_prev"; then eval $ac_prev=\$ac_option ac_prev= continue fi case $ac_option in *=?*) ac_optarg=`expr "X$ac_option" : '[^=]*=\(.*\)'` ;; *=) ac_optarg= ;; *) ac_optarg=yes ;; esac # Accept the important Cygnus configure options, so we can diagnose typos. case $ac_dashdash$ac_option in --) ac_dashdash=yes ;; -bindir | --bindir | --bindi | --bind | --bin | --bi) ac_prev=bindir ;; -bindir=* | --bindir=* | --bindi=* | --bind=* | --bin=* | --bi=*) bindir=$ac_optarg ;; -build | --build | --buil | --bui | --bu) ac_prev=build_alias ;; -build=* | --build=* | --buil=* | --bui=* | --bu=*) build_alias=$ac_optarg ;; -cache-file | --cache-file | --cache-fil | --cache-fi \ | --cache-f | --cache- | --cache | --cach | --cac | --ca | --c) ac_prev=cache_file ;; -cache-file=* | --cache-file=* | --cache-fil=* | --cache-fi=* \ | --cache-f=* | --cache-=* | --cache=* | --cach=* | --cac=* | --ca=* | --c=*) cache_file=$ac_optarg ;; --config-cache | -C) cache_file=config.cache ;; -datadir | --datadir | --datadi | --datad) ac_prev=datadir ;; -datadir=* | --datadir=* | --datadi=* | --datad=*) datadir=$ac_optarg ;; -datarootdir | --datarootdir | --datarootdi | --datarootd | --dataroot \ | --dataroo | --dataro | --datar) ac_prev=datarootdir ;; -datarootdir=* | --datarootdir=* | --datarootdi=* | --datarootd=* \ | --dataroot=* | --dataroo=* | --dataro=* | --datar=*) datarootdir=$ac_optarg ;; -disable-* | --disable-*) ac_useropt=`expr "x$ac_option" : 'x-*disable-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--disable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=no ;; -docdir | --docdir | --docdi | --doc | --do) ac_prev=docdir ;; -docdir=* | --docdir=* | --docdi=* | --doc=* | --do=*) docdir=$ac_optarg ;; -dvidir | --dvidir | --dvidi | --dvid | --dvi | --dv) ac_prev=dvidir ;; -dvidir=* | --dvidir=* | --dvidi=* | --dvid=* | --dvi=* | --dv=*) dvidir=$ac_optarg ;; -enable-* | --enable-*) ac_useropt=`expr "x$ac_option" : 'x-*enable-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid feature name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "enable_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--enable-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval enable_$ac_useropt=\$ac_optarg ;; -exec-prefix | --exec_prefix | --exec-prefix | --exec-prefi \ | --exec-pref | --exec-pre | --exec-pr | --exec-p | --exec- \ | --exec | --exe | --ex) ac_prev=exec_prefix ;; -exec-prefix=* | --exec_prefix=* | --exec-prefix=* | --exec-prefi=* \ | --exec-pref=* | --exec-pre=* | --exec-pr=* | --exec-p=* | --exec-=* \ | --exec=* | --exe=* | --ex=*) exec_prefix=$ac_optarg ;; -gas | --gas | --ga | --g) # Obsolete; use --with-gas. 
with_gas=yes ;; -help | --help | --hel | --he | -h) ac_init_help=long ;; -help=r* | --help=r* | --hel=r* | --he=r* | -hr*) ac_init_help=recursive ;; -help=s* | --help=s* | --hel=s* | --he=s* | -hs*) ac_init_help=short ;; -host | --host | --hos | --ho) ac_prev=host_alias ;; -host=* | --host=* | --hos=* | --ho=*) host_alias=$ac_optarg ;; -htmldir | --htmldir | --htmldi | --htmld | --html | --htm | --ht) ac_prev=htmldir ;; -htmldir=* | --htmldir=* | --htmldi=* | --htmld=* | --html=* | --htm=* \ | --ht=*) htmldir=$ac_optarg ;; -includedir | --includedir | --includedi | --included | --include \ | --includ | --inclu | --incl | --inc) ac_prev=includedir ;; -includedir=* | --includedir=* | --includedi=* | --included=* | --include=* \ | --includ=* | --inclu=* | --incl=* | --inc=*) includedir=$ac_optarg ;; -infodir | --infodir | --infodi | --infod | --info | --inf) ac_prev=infodir ;; -infodir=* | --infodir=* | --infodi=* | --infod=* | --info=* | --inf=*) infodir=$ac_optarg ;; -libdir | --libdir | --libdi | --libd) ac_prev=libdir ;; -libdir=* | --libdir=* | --libdi=* | --libd=*) libdir=$ac_optarg ;; -libexecdir | --libexecdir | --libexecdi | --libexecd | --libexec \ | --libexe | --libex | --libe) ac_prev=libexecdir ;; -libexecdir=* | --libexecdir=* | --libexecdi=* | --libexecd=* | --libexec=* \ | --libexe=* | --libex=* | --libe=*) libexecdir=$ac_optarg ;; -localedir | --localedir | --localedi | --localed | --locale) ac_prev=localedir ;; -localedir=* | --localedir=* | --localedi=* | --localed=* | --locale=*) localedir=$ac_optarg ;; -localstatedir | --localstatedir | --localstatedi | --localstated \ | --localstate | --localstat | --localsta | --localst | --locals) ac_prev=localstatedir ;; -localstatedir=* | --localstatedir=* | --localstatedi=* | --localstated=* \ | --localstate=* | --localstat=* | --localsta=* | --localst=* | --locals=*) localstatedir=$ac_optarg ;; -mandir | --mandir | --mandi | --mand | --man | --ma | --m) ac_prev=mandir ;; -mandir=* | --mandir=* | --mandi=* | --mand=* | --man=* | --ma=* | --m=*) mandir=$ac_optarg ;; -nfp | --nfp | --nf) # Obsolete; use --without-fp. 
with_fp=no ;; -no-create | --no-create | --no-creat | --no-crea | --no-cre \ | --no-cr | --no-c | -n) no_create=yes ;; -no-recursion | --no-recursion | --no-recursio | --no-recursi \ | --no-recurs | --no-recur | --no-recu | --no-rec | --no-re | --no-r) no_recursion=yes ;; -oldincludedir | --oldincludedir | --oldincludedi | --oldincluded \ | --oldinclude | --oldinclud | --oldinclu | --oldincl | --oldinc \ | --oldin | --oldi | --old | --ol | --o) ac_prev=oldincludedir ;; -oldincludedir=* | --oldincludedir=* | --oldincludedi=* | --oldincluded=* \ | --oldinclude=* | --oldinclud=* | --oldinclu=* | --oldincl=* | --oldinc=* \ | --oldin=* | --oldi=* | --old=* | --ol=* | --o=*) oldincludedir=$ac_optarg ;; -prefix | --prefix | --prefi | --pref | --pre | --pr | --p) ac_prev=prefix ;; -prefix=* | --prefix=* | --prefi=* | --pref=* | --pre=* | --pr=* | --p=*) prefix=$ac_optarg ;; -program-prefix | --program-prefix | --program-prefi | --program-pref \ | --program-pre | --program-pr | --program-p) ac_prev=program_prefix ;; -program-prefix=* | --program-prefix=* | --program-prefi=* \ | --program-pref=* | --program-pre=* | --program-pr=* | --program-p=*) program_prefix=$ac_optarg ;; -program-suffix | --program-suffix | --program-suffi | --program-suff \ | --program-suf | --program-su | --program-s) ac_prev=program_suffix ;; -program-suffix=* | --program-suffix=* | --program-suffi=* \ | --program-suff=* | --program-suf=* | --program-su=* | --program-s=*) program_suffix=$ac_optarg ;; -program-transform-name | --program-transform-name \ | --program-transform-nam | --program-transform-na \ | --program-transform-n | --program-transform- \ | --program-transform | --program-transfor \ | --program-transfo | --program-transf \ | --program-trans | --program-tran \ | --progr-tra | --program-tr | --program-t) ac_prev=program_transform_name ;; -program-transform-name=* | --program-transform-name=* \ | --program-transform-nam=* | --program-transform-na=* \ | --program-transform-n=* | --program-transform-=* \ | --program-transform=* | --program-transfor=* \ | --program-transfo=* | --program-transf=* \ | --program-trans=* | --program-tran=* \ | --progr-tra=* | --program-tr=* | --program-t=*) program_transform_name=$ac_optarg ;; -pdfdir | --pdfdir | --pdfdi | --pdfd | --pdf | --pd) ac_prev=pdfdir ;; -pdfdir=* | --pdfdir=* | --pdfdi=* | --pdfd=* | --pdf=* | --pd=*) pdfdir=$ac_optarg ;; -psdir | --psdir | --psdi | --psd | --ps) ac_prev=psdir ;; -psdir=* | --psdir=* | --psdi=* | --psd=* | --ps=*) psdir=$ac_optarg ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) silent=yes ;; -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb) ac_prev=sbindir ;; -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \ | --sbi=* | --sb=*) sbindir=$ac_optarg ;; -sharedstatedir | --sharedstatedir | --sharedstatedi \ | --sharedstated | --sharedstate | --sharedstat | --sharedsta \ | --sharedst | --shareds | --shared | --share | --shar \ | --sha | --sh) ac_prev=sharedstatedir ;; -sharedstatedir=* | --sharedstatedir=* | --sharedstatedi=* \ | --sharedstated=* | --sharedstate=* | --sharedstat=* | --sharedsta=* \ | --sharedst=* | --shareds=* | --shared=* | --share=* | --shar=* \ | --sha=* | --sh=*) sharedstatedir=$ac_optarg ;; -site | --site | --sit) ac_prev=site ;; -site=* | --site=* | --sit=*) site=$ac_optarg ;; -srcdir | --srcdir | --srcdi | --srcd | --src | --sr) ac_prev=srcdir ;; -srcdir=* | --srcdir=* | --srcdi=* | --srcd=* | --src=* | --sr=*) srcdir=$ac_optarg ;; 
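# Each option above follows the same two-branch idiom: the first pattern
# list (the option name plus its unambiguous abbreviations) stores the name
# in ac_prev so that the *next* command-line argument becomes its value,
# while the second list (the same patterns with '=*') takes the value from
# the option itself.  Illustrative invocations, not part of the generated
# script:
#   ./configure --srcdir ../googletest     (value picked up via ac_prev)
#   ./configure --srcd=../googletest       (abbreviated, with inline value)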
-sysconfdir | --sysconfdir | --sysconfdi | --sysconfd | --sysconf \ | --syscon | --sysco | --sysc | --sys | --sy) ac_prev=sysconfdir ;; -sysconfdir=* | --sysconfdir=* | --sysconfdi=* | --sysconfd=* | --sysconf=* \ | --syscon=* | --sysco=* | --sysc=* | --sys=* | --sy=*) sysconfdir=$ac_optarg ;; -target | --target | --targe | --targ | --tar | --ta | --t) ac_prev=target_alias ;; -target=* | --target=* | --targe=* | --targ=* | --tar=* | --ta=* | --t=*) target_alias=$ac_optarg ;; -v | -verbose | --verbose | --verbos | --verbo | --verb) verbose=yes ;; -version | --version | --versio | --versi | --vers | -V) ac_init_version=: ;; -with-* | --with-*) ac_useropt=`expr "x$ac_option" : 'x-*with-\([^=]*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--with-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=\$ac_optarg ;; -without-* | --without-*) ac_useropt=`expr "x$ac_option" : 'x-*without-\(.*\)'` # Reject names that are not valid shell variable names. expr "x$ac_useropt" : ".*[^-+._$as_cr_alnum]" >/dev/null && as_fn_error $? "invalid package name: $ac_useropt" ac_useropt_orig=$ac_useropt ac_useropt=`$as_echo "$ac_useropt" | sed 's/[-+.]/_/g'` case $ac_user_opts in *" "with_$ac_useropt" "*) ;; *) ac_unrecognized_opts="$ac_unrecognized_opts$ac_unrecognized_sep--without-$ac_useropt_orig" ac_unrecognized_sep=', ';; esac eval with_$ac_useropt=no ;; --x) # Obsolete; use --with-x. with_x=yes ;; -x-includes | --x-includes | --x-include | --x-includ | --x-inclu \ | --x-incl | --x-inc | --x-in | --x-i) ac_prev=x_includes ;; -x-includes=* | --x-includes=* | --x-include=* | --x-includ=* | --x-inclu=* \ | --x-incl=* | --x-inc=* | --x-in=* | --x-i=*) x_includes=$ac_optarg ;; -x-libraries | --x-libraries | --x-librarie | --x-librari \ | --x-librar | --x-libra | --x-libr | --x-lib | --x-li | --x-l) ac_prev=x_libraries ;; -x-libraries=* | --x-libraries=* | --x-librarie=* | --x-librari=* \ | --x-librar=* | --x-libra=* | --x-libr=* | --x-lib=* | --x-li=* | --x-l=*) x_libraries=$ac_optarg ;; -*) as_fn_error $? "unrecognized option: \`$ac_option' Try \`$0 --help' for more information" ;; *=*) ac_envvar=`expr "x$ac_option" : 'x\([^=]*\)='` # Reject names that are not valid shell variable names. case $ac_envvar in #( '' | [0-9]* | *[!_$as_cr_alnum]* ) as_fn_error $? "invalid variable name: \`$ac_envvar'" ;; esac eval $ac_envvar=\$ac_optarg export $ac_envvar ;; *) # FIXME: should be removed in autoconf 3.0. $as_echo "$as_me: WARNING: you should use --build, --host, --target" >&2 expr "x$ac_option" : ".*[^-._$as_cr_alnum]" >/dev/null && $as_echo "$as_me: WARNING: invalid host type: $ac_option" >&2 : "${build_alias=$ac_option} ${host_alias=$ac_option} ${target_alias=$ac_option}" ;; esac done if test -n "$ac_prev"; then ac_option=--`echo $ac_prev | sed 's/_/-/g'` as_fn_error $? "missing argument to $ac_option" fi if test -n "$ac_unrecognized_opts"; then case $enable_option_checking in no) ;; fatal) as_fn_error $? "unrecognized options: $ac_unrecognized_opts" ;; *) $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2 ;; esac fi # Check all directory arguments for consistency. 
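# For illustration (values invented for this note): a trailing slash is
# stripped below, so --libdir=/opt/gtest/lib/ is stored as /opt/gtest/lib,
# while a relative value such as --libdir=lib aborts with
# "expected an absolute directory name for --libdir".  Only prefix and
# exec_prefix may still be NONE at this point; they are given their
# defaults later.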
for ac_var in exec_prefix prefix bindir sbindir libexecdir datarootdir \ datadir sysconfdir sharedstatedir localstatedir includedir \ oldincludedir docdir infodir htmldir dvidir pdfdir psdir \ libdir localedir mandir do eval ac_val=\$$ac_var # Remove trailing slashes. case $ac_val in */ ) ac_val=`expr "X$ac_val" : 'X\(.*[^/]\)' \| "X$ac_val" : 'X\(.*\)'` eval $ac_var=\$ac_val;; esac # Be sure to have absolute directory names. case $ac_val in [\\/$]* | ?:[\\/]* ) continue;; NONE | '' ) case $ac_var in *prefix ) continue;; esac;; esac as_fn_error $? "expected an absolute directory name for --$ac_var: $ac_val" done # There might be people who depend on the old broken behavior: `$host' # used to hold the argument of --host etc. # FIXME: To remove some day. build=$build_alias host=$host_alias target=$target_alias # FIXME: To remove some day. if test "x$host_alias" != x; then if test "x$build_alias" = x; then cross_compiling=maybe $as_echo "$as_me: WARNING: if you wanted to set the --build type, don't use --host. If a cross compiler is detected then cross compile mode will be used" >&2 elif test "x$build_alias" != "x$host_alias"; then cross_compiling=yes fi fi ac_tool_prefix= test -n "$host_alias" && ac_tool_prefix=$host_alias- test "$silent" = yes && exec 6>/dev/null ac_pwd=`pwd` && test -n "$ac_pwd" && ac_ls_di=`ls -di .` && ac_pwd_ls_di=`cd "$ac_pwd" && ls -di .` || as_fn_error $? "working directory cannot be determined" test "X$ac_ls_di" = "X$ac_pwd_ls_di" || as_fn_error $? "pwd does not report name of working directory" # Find the source files, if location was not specified. if test -z "$srcdir"; then ac_srcdir_defaulted=yes # Try the directory containing this script, then the parent directory. ac_confdir=`$as_dirname -- "$as_myself" || $as_expr X"$as_myself" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_myself" : 'X\(//\)[^/]' \| \ X"$as_myself" : 'X\(//\)$' \| \ X"$as_myself" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_myself" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` srcdir=$ac_confdir if test ! -r "$srcdir/$ac_unique_file"; then srcdir=.. fi else ac_srcdir_defaulted=no fi if test ! -r "$srcdir/$ac_unique_file"; then test "$ac_srcdir_defaulted" = yes && srcdir="$ac_confdir or .." as_fn_error $? "cannot find sources ($ac_unique_file) in $srcdir" fi ac_msg="sources are in $srcdir, but \`cd $srcdir' does not work" ac_abs_confdir=`( cd "$srcdir" && test -r "./$ac_unique_file" || as_fn_error $? "$ac_msg" pwd)` # When building in place, set srcdir=. if test "$ac_abs_confdir" = "$ac_pwd"; then srcdir=. fi # Remove unnecessary trailing slashes from srcdir. # Double slashes in file names in object file debugging info # mess up M-x gdb in Emacs. case $srcdir in */) srcdir=`expr "X$srcdir" : 'X\(.*[^/]\)' \| "X$srcdir" : 'X\(.*\)'`;; esac for ac_var in $ac_precious_vars; do eval ac_env_${ac_var}_set=\${${ac_var}+set} eval ac_env_${ac_var}_value=\$${ac_var} eval ac_cv_env_${ac_var}_set=\${${ac_var}+set} eval ac_cv_env_${ac_var}_value=\$${ac_var} done # # Report the --help message. # if test "$ac_init_help" = "long"; then # Omit some internal or obsolete options to make the list less imposing. # This message is too long to be a string in the A/UX 3.1 sh. cat <<_ACEOF \`configure' configures Google C++ Testing Framework 1.7.0 to adapt to many kinds of systems. Usage: $0 [OPTION]... [VAR=VALUE]... To assign environment variables (e.g., CC, CFLAGS...), specify them as VAR=VALUE. 
See below for descriptions of some of the useful variables. Defaults for the options are specified in brackets. Configuration: -h, --help display this help and exit --help=short display options specific to this package --help=recursive display the short help of all the included packages -V, --version display version information and exit -q, --quiet, --silent do not print \`checking ...' messages --cache-file=FILE cache test results in FILE [disabled] -C, --config-cache alias for \`--cache-file=config.cache' -n, --no-create do not create output files --srcdir=DIR find the sources in DIR [configure dir or \`..'] Installation directories: --prefix=PREFIX install architecture-independent files in PREFIX [$ac_default_prefix] --exec-prefix=EPREFIX install architecture-dependent files in EPREFIX [PREFIX] By default, \`make install' will install all the files in \`$ac_default_prefix/bin', \`$ac_default_prefix/lib' etc. You can specify an installation prefix other than \`$ac_default_prefix' using \`--prefix', for instance \`--prefix=\$HOME'. For better control, use the options below. Fine tuning of the installation directories: --bindir=DIR user executables [EPREFIX/bin] --sbindir=DIR system admin executables [EPREFIX/sbin] --libexecdir=DIR program executables [EPREFIX/libexec] --sysconfdir=DIR read-only single-machine data [PREFIX/etc] --sharedstatedir=DIR modifiable architecture-independent data [PREFIX/com] --localstatedir=DIR modifiable single-machine data [PREFIX/var] --libdir=DIR object code libraries [EPREFIX/lib] --includedir=DIR C header files [PREFIX/include] --oldincludedir=DIR C header files for non-gcc [/usr/include] --datarootdir=DIR read-only arch.-independent data root [PREFIX/share] --datadir=DIR read-only architecture-independent data [DATAROOTDIR] --infodir=DIR info documentation [DATAROOTDIR/info] --localedir=DIR locale-dependent data [DATAROOTDIR/locale] --mandir=DIR man documentation [DATAROOTDIR/man] --docdir=DIR documentation root [DATAROOTDIR/doc/gtest] --htmldir=DIR html documentation [DOCDIR] --dvidir=DIR dvi documentation [DOCDIR] --pdfdir=DIR pdf documentation [DOCDIR] --psdir=DIR ps documentation [DOCDIR] _ACEOF cat <<\_ACEOF Program names: --program-prefix=PREFIX prepend PREFIX to installed program names --program-suffix=SUFFIX append SUFFIX to installed program names --program-transform-name=PROGRAM run sed PROGRAM on installed program names System types: --build=BUILD configure for building on BUILD [guessed] --host=HOST cross-compile to build programs to run on HOST [BUILD] _ACEOF fi if test -n "$ac_init_help"; then case $ac_init_help in short | recursive ) echo "Configuration of Google C++ Testing Framework 1.7.0:";; esac cat <<\_ACEOF Optional Features: --disable-option-checking ignore unrecognized --enable/--with options --disable-FEATURE do not include FEATURE (same as --enable-FEATURE=no) --enable-FEATURE[=ARG] include FEATURE [ARG=yes] --disable-dependency-tracking speeds up one-time build --enable-dependency-tracking do not reject slow dependency extractors --enable-shared[=PKGS] build shared libraries [default=yes] --enable-static[=PKGS] build static libraries [default=yes] --enable-fast-install[=PKGS] optimize for fast installation [default=yes] --disable-libtool-lock avoid locking (might break parallel builds) Optional Packages: --with-PACKAGE[=ARG] use PACKAGE [ARG=yes] --without-PACKAGE do not use PACKAGE (same as --with-PACKAGE=no) --with-pic[=PKGS] try to use only PIC/non-PIC objects [default=use both] --with-gnu-ld assume the C compiler uses GNU ld 
[default=no] --with-sysroot=DIR Search for dependent libraries within DIR (or the compiler's sysroot if not specified). --with-pthreads use pthreads (default is yes) Some influential environment variables: CC C compiler command CFLAGS C compiler flags LDFLAGS linker flags, e.g. -L if you have libraries in a nonstandard directory LIBS libraries to pass to the linker, e.g. -l CPPFLAGS (Objective) C/C++ preprocessor flags, e.g. -I if you have headers in a nonstandard directory CXX C++ compiler command CXXFLAGS C++ compiler flags CPP C preprocessor CXXCPP C++ preprocessor Use these variables to override the choices made by `configure' or to help it to find libraries and programs with nonstandard names/locations. Report bugs to . _ACEOF ac_status=$? fi if test "$ac_init_help" = "recursive"; then # If there are subdirs, report their specific --help. for ac_dir in : $ac_subdirs_all; do test "x$ac_dir" = x: && continue test -d "$ac_dir" || { cd "$srcdir" && ac_pwd=`pwd` && srcdir=. && test -d "$ac_dir"; } || continue ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix cd "$ac_dir" || { ac_status=$?; continue; } # Check for guested configure. if test -f "$ac_srcdir/configure.gnu"; then echo && $SHELL "$ac_srcdir/configure.gnu" --help=recursive elif test -f "$ac_srcdir/configure"; then echo && $SHELL "$ac_srcdir/configure" --help=recursive else $as_echo "$as_me: WARNING: no configuration information is in $ac_dir" >&2 fi || ac_status=$? cd "$ac_pwd" || { ac_status=$?; break; } done fi test -n "$ac_init_help" && exit $ac_status if $ac_init_version; then cat <<\_ACEOF Google C++ Testing Framework configure 1.7.0 generated by GNU Autoconf 2.68 Copyright (C) 2010 Free Software Foundation, Inc. This configure script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it. _ACEOF exit fi ## ------------------------ ## ## Autoconf initialization. ## ## ------------------------ ## # ac_fn_c_try_compile LINENO # -------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? 
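# The compiler's stderr was captured in conftest.err just above.  The lines
# that follow strip leading '+' trace noise from it, copy the remainder to
# fd 5 (config.log), and count the compilation as successful only if the
# exit status is zero, an object file was produced and, when
# ac_c_werror_flag is set, stderr stayed empty.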
if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_compile # ac_fn_cxx_try_compile LINENO # ---------------------------- # Try to compile conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest.$ac_objext; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_compile # ac_fn_c_try_link LINENO # ----------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_c_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || $as_test_x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_link # ac_fn_c_check_header_compile LINENO HEADER VAR INCLUDES # ------------------------------------------------------- # Tests whether HEADER exists and can be compiled using the include files in # INCLUDES, setting the cache variable VAR accordingly. ac_fn_c_check_header_compile () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... 
" >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ $4 #include <$2> _ACEOF if ac_fn_c_try_compile "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_header_compile # ac_fn_c_try_cpp LINENO # ---------------------- # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_c_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_c_preproc_warn_flag$ac_c_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_cpp # ac_fn_c_try_run LINENO # ---------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. Assumes # that executables *can* be run. ac_fn_c_try_run () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { ac_try='./conftest$ac_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then : ac_retval=0 else $as_echo "$as_me: program exited with status $ac_status" >&5 $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=$ac_status fi rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_c_try_run # ac_fn_c_check_func LINENO FUNC VAR # ---------------------------------- # Tests whether FUNC exists, setting the cache variable VAR accordingly ac_fn_c_check_func () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $2" >&5 $as_echo_n "checking for $2... " >&6; } if eval \${$3+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Define $2 to an innocuous variant, in case declares $2. For example, HP-UX 11i declares gettimeofday. 
*/ #define $2 innocuous_$2 /* System header to define __stub macros and hopefully few prototypes, which can conflict with char $2 (); below. Prefer to if __STDC__ is defined, since exists even on freestanding compilers. */ #ifdef __STDC__ # include #else # include #endif #undef $2 /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char $2 (); /* The GNU C library defines this for functions which it implements to always fail with ENOSYS. Some functions are actually named something starting with __ and the normal name is an alias. */ #if defined __stub_$2 || defined __stub___$2 choke me #endif int main () { return $2 (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : eval "$3=yes" else eval "$3=no" fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext fi eval ac_res=\$$3 { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_res" >&5 $as_echo "$ac_res" >&6; } eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno } # ac_fn_c_check_func # ac_fn_cxx_try_cpp LINENO # ------------------------ # Try to preprocess conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_cpp () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack if { { ac_try="$ac_cpp conftest.$ac_ext" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_cpp conftest.$ac_ext") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } > conftest.i && { test -z "$ac_cxx_preproc_warn_flag$ac_cxx_werror_flag" || test ! -s conftest.err }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_cpp # ac_fn_cxx_try_link LINENO # ------------------------- # Try to link conftest.$ac_ext, and return whether this succeeded. ac_fn_cxx_try_link () { as_lineno=${as_lineno-"$1"} as_lineno_stack=as_lineno_stack=$as_lineno_stack rm -f conftest.$ac_objext conftest$ac_exeext if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>conftest.err ac_status=$? if test -s conftest.err; then grep -v '^ *+' conftest.err >conftest.er1 cat conftest.er1 >&5 mv -f conftest.er1 conftest.err fi $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && { test -z "$ac_cxx_werror_flag" || test ! -s conftest.err } && test -s conftest$ac_exeext && { test "$cross_compiling" = yes || $as_test_x conftest$ac_exeext }; then : ac_retval=0 else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 ac_retval=1 fi # Delete the IPA/IPO (Inter Procedural Analysis/Optimization) information # created by the PGI compiler (conftest_ipa8_conftest.oo), as it would # interfere with the next link command; also delete a directory that is # left behind by Apple's compiler. We do this before executing the actions. 
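# (conftest.dSYM is the debug-symbol bundle that Apple's toolchain writes
# next to the test executable when debug information is requested.)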
rm -rf conftest.dSYM conftest_ipa8_conftest.oo eval $as_lineno_stack; ${as_lineno_stack:+:} unset as_lineno as_fn_set_status $ac_retval } # ac_fn_cxx_try_link cat >config.log <<_ACEOF This file contains any messages produced by compilers while running configure, to aid debugging if configure makes a mistake. It was created by Google C++ Testing Framework $as_me 1.7.0, which was generated by GNU Autoconf 2.68. Invocation command line was $ $0 $@ _ACEOF exec 5>>config.log { cat <<_ASUNAME ## --------- ## ## Platform. ## ## --------- ## hostname = `(hostname || uname -n) 2>/dev/null | sed 1q` uname -m = `(uname -m) 2>/dev/null || echo unknown` uname -r = `(uname -r) 2>/dev/null || echo unknown` uname -s = `(uname -s) 2>/dev/null || echo unknown` uname -v = `(uname -v) 2>/dev/null || echo unknown` /usr/bin/uname -p = `(/usr/bin/uname -p) 2>/dev/null || echo unknown` /bin/uname -X = `(/bin/uname -X) 2>/dev/null || echo unknown` /bin/arch = `(/bin/arch) 2>/dev/null || echo unknown` /usr/bin/arch -k = `(/usr/bin/arch -k) 2>/dev/null || echo unknown` /usr/convex/getsysinfo = `(/usr/convex/getsysinfo) 2>/dev/null || echo unknown` /usr/bin/hostinfo = `(/usr/bin/hostinfo) 2>/dev/null || echo unknown` /bin/machine = `(/bin/machine) 2>/dev/null || echo unknown` /usr/bin/oslevel = `(/usr/bin/oslevel) 2>/dev/null || echo unknown` /bin/universe = `(/bin/universe) 2>/dev/null || echo unknown` _ASUNAME as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. $as_echo "PATH: $as_dir" done IFS=$as_save_IFS } >&5 cat >&5 <<_ACEOF ## ----------- ## ## Core tests. ## ## ----------- ## _ACEOF # Keep a trace of the command line. # Strip out --no-create and --no-recursion so they do not pile up. # Strip out --silent because we don't want to record it for future runs. # Also quote any args containing shell meta-characters. # Make two passes to allow for proper duplicate-argument suppression. ac_configure_args= ac_configure_args0= ac_configure_args1= ac_must_keep_next=false for ac_pass in 1 2 do for ac_arg do case $ac_arg in -no-create | --no-c* | -n | -no-recursion | --no-r*) continue ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil) continue ;; *\'*) ac_arg=`$as_echo "$ac_arg" | sed "s/'/'\\\\\\\\''/g"` ;; esac case $ac_pass in 1) as_fn_append ac_configure_args0 " '$ac_arg'" ;; 2) as_fn_append ac_configure_args1 " '$ac_arg'" if test $ac_must_keep_next = true; then ac_must_keep_next=false # Got value, back to normal. else case $ac_arg in *=* | --config-cache | -C | -disable-* | --disable-* \ | -enable-* | --enable-* | -gas | --g* | -nfp | --nf* \ | -q | -quiet | --q* | -silent | --sil* | -v | -verb* \ | -with-* | --with-* | -without-* | --without-* | --x) case "$ac_configure_args0 " in "$ac_configure_args1"*" '$ac_arg' "* ) continue ;; esac ;; -* ) ac_must_keep_next=true ;; esac fi as_fn_append ac_configure_args " '$ac_arg'" ;; esac done done { ac_configure_args0=; unset ac_configure_args0;} { ac_configure_args1=; unset ac_configure_args1;} # When interrupted or exit'd, cleanup temporary files, and complete # config.log. We remove comments because anyway the quotes in there # would cause problems or look ugly. # WARNING: Use '\'' to represent an apostrophe within the trap. # WARNING: Do not start the trap code with a newline, due to a FreeBSD 4.0 bug. trap 'exit_status=$? # Save into config.log some information that might help in debugging. { echo $as_echo "## ---------------- ## ## Cache variables. 
## ## ---------------- ##" echo # The following way of writing the cache mishandles newlines in values, ( for ac_var in `(set) 2>&1 | sed -n '\''s/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'\''`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space='\'' '\''; set) 2>&1` in #( *${as_nl}ac_space=\ *) sed -n \ "s/'\''/'\''\\\\'\'''\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\''\\2'\''/p" ;; #( *) sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) echo $as_echo "## ----------------- ## ## Output variables. ## ## ----------------- ##" echo for ac_var in $ac_subst_vars do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo if test -n "$ac_subst_files"; then $as_echo "## ------------------- ## ## File substitutions. ## ## ------------------- ##" echo for ac_var in $ac_subst_files do eval ac_val=\$$ac_var case $ac_val in *\'\''*) ac_val=`$as_echo "$ac_val" | sed "s/'\''/'\''\\\\\\\\'\'''\''/g"`;; esac $as_echo "$ac_var='\''$ac_val'\''" done | sort echo fi if test -s confdefs.h; then $as_echo "## ----------- ## ## confdefs.h. ## ## ----------- ##" echo cat confdefs.h echo fi test "$ac_signal" != 0 && $as_echo "$as_me: caught signal $ac_signal" $as_echo "$as_me: exit $exit_status" } >&5 rm -f core *.core core.conftest.* && rm -f -r conftest* confdefs* conf$$* $ac_clean_files && exit $exit_status ' 0 for ac_signal in 1 2 13 15; do trap 'ac_signal='$ac_signal'; as_fn_exit 1' $ac_signal done ac_signal=0 # confdefs.h avoids OS command line length limits that DEFS can exceed. rm -f -r conftest* confdefs.h $as_echo "/* confdefs.h */" > confdefs.h # Predefined preprocessor variables. cat >>confdefs.h <<_ACEOF #define PACKAGE_NAME "$PACKAGE_NAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_TARNAME "$PACKAGE_TARNAME" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_VERSION "$PACKAGE_VERSION" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_STRING "$PACKAGE_STRING" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_BUGREPORT "$PACKAGE_BUGREPORT" _ACEOF cat >>confdefs.h <<_ACEOF #define PACKAGE_URL "$PACKAGE_URL" _ACEOF # Let the site file select an alternate cache file if it wants to. # Prefer an explicitly selected file to automatically selected ones. ac_site_file1=NONE ac_site_file2=NONE if test -n "$CONFIG_SITE"; then # We do not want a PATH search for config.site. case $CONFIG_SITE in #(( -*) ac_site_file1=./$CONFIG_SITE;; */*) ac_site_file1=$CONFIG_SITE;; *) ac_site_file1=./$CONFIG_SITE;; esac elif test "x$prefix" != xNONE; then ac_site_file1=$prefix/share/config.site ac_site_file2=$prefix/etc/config.site else ac_site_file1=$ac_default_prefix/share/config.site ac_site_file2=$ac_default_prefix/etc/config.site fi for ac_site_file in "$ac_site_file1" "$ac_site_file2" do test "x$ac_site_file" = xNONE && continue if test /dev/null != "$ac_site_file" && test -r "$ac_site_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading site script $ac_site_file" >&5 $as_echo "$as_me: loading site script $ac_site_file" >&6;} sed 's/^/| /' "$ac_site_file" >&5 . 
"$ac_site_file" \ || { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "failed to load site script $ac_site_file See \`config.log' for more details" "$LINENO" 5; } fi done if test -r "$cache_file"; then # Some versions of bash will fail to source /dev/null (special files # actually), so we avoid doing that. DJGPP emulates it as a regular file. if test /dev/null != "$cache_file" && test -f "$cache_file"; then { $as_echo "$as_me:${as_lineno-$LINENO}: loading cache $cache_file" >&5 $as_echo "$as_me: loading cache $cache_file" >&6;} case $cache_file in [\\/]* | ?:[\\/]* ) . "$cache_file";; *) . "./$cache_file";; esac fi else { $as_echo "$as_me:${as_lineno-$LINENO}: creating cache $cache_file" >&5 $as_echo "$as_me: creating cache $cache_file" >&6;} >$cache_file fi # Check that the precious variables saved in the cache have kept the same # value. ac_cache_corrupted=false for ac_var in $ac_precious_vars; do eval ac_old_set=\$ac_cv_env_${ac_var}_set eval ac_new_set=\$ac_env_${ac_var}_set eval ac_old_val=\$ac_cv_env_${ac_var}_value eval ac_new_val=\$ac_env_${ac_var}_value case $ac_old_set,$ac_new_set in set,) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was set to \`$ac_old_val' in the previous run" >&2;} ac_cache_corrupted=: ;; ,set) { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' was not set in the previous run" >&5 $as_echo "$as_me: error: \`$ac_var' was not set in the previous run" >&2;} ac_cache_corrupted=: ;; ,);; *) if test "x$ac_old_val" != "x$ac_new_val"; then # differences in whitespace do not lead to failure. ac_old_val_w=`echo x $ac_old_val` ac_new_val_w=`echo x $ac_new_val` if test "$ac_old_val_w" != "$ac_new_val_w"; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: \`$ac_var' has changed since the previous run:" >&5 $as_echo "$as_me: error: \`$ac_var' has changed since the previous run:" >&2;} ac_cache_corrupted=: else { $as_echo "$as_me:${as_lineno-$LINENO}: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&5 $as_echo "$as_me: warning: ignoring whitespace changes in \`$ac_var' since the previous run:" >&2;} eval $ac_var=\$ac_old_val fi { $as_echo "$as_me:${as_lineno-$LINENO}: former value: \`$ac_old_val'" >&5 $as_echo "$as_me: former value: \`$ac_old_val'" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: current value: \`$ac_new_val'" >&5 $as_echo "$as_me: current value: \`$ac_new_val'" >&2;} fi;; esac # Pass precious variables to config.status. if test "$ac_new_set" = set; then case $ac_new_val in *\'*) ac_arg=$ac_var=`$as_echo "$ac_new_val" | sed "s/'/'\\\\\\\\''/g"` ;; *) ac_arg=$ac_var=$ac_new_val ;; esac case " $ac_configure_args " in *" '$ac_arg' "*) ;; # Avoid dups. Use of quotes ensures accuracy. *) as_fn_append ac_configure_args " '$ac_arg'" ;; esac fi done if $ac_cache_corrupted; then { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} { $as_echo "$as_me:${as_lineno-$LINENO}: error: changes in the environment can compromise the build" >&5 $as_echo "$as_me: error: changes in the environment can compromise the build" >&2;} as_fn_error $? "run \`make distclean' and/or \`rm $cache_file' and start over" "$LINENO" 5 fi ## -------------------- ## ## Main body of script. 
## ## -------------------- ## ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Provide various options to initialize the Autoconf and configure processes. ac_aux_dir= for ac_dir in build-aux "$srcdir"/build-aux; do if test -f "$ac_dir/install-sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install-sh -c" break elif test -f "$ac_dir/install.sh"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/install.sh -c" break elif test -f "$ac_dir/shtool"; then ac_aux_dir=$ac_dir ac_install_sh="$ac_aux_dir/shtool install -c" break fi done if test -z "$ac_aux_dir"; then as_fn_error $? "cannot find install-sh, install.sh, or shtool in build-aux \"$srcdir\"/build-aux" "$LINENO" 5 fi # These three variables are undocumented and unsupported, # and are intended to be withdrawn in a future Autoconf release. # They can cause serious problems if a builder's source tree is in a directory # whose full name contains unusual characters. ac_config_guess="$SHELL $ac_aux_dir/config.guess" # Please don't use this var. ac_config_sub="$SHELL $ac_aux_dir/config.sub" # Please don't use this var. ac_configure="$SHELL $ac_aux_dir/configure" # Please don't use this var. ac_config_headers="$ac_config_headers build-aux/config.h" ac_config_files="$ac_config_files Makefile" ac_config_files="$ac_config_files scripts/gtest-config" # Initialize Automake with various options. We require at least v1.9, prevent # pedantic complaints about package files, and enable various distribution # targets. am__api_version='1.11' # Find a good install program. We prefer a C program (faster), # so one script is as good as another. But avoid the broken or # incompatible versions: # SysV /etc/install, /usr/sbin/install # SunOS /usr/etc/install # IRIX /sbin/install # AIX /bin/install # AmigaOS /C/install, which installs bootblocks on floppy discs # AIX 4 /usr/bin/installbsd, which doesn't work without a -g flag # AFS /usr/afsws/bin/install, which mishandles nonexistent args # SVR4 /usr/ucb/install, which tries to use the nonexistent group "staff" # OS/2's system install, which has a completely different semantic # ./install, which can be erroneously created by make from ./install.sh. # Reject install programs that cannot install multiple files. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a BSD-compatible install" >&5 $as_echo_n "checking for a BSD-compatible install... " >&6; } if test -z "$INSTALL"; then if ${ac_cv_path_install+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. # Account for people who put trailing slashes in PATH elements. case $as_dir/ in #(( ./ | .// | /[cC]/* | \ /etc/* | /usr/sbin/* | /usr/etc/* | /sbin/* | /usr/afsws/bin/* | \ ?:[\\/]os2[\\/]install[\\/]* | ?:[\\/]OS2[\\/]INSTALL[\\/]* | \ /usr/ucb/* ) ;; *) # OSF1 and SCO ODT 3.0 have their own names for install. # Don't use installbsd from OSF since it installs stuff as root # by default. for ac_prog in ginstall scoinst install; do for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; }; then if test $ac_prog = install && grep dspmsg "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # AIX install. It has an incompatible calling convention. 
: elif test $ac_prog = install && grep pwplus "$as_dir/$ac_prog$ac_exec_ext" >/dev/null 2>&1; then # program-specific install script used by HP pwplus--don't use. : else rm -rf conftest.one conftest.two conftest.dir echo one > conftest.one echo two > conftest.two mkdir conftest.dir if "$as_dir/$ac_prog$ac_exec_ext" -c conftest.one conftest.two "`pwd`/conftest.dir" && test -s conftest.one && test -s conftest.two && test -s conftest.dir/conftest.one && test -s conftest.dir/conftest.two then ac_cv_path_install="$as_dir/$ac_prog$ac_exec_ext -c" break 3 fi fi fi done done ;; esac done IFS=$as_save_IFS rm -rf conftest.one conftest.two conftest.dir fi if test "${ac_cv_path_install+set}" = set; then INSTALL=$ac_cv_path_install else # As a last resort, use the slow shell script. Don't cache a # value for INSTALL within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. INSTALL=$ac_install_sh fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $INSTALL" >&5 $as_echo "$INSTALL" >&6; } # Use test -z because SunOS4 sh mishandles braces in ${var-val}. # It thinks the first close brace ends the variable substitution. test -z "$INSTALL_PROGRAM" && INSTALL_PROGRAM='${INSTALL}' test -z "$INSTALL_SCRIPT" && INSTALL_SCRIPT='${INSTALL}' test -z "$INSTALL_DATA" && INSTALL_DATA='${INSTALL} -m 644' { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether build environment is sane" >&5 $as_echo_n "checking whether build environment is sane... " >&6; } # Just in case sleep 1 echo timestamp > conftest.file # Reject unsafe characters in $srcdir or the absolute working directory # name. Accept space and tab only in the latter. am_lf=' ' case `pwd` in *[\\\"\#\$\&\'\`$am_lf]*) as_fn_error $? "unsafe absolute working directory name" "$LINENO" 5;; esac case $srcdir in *[\\\"\#\$\&\'\`$am_lf\ \ ]*) as_fn_error $? "unsafe srcdir value: \`$srcdir'" "$LINENO" 5;; esac # Do `set' in a subshell so we don't clobber the current shell's # arguments. Must try -L first in case configure is actually a # symlink; some systems play weird games with the mod time of symlinks # (eg FreeBSD returns the mod time of the symlink's containing # directory). if ( set X `ls -Lt "$srcdir/configure" conftest.file 2> /dev/null` if test "$*" = "X"; then # -L didn't work. set X `ls -t "$srcdir/configure" conftest.file` fi rm -f conftest.file if test "$*" != "X $srcdir/configure conftest.file" \ && test "$*" != "X conftest.file $srcdir/configure"; then # If neither matched, then we have a broken ls. This can happen # if, for instance, CONFIG_SHELL is bash and it inherits a # broken ls alias from the environment. This has actually # happened. Such a system could not be considered "sane". as_fn_error $? "ls -t appears to fail. Make sure there is not a broken alias in your environment" "$LINENO" 5 fi test "$2" = conftest.file ) then # Ok. : else as_fn_error $? "newly created file is older than distributed files! Check your system clock" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } test "$program_prefix" != NONE && program_transform_name="s&^&$program_prefix&;$program_transform_name" # Use a double $ so make ignores it. test "$program_suffix" != NONE && program_transform_name="s&\$&$program_suffix&;$program_transform_name" # Double any \ or $. # By default was `s,x,x', remove it if useless. 
ac_script='s/[\\$]/&&/g;s/;s,x,x,$//' program_transform_name=`$as_echo "$program_transform_name" | sed "$ac_script"` # expand $ac_aux_dir to an absolute path am_aux_dir=`cd $ac_aux_dir && pwd` if test x"${MISSING+set}" != xset; then case $am_aux_dir in *\ * | *\ *) MISSING="\${SHELL} \"$am_aux_dir/missing\"" ;; *) MISSING="\${SHELL} $am_aux_dir/missing" ;; esac fi # Use eval to expand $SHELL if eval "$MISSING --run true"; then am_missing_run="$MISSING --run " else am_missing_run= { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`missing' script is too old or missing" >&5 $as_echo "$as_me: WARNING: \`missing' script is too old or missing" >&2;} fi if test x"${install_sh}" != xset; then case $am_aux_dir in *\ * | *\ *) install_sh="\${SHELL} '$am_aux_dir/install-sh'" ;; *) install_sh="\${SHELL} $am_aux_dir/install-sh" esac fi # Installed binaries are usually stripped using `strip' when the user # run `make install-strip'. However `strip' might not be the right # tool to use in cross-compilation environments, therefore Automake # will honor the `STRIP' environment variable to overrule this program. if test "$cross_compiling" != no; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi fi INSTALL_STRIP_PROGRAM="\$(install_sh) -c -s" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a thread-safe mkdir -p" >&5 $as_echo_n "checking for a thread-safe mkdir -p... " >&6; } if test -z "$MKDIR_P"; then if ${ac_cv_path_mkdir+:} false; then : $as_echo_n "(cached) " >&6 else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/opt/sfw/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in mkdir gmkdir; do for ac_exec_ext in '' $ac_executable_extensions; do { test -f "$as_dir/$ac_prog$ac_exec_ext" && $as_test_x "$as_dir/$ac_prog$ac_exec_ext"; } || continue case `"$as_dir/$ac_prog$ac_exec_ext" --version 2>&1` in #( 'mkdir (GNU coreutils) '* | \ 'mkdir (coreutils) '* | \ 'mkdir (fileutils) '4.1*) ac_cv_path_mkdir=$as_dir/$ac_prog$ac_exec_ext break 3;; esac done done done IFS=$as_save_IFS fi test -d ./--version && rmdir ./--version if test "${ac_cv_path_mkdir+set}" = set; then MKDIR_P="$ac_cv_path_mkdir -p" else # As a last resort, use the slow shell script. Don't cache a # value for MKDIR_P within a source directory, because that will # break other packages using the cache if that directory is # removed, or if the value is a relative name. MKDIR_P="$ac_install_sh -d" fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MKDIR_P" >&5 $as_echo "$MKDIR_P" >&6; } mkdir_p="$MKDIR_P" case $mkdir_p in [\\/$]* | ?:[\\/]*) ;; */*) mkdir_p="\$(top_builddir)/$mkdir_p" ;; esac for ac_prog in gawk mawk nawk awk do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AWK+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AWK"; then ac_cv_prog_AWK="$AWK" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_AWK="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AWK=$ac_cv_prog_AWK if test -n "$AWK"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AWK" >&5 $as_echo "$AWK" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AWK" && break done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ${MAKE-make} sets \$(MAKE)" >&5 $as_echo_n "checking whether ${MAKE-make} sets \$(MAKE)... " >&6; } set x ${MAKE-make} ac_make=`$as_echo "$2" | sed 's/+/p/g; s/[^a-zA-Z0-9_]/_/g'` if eval \${ac_cv_prog_make_${ac_make}_set+:} false; then : $as_echo_n "(cached) " >&6 else cat >conftest.make <<\_ACEOF SHELL = /bin/sh all: @echo '@@@%%%=$(MAKE)=@@@%%%' _ACEOF # GNU make sometimes prints "make[1]: Entering ...", which would confuse us. case `${MAKE-make} -f conftest.make 2>/dev/null` in *@@@%%%=?*=@@@%%%*) eval ac_cv_prog_make_${ac_make}_set=yes;; *) eval ac_cv_prog_make_${ac_make}_set=no;; esac rm -f conftest.make fi if eval test \$ac_cv_prog_make_${ac_make}_set = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } SET_MAKE= else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } SET_MAKE="MAKE=${MAKE-make}" fi rm -rf .tst 2>/dev/null mkdir .tst 2>/dev/null if test -d .tst; then am__leading_dot=. else am__leading_dot=_ fi rmdir .tst 2>/dev/null if test "`cd $srcdir && pwd`" != "`pwd`"; then # Use -I$(srcdir) only when $(srcdir) != ., so that make's output # is not polluted with repeated "-I." am__isrc=' -I$(srcdir)' # test to see if srcdir already configured if test -f $srcdir/config.status; then as_fn_error $? "source directory already configured; run \"make distclean\" there first" "$LINENO" 5 fi fi # test whether we have cygpath if test -z "$CYGPATH_W"; then if (cygpath --version) >/dev/null 2>/dev/null; then CYGPATH_W='cygpath -w' else CYGPATH_W=echo fi fi # Define the identity of the package. PACKAGE='gtest' VERSION='1.7.0' cat >>confdefs.h <<_ACEOF #define PACKAGE "$PACKAGE" _ACEOF cat >>confdefs.h <<_ACEOF #define VERSION "$VERSION" _ACEOF # Some tools Automake needs. ACLOCAL=${ACLOCAL-"${am_missing_run}aclocal-${am__api_version}"} AUTOCONF=${AUTOCONF-"${am_missing_run}autoconf"} AUTOMAKE=${AUTOMAKE-"${am_missing_run}automake-${am__api_version}"} AUTOHEADER=${AUTOHEADER-"${am_missing_run}autoheader"} MAKEINFO=${MAKEINFO-"${am_missing_run}makeinfo"} # We need awk for the "check" target. The system "awk" is bad on # some platforms. # Always define AMTAR for backward compatibility. Yes, it's still used # in the wild :-( We should find a proper way to deprecate it ... AMTAR='$${TAR-tar}' am__tar='$${TAR-tar} chof - "$$tardir"' am__untar='$${TAR-tar} xf -' # Check for programs used in building Google Test. ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}gcc", so it can be a program name with args. set dummy ${ac_tool_prefix}gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="${ac_tool_prefix}gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_CC"; then ac_ct_CC=$CC # Extract the first word of "gcc", so it can be a program name with args. set dummy gcc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CC="gcc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi else CC="$ac_cv_prog_CC" fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}cc", so it can be a program name with args. set dummy ${ac_tool_prefix}cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="${ac_tool_prefix}cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi fi if test -z "$CC"; then # Extract the first word of "cc", so it can be a program name with args. set dummy cc; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else ac_prog_rejected=no as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then if test "$as_dir/$ac_word$ac_exec_ext" = "/usr/ucb/cc"; then ac_prog_rejected=yes continue fi ac_cv_prog_CC="cc" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS if test $ac_prog_rejected = yes; then # We found a bogon in the path, so make sure we never use it. set dummy $ac_cv_prog_CC shift if test $# != 0; then # We chose a different compiler from the bogus one. # However, it has the same basename, so the bogon will be chosen # first if we set CC to just the basename; use the full file name. shift ac_cv_prog_CC="$as_dir/$ac_word${1+' '}$@" fi fi fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$CC"; then if test -n "$ac_tool_prefix"; then for ac_prog in cl.exe do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CC"; then ac_cv_prog_CC="$CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CC="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CC=$ac_cv_prog_CC if test -n "$CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CC" >&5 $as_echo "$CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CC" && break done fi if test -z "$CC"; then ac_ct_CC=$CC for ac_prog in cl.exe do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CC"; then ac_cv_prog_ac_ct_CC="$ac_ct_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CC=$ac_cv_prog_ac_ct_CC if test -n "$ac_ct_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CC" >&5 $as_echo "$ac_ct_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CC" && break done if test "x$ac_ct_CC" = x; then CC="" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CC=$ac_ct_CC fi fi fi test -z "$CC" && { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "no acceptable C compiler found in \$PATH See \`config.log' for more details" "$LINENO" 5; } # Provide some information about the compiler. $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files a.out a.out.dSYM a.exe b.out" # Try to create an executable without -o first, disregard a.out. # It will help us diagnose broken compilers, and finding out an intuition # of exeext. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler works" >&5 $as_echo_n "checking whether the C compiler works... " >&6; } ac_link_default=`$as_echo "$ac_link" | sed 's/ -o *conftest[^ ]*//'` # The possible output files: ac_files="a.out conftest.exe conftest a.exe a_out.exe b.out conftest.*" ac_rmfiles= for ac_file in $ac_files do case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; * ) ac_rmfiles="$ac_rmfiles $ac_file";; esac done rm -f $ac_rmfiles if { { ac_try="$ac_link_default" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link_default") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # Autoconf-2.13 could set the ac_cv_exeext variable to `no'. # So ignore a value of `no', otherwise this would lead to `EXEEXT = no' # in a Makefile. We should not override ac_cv_exeext if it was cached, # so that the user can short-circuit this test for compilers unknown to # Autoconf. 
for ac_file in $ac_files '' do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; [ab].out ) # We found the default executable, but exeext='' is most # certainly right. break;; *.* ) if test "${ac_cv_exeext+set}" = set && test "$ac_cv_exeext" != no; then :; else ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` fi # We set ac_cv_exeext here because the later test for it is not # safe: cross compilers may not add the suffix if given an `-o' # argument, so we may need to know it at that point already. # Even if this section looks crufty: it has the advantage of # actually working. break;; * ) break;; esac done test "$ac_cv_exeext" = no && ac_cv_exeext= else ac_file='' fi if test -z "$ac_file"; then : { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error 77 "C compiler cannot create executables See \`config.log' for more details" "$LINENO" 5; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for C compiler default output file name" >&5 $as_echo_n "checking for C compiler default output file name... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_file" >&5 $as_echo "$ac_file" >&6; } ac_exeext=$ac_cv_exeext rm -f -r a.out a.out.dSYM a.exe conftest$ac_cv_exeext b.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of executables" >&5 $as_echo_n "checking for suffix of executables... " >&6; } if { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : # If both `conftest.exe' and `conftest' are `present' (well, observable) # catch `conftest.exe'. For instance with Cygwin, `ls conftest' will # work properly (i.e., refer to `conftest.exe'), while it won't with # `rm'. for ac_file in conftest.exe conftest conftest.*; do test -f "$ac_file" || continue case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM | *.o | *.obj ) ;; *.* ) ac_cv_exeext=`expr "$ac_file" : '[^.]*\(\..*\)'` break;; * ) break;; esac done else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of executables: cannot compile and link See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest conftest$ac_cv_exeext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_exeext" >&5 $as_echo "$ac_cv_exeext" >&6; } rm -f conftest.$ac_ext EXEEXT=$ac_cv_exeext ac_exeext=$EXEEXT cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { FILE *f = fopen ("conftest.out", "w"); return ferror (f) || fclose (f) != 0; ; return 0; } _ACEOF ac_clean_files="$ac_clean_files conftest.out" # Check that the compiler produces executables we can run. If not, either # the compiler is broken, or we cross compile. 
{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are cross compiling" >&5 $as_echo_n "checking whether we are cross compiling... " >&6; } if test "$cross_compiling" != yes; then { { ac_try="$ac_link" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_link") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if { ac_try='./conftest$ac_cv_exeext' { { case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_try") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; }; then cross_compiling=no else if test "$cross_compiling" = maybe; then cross_compiling=yes else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot run C compiled programs. If you meant to cross compile, use \`--host'. See \`config.log' for more details" "$LINENO" 5; } fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $cross_compiling" >&5 $as_echo "$cross_compiling" >&6; } rm -f conftest.$ac_ext conftest$ac_cv_exeext conftest.out ac_clean_files=$ac_clean_files_save { $as_echo "$as_me:${as_lineno-$LINENO}: checking for suffix of object files" >&5 $as_echo_n "checking for suffix of object files... " >&6; } if ${ac_cv_objext+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF rm -f conftest.o conftest.obj if { { ac_try="$ac_compile" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compile") 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then : for ac_file in conftest.o conftest.obj conftest.*; do test -f "$ac_file" || continue; case $ac_file in *.$ac_ext | *.xcoff | *.tds | *.d | *.pdb | *.xSYM | *.bb | *.bbg | *.map | *.inf | *.dSYM ) ;; *) ac_cv_objext=`expr "$ac_file" : '.*\.\(.*\)'` break;; esac done else $as_echo "$as_me: failed program was:" >&5 sed 's/^/| /' conftest.$ac_ext >&5 { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "cannot compute suffix of object files: cannot compile See \`config.log' for more details" "$LINENO" 5; } fi rm -f conftest.$ac_cv_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_objext" >&5 $as_echo "$ac_cv_objext" >&6; } OBJEXT=$ac_cv_objext ac_objext=$OBJEXT { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C compiler" >&5 $as_echo_n "checking whether we are using the GNU C compiler... " >&6; } if ${ac_cv_c_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_c_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_c_compiler_gnu" >&5 $as_echo "$ac_cv_c_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GCC=yes else GCC= fi ac_test_CFLAGS=${CFLAGS+set} ac_save_CFLAGS=$CFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CC accepts -g" >&5 $as_echo_n "checking whether $CC accepts -g... " >&6; } if ${ac_cv_prog_cc_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_c_werror_flag=$ac_c_werror_flag ac_c_werror_flag=yes ac_cv_prog_cc_g=no CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes else CFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : else ac_c_werror_flag=$ac_save_c_werror_flag CFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_c_werror_flag=$ac_save_c_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_g" >&5 $as_echo "$ac_cv_prog_cc_g" >&6; } if test "$ac_test_CFLAGS" = set; then CFLAGS=$ac_save_CFLAGS elif test $ac_cv_prog_cc_g = yes; then if test "$GCC" = yes; then CFLAGS="-g -O2" else CFLAGS="-g" fi else if test "$GCC" = yes; then CFLAGS="-O2" else CFLAGS= fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $CC option to accept ISO C89" >&5 $as_echo_n "checking for $CC option to accept ISO C89... " >&6; } if ${ac_cv_prog_cc_c89+:} false; then : $as_echo_n "(cached) " >&6 else ac_cv_prog_cc_c89=no ac_save_CC=$CC cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include #include /* Most of the following tests are stolen from RCS 5.7's src/conf.sh. */ struct buf { int x; }; FILE * (*rcsopen) (struct buf *, struct stat *, int); static char *e (p, i) char **p; int i; { return p[i]; } static char *f (char * (*g) (char **, int), char **p, ...) { char *s; va_list v; va_start (v,p); s = g (p, va_arg (v,int)); va_end (v); return s; } /* OSF 4.0 Compaq cc is some sort of almost-ANSI by default. It has function prototypes and stuff, but not '\xHH' hex character constants. These don't provoke an error unfortunately, instead are silently treated as 'x'. The following induces an error, until -std is added to get proper ANSI mode. Curiously '\x00'!='x' always comes out true, for an array size at least. It's necessary to write '\x00'==0 to get something that's true only with -std. */ int osf4_cc_array ['\x00' == 0 ? 1 : -1]; /* IBM C 6 for AIX is almost-ANSI by default, but it replaces macro parameters inside strings and character constants. */ #define FOO(x) 'x' int xlc6_cc_array[FOO(a) == 'x' ? 
1 : -1]; int test (int i, double x); struct s1 {int (*f) (int a);}; struct s2 {int (*f) (double a);}; int pairnames (int, char **, FILE *(*)(struct buf *, struct stat *, int), int, int); int argc; char **argv; int main () { return f (e, argv, 0) != argv[0] || f (e, argv, 1) != argv[1]; ; return 0; } _ACEOF for ac_arg in '' -qlanglvl=extc89 -qlanglvl=ansi -std \ -Ae "-Aa -D_HPUX_SOURCE" "-Xc -D__EXTENSIONS__" do CC="$ac_save_CC $ac_arg" if ac_fn_c_try_compile "$LINENO"; then : ac_cv_prog_cc_c89=$ac_arg fi rm -f core conftest.err conftest.$ac_objext test "x$ac_cv_prog_cc_c89" != "xno" && break done rm -f conftest.$ac_ext CC=$ac_save_CC fi # AC_CACHE_VAL case "x$ac_cv_prog_cc_c89" in x) { $as_echo "$as_me:${as_lineno-$LINENO}: result: none needed" >&5 $as_echo "none needed" >&6; } ;; xno) { $as_echo "$as_me:${as_lineno-$LINENO}: result: unsupported" >&5 $as_echo "unsupported" >&6; } ;; *) CC="$CC $ac_cv_prog_cc_c89" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cc_c89" >&5 $as_echo "$ac_cv_prog_cc_c89" >&6; } ;; esac if test "x$ac_cv_prog_cc_c89" != xno; then : fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu DEPDIR="${am__leading_dot}deps" ac_config_commands="$ac_config_commands depfiles" am_make=${MAKE-make} cat > confinc << 'END' am__doit: @echo this is the am__doit target .PHONY: am__doit END # If we don't find an include directive, just comment out the code. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for style of include used by $am_make" >&5 $as_echo_n "checking for style of include used by $am_make... " >&6; } am__include="#" am__quote= _am_result=none # First try GNU make style include. echo "include confinc" > confmf # Ignore all kinds of additional output from `make'. case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=include am__quote= _am_result=GNU ;; esac # Now try BSD make style include. if test "$am__include" = "#"; then echo '.include "confinc"' > confmf case `$am_make -s -f confmf 2> /dev/null` in #( *the\ am__doit\ target*) am__include=.include am__quote="\"" _am_result=BSD ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $_am_result" >&5 $as_echo "$_am_result" >&6; } rm -f confinc confmf # Check whether --enable-dependency-tracking was given. if test "${enable_dependency_tracking+set}" = set; then : enableval=$enable_dependency_tracking; fi if test "x$enable_dependency_tracking" != xno; then am_depcomp="$ac_aux_dir/depcomp" AMDEPBACKSLASH='\' am__nodep='_no' fi if test "x$enable_dependency_tracking" != xno; then AMDEP_TRUE= AMDEP_FALSE='#' else AMDEP_TRUE='#' AMDEP_FALSE= fi depcc="$CC" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... " >&6; } if ${am_cv_CC_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. 
cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CC_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. # When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CC_dependencies_compiler_type=$depmode break fi fi done cd .. 
rm -rf conftest.dir else am_cv_CC_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CC_dependencies_compiler_type" >&5 $as_echo "$am_cv_CC_dependencies_compiler_type" >&6; } CCDEPMODE=depmode=$am_cv_CC_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CC_dependencies_compiler_type" = gcc3; then am__fastdepCC_TRUE= am__fastdepCC_FALSE='#' else am__fastdepCC_TRUE='#' am__fastdepCC_FALSE= fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu if test -z "$CXX"; then if test -n "$CCC"; then CXX=$CCC else if test -n "$ac_tool_prefix"; then for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$CXX"; then ac_cv_prog_CXX="$CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_CXX="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi CXX=$ac_cv_prog_CXX if test -n "$CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXX" >&5 $as_echo "$CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$CXX" && break done fi if test -z "$CXX"; then ac_ct_CXX=$CXX for ac_prog in g++ c++ gpp aCC CC cxx cc++ cl.exe FCC KCC RCC xlC_r xlC do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_CXX+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_CXX"; then ac_cv_prog_ac_ct_CXX="$ac_ct_CXX" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_CXX="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_CXX=$ac_cv_prog_ac_ct_CXX if test -n "$ac_ct_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_CXX" >&5 $as_echo "$ac_ct_CXX" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_CXX" && break done if test "x$ac_ct_CXX" = x; then CXX="g++" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac CXX=$ac_ct_CXX fi fi fi fi # Provide some information about the compiler. 
$as_echo "$as_me:${as_lineno-$LINENO}: checking for C++ compiler version" >&5 set X $ac_compile ac_compiler=$2 for ac_option in --version -v -V -qversion; do { { ac_try="$ac_compiler $ac_option >&5" case "(($ac_try" in *\"* | *\`* | *\\*) ac_try_echo=\$ac_try;; *) ac_try_echo=$ac_try;; esac eval ac_try_echo="\"\$as_me:${as_lineno-$LINENO}: $ac_try_echo\"" $as_echo "$ac_try_echo"; } >&5 (eval "$ac_compiler $ac_option >&5") 2>conftest.err ac_status=$? if test -s conftest.err; then sed '10a\ ... rest of stderr output deleted ... 10q' conftest.err >conftest.er1 cat conftest.er1 >&5 fi rm -f conftest.er1 conftest.err $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } done { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether we are using the GNU C++ compiler" >&5 $as_echo_n "checking whether we are using the GNU C++ compiler... " >&6; } if ${ac_cv_cxx_compiler_gnu+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { #ifndef __GNUC__ choke me #endif ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_compiler_gnu=yes else ac_compiler_gnu=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cv_cxx_compiler_gnu=$ac_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_cxx_compiler_gnu" >&5 $as_echo "$ac_cv_cxx_compiler_gnu" >&6; } if test $ac_compiler_gnu = yes; then GXX=yes else GXX= fi ac_test_CXXFLAGS=${CXXFLAGS+set} ac_save_CXXFLAGS=$CXXFLAGS { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether $CXX accepts -g" >&5 $as_echo_n "checking whether $CXX accepts -g... " >&6; } if ${ac_cv_prog_cxx_g+:} false; then : $as_echo_n "(cached) " >&6 else ac_save_cxx_werror_flag=$ac_cxx_werror_flag ac_cxx_werror_flag=yes ac_cv_prog_cxx_g=no CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes else CXXFLAGS="" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : else ac_cxx_werror_flag=$ac_save_cxx_werror_flag CXXFLAGS="-g" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : ac_cv_prog_cxx_g=yes fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext ac_cxx_werror_flag=$ac_save_cxx_werror_flag fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_prog_cxx_g" >&5 $as_echo "$ac_cv_prog_cxx_g" >&6; } if test "$ac_test_CXXFLAGS" = set; then CXXFLAGS=$ac_save_CXXFLAGS elif test $ac_cv_prog_cxx_g = yes; then if test "$GXX" = yes; then CXXFLAGS="-g -O2" else CXXFLAGS="-g" fi else if test "$GXX" = yes; then CXXFLAGS="-O2" else CXXFLAGS= fi fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu depcc="$CXX" am_compiler_list= { $as_echo "$as_me:${as_lineno-$LINENO}: checking dependency style of $depcc" >&5 $as_echo_n "checking dependency style of $depcc... 
" >&6; } if ${am_cv_CXX_dependencies_compiler_type+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$AMDEP_TRUE" && test -f "$am_depcomp"; then # We make a subdir and do the tests there. Otherwise we can end up # making bogus files that we don't know about and never remove. For # instance it was reported that on HP-UX the gcc test will end up # making a dummy file named `D' -- because `-MD' means `put the output # in D'. rm -rf conftest.dir mkdir conftest.dir # Copy depcomp to subdir because otherwise we won't find it if we're # using a relative directory. cp "$am_depcomp" conftest.dir cd conftest.dir # We will build objects and dependencies in a subdirectory because # it helps to detect inapplicable dependency modes. For instance # both Tru64's cc and ICC support -MD to output dependencies as a # side effect of compilation, but ICC will put the dependencies in # the current directory while Tru64 will put them in the object # directory. mkdir sub am_cv_CXX_dependencies_compiler_type=none if test "$am_compiler_list" = ""; then am_compiler_list=`sed -n 's/^#*\([a-zA-Z0-9]*\))$/\1/p' < ./depcomp` fi am__universal=false case " $depcc " in #( *\ -arch\ *\ -arch\ *) am__universal=true ;; esac for depmode in $am_compiler_list; do # Setup a source with many dependencies, because some compilers # like to wrap large dependency lists on column 80 (with \), and # we should not choose a depcomp mode which is confused by this. # # We need to recreate these files for each test, as the compiler may # overwrite some of them when testing with obscure command lines. # This happens at least with the AIX C compiler. : > sub/conftest.c for i in 1 2 3 4 5 6; do echo '#include "conftst'$i'.h"' >> sub/conftest.c # Using `: > sub/conftst$i.h' creates only sub/conftst1.h with # Solaris 8's {/usr,}/bin/sh. touch sub/conftst$i.h done echo "${am__include} ${am__quote}sub/conftest.Po${am__quote}" > confmf # We check with `-c' and `-o' for the sake of the "dashmstdout" # mode. It turns out that the SunPro C++ compiler does not properly # handle `-M -o', and we need to detect this. Also, some Intel # versions had trouble with output in subdirs am__obj=sub/conftest.${OBJEXT-o} am__minus_obj="-o $am__obj" case $depmode in gcc) # This depmode causes a compiler race in universal mode. test "$am__universal" = false || continue ;; nosideeffect) # after this tag, mechanisms are not by side-effect, so they'll # only be used when explicitly requested if test "x$enable_dependency_tracking" = xyes; then continue else break fi ;; msvc7 | msvc7msys | msvisualcpp | msvcmsys) # This compiler won't grok `-c -o', but also, the minuso test has # not run yet. These depmodes are late enough in the game, and # so weak that their functioning should not be impacted. am__obj=conftest.${OBJEXT-o} am__minus_obj= ;; none) break ;; esac if depmode=$depmode \ source=sub/conftest.c object=$am__obj \ depfile=sub/conftest.Po tmpdepfile=sub/conftest.TPo \ $SHELL ./depcomp $depcc -c $am__minus_obj sub/conftest.c \ >/dev/null 2>conftest.err && grep sub/conftst1.h sub/conftest.Po > /dev/null 2>&1 && grep sub/conftst6.h sub/conftest.Po > /dev/null 2>&1 && grep $am__obj sub/conftest.Po > /dev/null 2>&1 && ${MAKE-make} -s -f confmf > /dev/null 2>&1; then # icc doesn't choke on unknown options, it will just issue warnings # or remarks (even with -Werror). So we grep stderr for any message # that says an option was ignored or not supported. 
# When given -MP, icc 7.0 and 7.1 complain thusly: # icc: Command line warning: ignoring option '-M'; no argument required # The diagnosis changed in icc 8.0: # icc: Command line remark: option '-MP' not supported if (grep 'ignoring option' conftest.err || grep 'not supported' conftest.err) >/dev/null 2>&1; then :; else am_cv_CXX_dependencies_compiler_type=$depmode break fi fi done cd .. rm -rf conftest.dir else am_cv_CXX_dependencies_compiler_type=none fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $am_cv_CXX_dependencies_compiler_type" >&5 $as_echo "$am_cv_CXX_dependencies_compiler_type" >&6; } CXXDEPMODE=depmode=$am_cv_CXX_dependencies_compiler_type if test "x$enable_dependency_tracking" != xno \ && test "$am_cv_CXX_dependencies_compiler_type" = gcc3; then am__fastdepCXX_TRUE= am__fastdepCXX_FALSE='#' else am__fastdepCXX_TRUE='#' am__fastdepCXX_FALSE= fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu case `pwd` in *\ * | *\ *) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&5 $as_echo "$as_me: WARNING: Libtool does not cope well with whitespace in \`pwd\`" >&2;} ;; esac macro_version='2.4.2' macro_revision='1.3337' ltmain="$ac_aux_dir/ltmain.sh" # Make sure we can run config.sub. $SHELL "$ac_aux_dir/config.sub" sun4 >/dev/null 2>&1 || as_fn_error $? "cannot run $SHELL $ac_aux_dir/config.sub" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking build system type" >&5 $as_echo_n "checking build system type... " >&6; } if ${ac_cv_build+:} false; then : $as_echo_n "(cached) " >&6 else ac_build_alias=$build_alias test "x$ac_build_alias" = x && ac_build_alias=`$SHELL "$ac_aux_dir/config.guess"` test "x$ac_build_alias" = x && as_fn_error $? "cannot guess build type; you must specify one" "$LINENO" 5 ac_cv_build=`$SHELL "$ac_aux_dir/config.sub" $ac_build_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $ac_build_alias failed" "$LINENO" 5 fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_build" >&5 $as_echo "$ac_cv_build" >&6; } case $ac_cv_build in *-*-*) ;; *) as_fn_error $? "invalid value of canonical build" "$LINENO" 5;; esac build=$ac_cv_build ac_save_IFS=$IFS; IFS='-' set x $ac_cv_build shift build_cpu=$1 build_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: build_os=$* IFS=$ac_save_IFS case $build_os in *\ *) build_os=`echo "$build_os" | sed 's/ /-/g'`;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking host system type" >&5 $as_echo_n "checking host system type... " >&6; } if ${ac_cv_host+:} false; then : $as_echo_n "(cached) " >&6 else if test "x$host_alias" = x; then ac_cv_host=$ac_cv_build else ac_cv_host=`$SHELL "$ac_aux_dir/config.sub" $host_alias` || as_fn_error $? "$SHELL $ac_aux_dir/config.sub $host_alias failed" "$LINENO" 5 fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_host" >&5 $as_echo "$ac_cv_host" >&6; } case $ac_cv_host in *-*-*) ;; *) as_fn_error $? 
"invalid value of canonical host" "$LINENO" 5;; esac host=$ac_cv_host ac_save_IFS=$IFS; IFS='-' set x $ac_cv_host shift host_cpu=$1 host_vendor=$2 shift; shift # Remember, the first character of IFS is used to create $*, # except with old shells: host_os=$* IFS=$ac_save_IFS case $host_os in *\ *) host_os=`echo "$host_os" | sed 's/ /-/g'`;; esac # Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\(["`$\\]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\(["`\\]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5 $as_echo_n "checking how to print strings... " >&6; } # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { $ECHO "" } case "$ECHO" in printf*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: printf" >&5 $as_echo "printf" >&6; } ;; print*) { $as_echo "$as_me:${as_lineno-$LINENO}: result: print -r" >&5 $as_echo "print -r" >&6; } ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: cat" >&5 $as_echo "cat" >&6; } ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for a sed that does not truncate output" >&5 $as_echo_n "checking for a sed that does not truncate output... " >&6; } if ${ac_cv_path_SED+:} false; then : $as_echo_n "(cached) " >&6 else ac_script=s/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa/bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb/ for ac_i in 1 2 3 4 5 6 7; do ac_script="$ac_script$as_nl$ac_script" done echo "$ac_script" 2>/dev/null | sed 99q >conftest.sed { ac_script=; unset ac_script;} if test -z "$SED"; then ac_path_SED_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_SED="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_SED" && $as_test_x "$ac_path_SED"; } || continue # Check for GNU ac_path_SED and select it if it is found. 
# Check for GNU $ac_path_SED case `"$ac_path_SED" --version 2>&1` in *GNU*) ac_cv_path_SED="$ac_path_SED" ac_path_SED_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo '' >> "conftest.nl" "$ac_path_SED" -f conftest.sed < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_SED_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_SED="$ac_path_SED" ac_path_SED_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_SED_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_SED"; then as_fn_error $? "no acceptable sed could be found in \$PATH" "$LINENO" 5 fi else ac_cv_path_SED=$SED fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_SED" >&5 $as_echo "$ac_cv_path_SED" >&6; } SED="$ac_cv_path_SED" rm -f conftest.sed test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for grep that handles long lines and -e" >&5 $as_echo_n "checking for grep that handles long lines and -e... " >&6; } if ${ac_cv_path_GREP+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$GREP"; then ac_path_GREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in grep ggrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_GREP="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_GREP" && $as_test_x "$ac_path_GREP"; } || continue # Check for GNU ac_path_GREP and select it if it is found. # Check for GNU $ac_path_GREP case `"$ac_path_GREP" --version 2>&1` in *GNU*) ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'GREP' >> "conftest.nl" "$ac_path_GREP" -e 'GREP$' -e '-(cannot match)-' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_GREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_GREP="$ac_path_GREP" ac_path_GREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_GREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_GREP"; then as_fn_error $? "no acceptable grep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_GREP=$GREP fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_GREP" >&5 $as_echo "$ac_cv_path_GREP" >&6; } GREP="$ac_cv_path_GREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for egrep" >&5 $as_echo_n "checking for egrep... 
" >&6; } if ${ac_cv_path_EGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo a | $GREP -E '(a|b)' >/dev/null 2>&1 then ac_cv_path_EGREP="$GREP -E" else if test -z "$EGREP"; then ac_path_EGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in egrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_EGREP="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_EGREP" && $as_test_x "$ac_path_EGREP"; } || continue # Check for GNU ac_path_EGREP and select it if it is found. # Check for GNU $ac_path_EGREP case `"$ac_path_EGREP" --version 2>&1` in *GNU*) ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'EGREP' >> "conftest.nl" "$ac_path_EGREP" 'EGREP$' < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_EGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_EGREP="$ac_path_EGREP" ac_path_EGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_EGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_EGREP"; then as_fn_error $? "no acceptable egrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_EGREP=$EGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_EGREP" >&5 $as_echo "$ac_cv_path_EGREP" >&6; } EGREP="$ac_cv_path_EGREP" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for fgrep" >&5 $as_echo_n "checking for fgrep... " >&6; } if ${ac_cv_path_FGREP+:} false; then : $as_echo_n "(cached) " >&6 else if echo 'ab*c' | $GREP -F 'ab*c' >/dev/null 2>&1 then ac_cv_path_FGREP="$GREP -F" else if test -z "$FGREP"; then ac_path_FGREP_found=false # Loop through the user's path and test for each of PROGNAME-LIST as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH$PATH_SEPARATOR/usr/xpg4/bin do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_prog in fgrep; do for ac_exec_ext in '' $ac_executable_extensions; do ac_path_FGREP="$as_dir/$ac_prog$ac_exec_ext" { test -f "$ac_path_FGREP" && $as_test_x "$ac_path_FGREP"; } || continue # Check for GNU ac_path_FGREP and select it if it is found. 
# Check for GNU $ac_path_FGREP case `"$ac_path_FGREP" --version 2>&1` in *GNU*) ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_found=:;; *) ac_count=0 $as_echo_n 0123456789 >"conftest.in" while : do cat "conftest.in" "conftest.in" >"conftest.tmp" mv "conftest.tmp" "conftest.in" cp "conftest.in" "conftest.nl" $as_echo 'FGREP' >> "conftest.nl" "$ac_path_FGREP" FGREP < "conftest.nl" >"conftest.out" 2>/dev/null || break diff "conftest.out" "conftest.nl" >/dev/null 2>&1 || break as_fn_arith $ac_count + 1 && ac_count=$as_val if test $ac_count -gt ${ac_path_FGREP_max-0}; then # Best one so far, save it but keep looking for a better one ac_cv_path_FGREP="$ac_path_FGREP" ac_path_FGREP_max=$ac_count fi # 10*(2^10) chars as input seems more than enough test $ac_count -gt 10 && break done rm -f conftest.in conftest.tmp conftest.nl conftest.out;; esac $ac_path_FGREP_found && break 3 done done done IFS=$as_save_IFS if test -z "$ac_cv_path_FGREP"; then as_fn_error $? "no acceptable fgrep could be found in $PATH$PATH_SEPARATOR/usr/xpg4/bin" "$LINENO" 5 fi else ac_cv_path_FGREP=$FGREP fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_path_FGREP" >&5 $as_echo "$ac_cv_path_FGREP" >&6; } FGREP="$ac_cv_path_FGREP" test -z "$GREP" && GREP=grep # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test "$with_gnu_ld" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 $as_echo_n "checking for GNU ld... " >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 $as_echo_n "checking for non-GNU ld... " >&6; } fi if ${lt_cv_path_LD+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$LD"; then lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD="$ac_dir/$ac_prog" # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 $as_echo "$LD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 $as_echo_n "checking if the linker ($LD) is GNU ld... 
" >&6; } if ${lt_cv_prog_gnu_ld+:} false; then : $as_echo_n "(cached) " >&6 else # I'd rather use --version here, but apparently some GNU lds only accept -v. case `$LD -v 2>&1 &5 $as_echo "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld { $as_echo "$as_me:${as_lineno-$LINENO}: checking for BSD- or MS-compatible name lister (nm)" >&5 $as_echo_n "checking for BSD- or MS-compatible name lister (nm)... " >&6; } if ${lt_cv_path_NM+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM="$NM" else lt_nm_to_check="${ac_tool_prefix}nm" if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. tmp_nm="$ac_dir/$lt_tmp_nm" if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then # Check to see if the nm accepts a BSD-compat flag. # Adding the `sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in */dev/null* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS="$lt_save_ifs" done : ${lt_cv_path_NM=no} fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_NM" >&5 $as_echo "$lt_cv_path_NM" >&6; } if test "$lt_cv_path_NM" != "no"; then NM="$lt_cv_path_NM" else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else if test -n "$ac_tool_prefix"; then for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DUMPBIN"; then ac_cv_prog_DUMPBIN="$DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_DUMPBIN="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DUMPBIN=$ac_cv_prog_DUMPBIN if test -n "$DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DUMPBIN" >&5 $as_echo "$DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$DUMPBIN" && break done fi if test -z "$DUMPBIN"; then ac_ct_DUMPBIN=$DUMPBIN for ac_prog in dumpbin "link -dump" do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... 
" >&6; } if ${ac_cv_prog_ac_ct_DUMPBIN+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DUMPBIN"; then ac_cv_prog_ac_ct_DUMPBIN="$ac_ct_DUMPBIN" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_DUMPBIN="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DUMPBIN=$ac_cv_prog_ac_ct_DUMPBIN if test -n "$ac_ct_DUMPBIN"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DUMPBIN" >&5 $as_echo "$ac_ct_DUMPBIN" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_DUMPBIN" && break done if test "x$ac_ct_DUMPBIN" = x; then DUMPBIN=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DUMPBIN=$ac_ct_DUMPBIN fi fi case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols" ;; *) DUMPBIN=: ;; esac fi if test "$DUMPBIN" != ":"; then NM="$DUMPBIN" fi fi test -z "$NM" && NM=nm { $as_echo "$as_me:${as_lineno-$LINENO}: checking the name lister ($NM) interface" >&5 $as_echo_n "checking the name lister ($NM) interface... " >&6; } if ${lt_cv_nm_interface+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&5) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&5) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&5 (eval echo "\"\$as_me:$LINENO: output\"" >&5) cat conftest.out >&5 if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_nm_interface" >&5 $as_echo "$lt_cv_nm_interface" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether ln -s works" >&5 $as_echo_n "checking whether ln -s works... " >&6; } LN_S=$as_ln_s if test "$LN_S" = "ln -s"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no, using $LN_S" >&5 $as_echo "no, using $LN_S" >&6; } fi # find the maximum length of command line arguments { $as_echo "$as_me:${as_lineno-$LINENO}: checking the maximum length of command line arguments" >&5 $as_echo_n "checking the maximum length of command line arguments... " >&6; } if ${lt_cv_sys_max_cmd_len+:} false; then : $as_echo_n "(cached) " >&6 else i=0 teststring="ABCD" case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. 
# Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[ ]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8 ; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test $i != 17 # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. 
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac fi if test -n $lt_cv_sys_max_cmd_len ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sys_max_cmd_len" >&5 $as_echo "$lt_cv_sys_max_cmd_len" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: none" >&5 $as_echo "none" >&6; } fi max_cmd_len=$lt_cv_sys_max_cmd_len : ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands some XSI constructs" >&5 $as_echo_n "checking whether the shell understands some XSI constructs... " >&6; } # Try some XSI features xsi_shell=no ( _lt_dummy="a/b/c" test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ = c,a/b,b/c, \ && eval 'test $(( 1 + 1 )) -eq 2 \ && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ && xsi_shell=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $xsi_shell" >&5 $as_echo "$xsi_shell" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the shell understands \"+=\"" >&5 $as_echo_n "checking whether the shell understands \"+=\"... " >&6; } lt_shell_append=no ( foo=bar; set foo baz; eval "$1+=\$2" && test "$foo" = barbaz ) \ >/dev/null 2>&1 \ && lt_shell_append=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_shell_append" >&5 $as_echo "$lt_shell_append" >&6; } if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5 $as_echo_n "checking how to convert $build file names to $host format... " >&6; } if ${lt_cv_to_host_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac fi to_host_file_cmd=$lt_cv_to_host_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5 $as_echo "$lt_cv_to_host_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5 $as_echo_n "checking how to convert $build file names to toolchain format... " >&6; } if ${lt_cv_to_tool_file_cmd+:} false; then : $as_echo_n "(cached) " >&6 else #assume ordinary cross tools, or native build. 
lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac fi to_tool_file_cmd=$lt_cv_to_tool_file_cmd { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5 $as_echo "$lt_cv_to_tool_file_cmd" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5 $as_echo_n "checking for $LD option to reload object files... " >&6; } if ${lt_cv_ld_reload_flag+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_reload_flag='-r' fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_reload_flag" >&5 $as_echo "$lt_cv_ld_reload_flag" >&6; } reload_flag=$lt_cv_ld_reload_flag case $reload_flag in "" | " "*) ;; *) reload_flag=" $reload_flag" ;; esac reload_cmds='$LD$reload_flag -o $output$reload_objs' case $host_os in cygwin* | mingw* | pw32* | cegcc*) if test "$GCC" != yes; then reload_cmds=false fi ;; darwin*) if test "$GCC" = yes; then reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs' else reload_cmds='$LD$reload_flag -o $output$reload_objs' fi ;; esac if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}objdump", so it can be a program name with args. set dummy ${ac_tool_prefix}objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OBJDUMP"; then ac_cv_prog_OBJDUMP="$OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_OBJDUMP="${ac_tool_prefix}objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OBJDUMP=$ac_cv_prog_OBJDUMP if test -n "$OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OBJDUMP" >&5 $as_echo "$OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OBJDUMP"; then ac_ct_OBJDUMP=$OBJDUMP # Extract the first word of "objdump", so it can be a program name with args. set dummy objdump; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OBJDUMP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OBJDUMP"; then ac_cv_prog_ac_ct_OBJDUMP="$ac_ct_OBJDUMP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_OBJDUMP="objdump" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OBJDUMP=$ac_cv_prog_ac_ct_OBJDUMP if test -n "$ac_ct_OBJDUMP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OBJDUMP" >&5 $as_echo "$ac_ct_OBJDUMP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OBJDUMP" = x; then OBJDUMP="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OBJDUMP=$ac_ct_OBJDUMP fi else OBJDUMP="$ac_cv_prog_OBJDUMP" fi test -z "$OBJDUMP" && OBJDUMP=objdump { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to recognize dependent libraries" >&5 $as_echo_n "checking how to recognize dependent libraries... " >&6; } if ${lt_cv_deplibs_check_method+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. # `unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. # 'file_magic [[regex]]' -- check by looking for files in library path # which responds to the $file_magic_cmd with a given extended regex. # If you have `file' or equivalent on your system and you're not sure # whether `pass_all' will *always* work, you probably want this one. case $host_os in aix[4-9]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[45]*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='/usr/bin/file -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. # func_win32_libid assumes BSD nm, so disallow it if using MS dumpbin. if ( test "$lt_cv_nm_interface" = "BSD nm" && file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. 
# Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[3-9]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; gnu*) lt_cv_deplibs_check_method=pass_all ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF-[0-9][0-9]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]' lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|PA-RISC[0-9]\.[0-9]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[3-9]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. linux* | k*bsd*-gnu | kopensolaris*-gnu) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [ML]SB (shared object|dynamic lib) M[0-9][0-9]* Version [0-9]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [0-9][0-9]*-bit [LM]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [0-9][0-9]*-bit [LM]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5 $as_echo "$lt_cv_deplibs_check_method" >&6; } file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes 
else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args. set dummy ${ac_tool_prefix}dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DLLTOOL"; then ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DLLTOOL=$ac_cv_prog_DLLTOOL if test -n "$DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5 $as_echo "$DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DLLTOOL"; then ac_ct_DLLTOOL=$DLLTOOL # Extract the first word of "dlltool", so it can be a program name with args. set dummy dlltool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DLLTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DLLTOOL"; then ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_DLLTOOL="dlltool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL if test -n "$ac_ct_DLLTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5 $as_echo "$ac_ct_DLLTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DLLTOOL" = x; then DLLTOOL="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DLLTOOL=$ac_ct_DLLTOOL fi else DLLTOOL="$ac_cv_prog_DLLTOOL" fi test -z "$DLLTOOL" && DLLTOOL=dlltool { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5 $as_echo_n "checking how to associate runtime and link libraries... 
" >&6; } if ${lt_cv_sharedlib_from_linklib_cmd+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh # decide which to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd="$ECHO" ;; esac fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5 $as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; } sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO if test -n "$ac_tool_prefix"; then for ac_prog in ar do # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args. set dummy $ac_tool_prefix$ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$AR"; then ac_cv_prog_AR="$AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_AR="$ac_tool_prefix$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi AR=$ac_cv_prog_AR if test -n "$AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $AR" >&5 $as_echo "$AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$AR" && break done fi if test -z "$AR"; then ac_ct_AR=$AR for ac_prog in ar do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_AR+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_AR"; then ac_cv_prog_ac_ct_AR="$ac_ct_AR" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_AR="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_AR=$ac_cv_prog_ac_ct_AR if test -n "$ac_ct_AR"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_AR" >&5 $as_echo "$ac_ct_AR" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$ac_ct_AR" && break done if test "x$ac_ct_AR" = x; then AR="false" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac AR=$ac_ct_AR fi fi : ${AR=ar} : ${AR_FLAGS=cru} { $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5 $as_echo_n "checking for archiver @FILE support... " >&6; } if ${lt_cv_ar_at_file+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ar_at_file=no cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_compile "$LINENO"; then : echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5' { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -eq 0; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5 (eval $lt_ar_try) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } if test "$ac_status" -ne 0; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5 $as_echo "$lt_cv_ar_at_file" >&6; } if test "x$lt_cv_ar_at_file" = xno; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args. set dummy ${ac_tool_prefix}strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$STRIP"; then ac_cv_prog_STRIP="$STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_STRIP="${ac_tool_prefix}strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi STRIP=$ac_cv_prog_STRIP if test -n "$STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $STRIP" >&5 $as_echo "$STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_STRIP"; then ac_ct_STRIP=$STRIP # Extract the first word of "strip", so it can be a program name with args. 
set dummy strip; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_STRIP+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_STRIP"; then ac_cv_prog_ac_ct_STRIP="$ac_ct_STRIP" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_STRIP="strip" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_STRIP=$ac_cv_prog_ac_ct_STRIP if test -n "$ac_ct_STRIP"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_STRIP" >&5 $as_echo "$ac_ct_STRIP" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_STRIP" = x; then STRIP=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac STRIP=$ac_ct_STRIP fi else STRIP="$ac_cv_prog_STRIP" fi test -z "$STRIP" && STRIP=: if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}ranlib", so it can be a program name with args. set dummy ${ac_tool_prefix}ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$RANLIB"; then ac_cv_prog_RANLIB="$RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_RANLIB="${ac_tool_prefix}ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi RANLIB=$ac_cv_prog_RANLIB if test -n "$RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $RANLIB" >&5 $as_echo "$RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_RANLIB"; then ac_ct_RANLIB=$RANLIB # Extract the first word of "ranlib", so it can be a program name with args. set dummy ranlib; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_RANLIB+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_RANLIB"; then ac_cv_prog_ac_ct_RANLIB="$ac_ct_RANLIB" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_RANLIB="ranlib" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_RANLIB=$ac_cv_prog_ac_ct_RANLIB if test -n "$ac_ct_RANLIB"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_RANLIB" >&5 $as_echo "$ac_ct_RANLIB" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_RANLIB" = x; then RANLIB=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac RANLIB=$ac_ct_RANLIB fi else RANLIB="$ac_cv_prog_RANLIB" fi test -z "$RANLIB" && RANLIB=: # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Check for command to grab the raw symbol name followed by C symbol from nm. { $as_echo "$as_me:${as_lineno-$LINENO}: checking command to parse $NM output from $compiler object" >&5 $as_echo_n "checking command to parse $NM output from $compiler object... " >&6; } if ${lt_cv_sys_global_symbol_pipe+:} false; then : $as_echo_n "(cached) " >&6 else # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[BCDEGRST]' # Regexp to match symbols that can be accessed directly from C. sympat='\([_A-Za-z][_A-Za-z0-9]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[BCDT]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[ABCDGISTW]' ;; hpux*) if test "$host_cpu" = ia64; then symcode='[ABCDEGRST]' fi ;; irix* | nonstopux*) symcode='[BCDEGRST]' ;; osf*) symcode='[BCDEGQRST]' ;; solaris*) symcode='[BDRT]' ;; sco3.2v5*) symcode='[DT]' ;; sysv4.2uw2*) symcode='[DT]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[ABDT]' ;; sysv4) symcode='[DFNSTU]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[ABCDGIRSTW]' ;; esac # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. 
lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"\2\", (void *) \&\2},/p'" lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/ {\"lib\2\", (void *) \&\2},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function # and D for any global variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK '"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? \"T \" : \"D \"};"\ " {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ " s[1]~/^[@?]/{print s[1], s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[ ]\($symcode$symcode*\)[ ][ ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Now try to grab the symbols. nlist=conftest.nm if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist\""; } >&5 (eval $NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 con't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. 
*/ # define LT_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT_DLSYM_CONST #else # define LT_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS="conftstm.$ac_objext" CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag" if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext}; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&5 fi else echo "cannot find nm_test_var in $nlist" >&5 fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&5 fi else echo "$progname: failed program was:" >&5 cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test "$pipe_works" = yes; then break else lt_cv_sys_global_symbol_pipe= fi done fi if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: failed" >&5 $as_echo "failed" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: ok" >&5 $as_echo "ok" >&6; } fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then nm_file_list_spec='@' fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5 $as_echo_n "checking for sysroot... " >&6; } # Check whether --with-sysroot was given. if test "${with_sysroot+set}" = set; then : withval=$with_sysroot; else with_sysroot=no fi lt_sysroot= case ${with_sysroot} in #( yes) if test "$GCC" = yes; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_sysroot}" >&5 $as_echo "${with_sysroot}" >&6; } as_fn_error $? "The sysroot must be an absolute path." "$LINENO" 5 ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5 $as_echo "${lt_sysroot:-no}" >&6; } # Check whether --enable-libtool-lock was given. if test "${enable_libtool_lock+set}" = set; then : enableval=$enable_libtool_lock; fi test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out which ABI we are using. 
echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE="32" ;; *ELF-64*) HPUX_IA64_MODE="64" ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out which ABI we are using. echo '#line '$LINENO' "configure"' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then if test "$lt_cv_prog_gnu_ld" = yes; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_i386" ;; ppc64-*linux*|powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; ppc*-*linux*|powerpc*-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS -belf" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the C compiler needs -belf" >&5 $as_echo_n "checking whether the C compiler needs -belf... " >&6; } if ${lt_cv_cc_needs_belf+:} false; then : $as_echo_n "(cached) " >&6 else ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_cc_needs_belf=yes else lt_cv_cc_needs_belf=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_cc_needs_belf" >&5 $as_echo "$lt_cv_cc_needs_belf" >&6; } if test x"$lt_cv_cc_needs_belf" != x"yes"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS="$SAVE_CFLAGS" fi ;; *-*solaris*) # Find out which ABI we are using. 
echo 'int i;' > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD="${LD-ld}_sol2" fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks="$enable_libtool_lock" if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args. set dummy ${ac_tool_prefix}mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$MANIFEST_TOOL"; then ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL if test -n "$MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5 $as_echo "$MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_MANIFEST_TOOL"; then ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL # Extract the first word of "mt", so it can be a program name with args. set dummy mt; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_MANIFEST_TOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_MANIFEST_TOOL"; then ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_MANIFEST_TOOL="mt" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL if test -n "$ac_ct_MANIFEST_TOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5 $as_echo "$ac_ct_MANIFEST_TOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_MANIFEST_TOOL" = x; then MANIFEST_TOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL fi else MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL" fi test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5 $as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; } if ${lt_cv_path_mainfest_tool+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5 $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&5 if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5 $as_echo "$lt_cv_path_mainfest_tool" >&6; } if test "x$lt_cv_path_mainfest_tool" != xyes; then MANIFEST_TOOL=: fi case $host_os in rhapsody* | darwin*) if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}dsymutil", so it can be a program name with args. set dummy ${ac_tool_prefix}dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$DSYMUTIL"; then ac_cv_prog_DSYMUTIL="$DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_DSYMUTIL="${ac_tool_prefix}dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi DSYMUTIL=$ac_cv_prog_DSYMUTIL if test -n "$DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DSYMUTIL" >&5 $as_echo "$DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_DSYMUTIL"; then ac_ct_DSYMUTIL=$DSYMUTIL # Extract the first word of "dsymutil", so it can be a program name with args. set dummy dsymutil; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_DSYMUTIL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_DSYMUTIL"; then ac_cv_prog_ac_ct_DSYMUTIL="$ac_ct_DSYMUTIL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_DSYMUTIL="dsymutil" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_DSYMUTIL=$ac_cv_prog_ac_ct_DSYMUTIL if test -n "$ac_ct_DSYMUTIL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DSYMUTIL" >&5 $as_echo "$ac_ct_DSYMUTIL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_DSYMUTIL" = x; then DSYMUTIL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac DSYMUTIL=$ac_ct_DSYMUTIL fi else DSYMUTIL="$ac_cv_prog_DSYMUTIL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}nmedit", so it can be a program name with args. set dummy ${ac_tool_prefix}nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$NMEDIT"; then ac_cv_prog_NMEDIT="$NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_NMEDIT="${ac_tool_prefix}nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi NMEDIT=$ac_cv_prog_NMEDIT if test -n "$NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $NMEDIT" >&5 $as_echo "$NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_NMEDIT"; then ac_ct_NMEDIT=$NMEDIT # Extract the first word of "nmedit", so it can be a program name with args. set dummy nmedit; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_NMEDIT+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_NMEDIT"; then ac_cv_prog_ac_ct_NMEDIT="$ac_ct_NMEDIT" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_NMEDIT="nmedit" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_NMEDIT=$ac_cv_prog_ac_ct_NMEDIT if test -n "$ac_ct_NMEDIT"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_NMEDIT" >&5 $as_echo "$ac_ct_NMEDIT" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_NMEDIT" = x; then NMEDIT=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac NMEDIT=$ac_ct_NMEDIT fi else NMEDIT="$ac_cv_prog_NMEDIT" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}lipo", so it can be a program name with args. set dummy ${ac_tool_prefix}lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$LIPO"; then ac_cv_prog_LIPO="$LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_LIPO="${ac_tool_prefix}lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi LIPO=$ac_cv_prog_LIPO if test -n "$LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $LIPO" >&5 $as_echo "$LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_LIPO"; then ac_ct_LIPO=$LIPO # Extract the first word of "lipo", so it can be a program name with args. set dummy lipo; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_LIPO+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_LIPO"; then ac_cv_prog_ac_ct_LIPO="$ac_ct_LIPO" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_LIPO="lipo" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_LIPO=$ac_cv_prog_ac_ct_LIPO if test -n "$ac_ct_LIPO"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_LIPO" >&5 $as_echo "$ac_ct_LIPO" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_LIPO" = x; then LIPO=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac LIPO=$ac_ct_LIPO fi else LIPO="$ac_cv_prog_LIPO" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool", so it can be a program name with args. set dummy ${ac_tool_prefix}otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL"; then ac_cv_prog_OTOOL="$OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_OTOOL="${ac_tool_prefix}otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL=$ac_cv_prog_OTOOL if test -n "$OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL" >&5 $as_echo "$OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL"; then ac_ct_OTOOL=$OTOOL # Extract the first word of "otool", so it can be a program name with args. set dummy otool; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL"; then ac_cv_prog_ac_ct_OTOOL="$ac_ct_OTOOL" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_OTOOL="otool" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL=$ac_cv_prog_ac_ct_OTOOL if test -n "$ac_ct_OTOOL"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL" >&5 $as_echo "$ac_ct_OTOOL" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL" = x; then OTOOL=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL=$ac_ct_OTOOL fi else OTOOL="$ac_cv_prog_OTOOL" fi if test -n "$ac_tool_prefix"; then # Extract the first word of "${ac_tool_prefix}otool64", so it can be a program name with args. set dummy ${ac_tool_prefix}otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$OTOOL64"; then ac_cv_prog_OTOOL64="$OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_OTOOL64="${ac_tool_prefix}otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi OTOOL64=$ac_cv_prog_OTOOL64 if test -n "$OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $OTOOL64" >&5 $as_echo "$OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test -z "$ac_cv_prog_OTOOL64"; then ac_ct_OTOOL64=$OTOOL64 # Extract the first word of "otool64", so it can be a program name with args. set dummy otool64; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_ac_ct_OTOOL64+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$ac_ct_OTOOL64"; then ac_cv_prog_ac_ct_OTOOL64="$ac_ct_OTOOL64" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. 
for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_ac_ct_OTOOL64="otool64" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi ac_ct_OTOOL64=$ac_cv_prog_ac_ct_OTOOL64 if test -n "$ac_ct_OTOOL64"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_OTOOL64" >&5 $as_echo "$ac_ct_OTOOL64" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "x$ac_ct_OTOOL64" = x; then OTOOL64=":" else case $cross_compiling:$ac_tool_warned in yes:) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5 $as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;} ac_tool_warned=yes ;; esac OTOOL64=$ac_ct_OTOOL64 fi else OTOOL64="$ac_cv_prog_OTOOL64" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -single_module linker flag" >&5 $as_echo_n "checking for -single_module linker flag... " >&6; } if ${lt_cv_apple_cc_single_mod+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_apple_cc_single_mod=no if test -z "${LT_MULTI_MODULE}"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&5 $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&5 # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test $_lt_result -eq 0; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&5 fi rm -rf libconftest.dylib* rm -f conftest.* fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_apple_cc_single_mod" >&5 $as_echo "$lt_cv_apple_cc_single_mod" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -exported_symbols_list linker flag" >&5 $as_echo_n "checking for -exported_symbols_list linker flag... " >&6; } if ${lt_cv_ld_exported_symbols_list+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_ld_exported_symbols_list=yes else lt_cv_ld_exported_symbols_list=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_exported_symbols_list" >&5 $as_echo "$lt_cv_ld_exported_symbols_list" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking for -force_load linker flag" >&5 $as_echo_n "checking for -force_load linker flag... 
" >&6; } if ${lt_cv_ld_force_load+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&5 $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5 echo "$AR cru libconftest.a conftest.o" >&5 $AR cru libconftest.a conftest.o 2>&5 echo "$RANLIB libconftest.a" >&5 $RANLIB libconftest.a 2>&5 cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&5 $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&5 elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then lt_cv_ld_force_load=yes else cat conftest.err >&5 fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ld_force_load" >&5 $as_echo "$lt_cv_ld_force_load" >&6; } case $host_os in rhapsody* | darwin1.[012]) _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[91]*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; 10.[012]*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; 10.*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test "$lt_cv_apple_cc_single_mod" = "yes"; then _lt_dar_single_mod='$single_module' fi if test "$lt_cv_ld_exported_symbols_list" = "yes"; then _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}' fi if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C preprocessor" >&5 $as_echo_n "checking how to run the C preprocessor... " >&6; } # On Suns, sometimes $CPP names a directory. if test -n "$CPP" && test -d "$CPP"; then CPP= fi if test -z "$CPP"; then if ${ac_cv_prog_CPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CPP needs to be expanded for CPP in "$CC -E" "$CC -E -traditional-cpp" "/lib/cpp" do ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. 
continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CPP=$CPP fi CPP=$ac_cv_prog_CPP else ac_cv_prog_CPP=$CPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CPP" >&5 $as_echo "$CPP" >&6; } ac_preproc_ok=false for ac_c_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_c_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C preprocessor \"$CPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ANSI C header files" >&5 $as_echo_n "checking for ANSI C header files... " >&6; } if ${ac_cv_header_stdc+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #include #include int main () { ; return 0; } _ACEOF if ac_fn_c_try_compile "$LINENO"; then : ac_cv_header_stdc=yes else ac_cv_header_stdc=no fi rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext if test $ac_cv_header_stdc = yes; then # SunOS 4.x string.h does not declare mem*, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "memchr" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # ISC 2.0.2 stdlib.h does not declare free, contrary to ANSI. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include _ACEOF if (eval "$ac_cpp conftest.$ac_ext") 2>&5 | $EGREP "free" >/dev/null 2>&1; then : else ac_cv_header_stdc=no fi rm -f conftest* fi if test $ac_cv_header_stdc = yes; then # /bin/cc in Irix-4.0.5 gets non-ANSI ctype macros unless using -ansi. if test "$cross_compiling" = yes; then : : else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include #include #if ((' ' & 0x0FF) == 0x020) # define ISLOWER(c) ('a' <= (c) && (c) <= 'z') # define TOUPPER(c) (ISLOWER(c) ? 'A' + ((c) - 'a') : (c)) #else # define ISLOWER(c) \ (('a' <= (c) && (c) <= 'i') \ || ('j' <= (c) && (c) <= 'r') \ || ('s' <= (c) && (c) <= 'z')) # define TOUPPER(c) (ISLOWER(c) ? ((c) | 0x40) : (c)) #endif #define XOR(e, f) (((e) && !(f)) || (!(e) && (f))) int main () { int i; for (i = 0; i < 256; i++) if (XOR (islower (i), ISLOWER (i)) || toupper (i) != TOUPPER (i)) return 2; return 0; } _ACEOF if ac_fn_c_try_run "$LINENO"; then : else ac_cv_header_stdc=no fi rm -f core *.core core.conftest.* gmon.out bb.out conftest$ac_exeext \ conftest.$ac_objext conftest.beam conftest.$ac_ext fi fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_header_stdc" >&5 $as_echo "$ac_cv_header_stdc" >&6; } if test $ac_cv_header_stdc = yes; then $as_echo "#define STDC_HEADERS 1" >>confdefs.h fi # On IRIX 5.3, sys/types and inttypes.h are conflicting. for ac_header in sys/types.h sys/stat.h stdlib.h string.h memory.h strings.h \ inttypes.h stdint.h unistd.h do : as_ac_Header=`$as_echo "ac_cv_header_$ac_header" | $as_tr_sh` ac_fn_c_check_header_compile "$LINENO" "$ac_header" "$as_ac_Header" "$ac_includes_default " if eval test \"x\$"$as_ac_Header"\" = x"yes"; then : cat >>confdefs.h <<_ACEOF #define `$as_echo "HAVE_$ac_header" | $as_tr_cpp` 1 _ACEOF fi done for ac_header in dlfcn.h do : ac_fn_c_check_header_compile "$LINENO" "dlfcn.h" "ac_cv_header_dlfcn_h" "$ac_includes_default " if test "x$ac_cv_header_dlfcn_h" = xyes; then : cat >>confdefs.h <<_ACEOF #define HAVE_DLFCN_H 1 _ACEOF fi done func_stripname_cnf () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname_cnf # Set options enable_dlopen=no enable_win32_dll=no # Check whether --enable-shared was given. if test "${enable_shared+set}" = set; then : enableval=$enable_shared; p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS="$lt_save_ifs" ;; esac else enable_shared=yes fi # Check whether --enable-static was given. if test "${enable_static+set}" = set; then : enableval=$enable_static; p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS="$lt_save_ifs" ;; esac else enable_static=yes fi # Check whether --with-pic was given. if test "${with_pic+set}" = set; then : withval=$with_pic; lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for lt_pkg in $withval; do IFS="$lt_save_ifs" if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS="$lt_save_ifs" ;; esac else pic_mode=default fi test -z "$pic_mode" && pic_mode=default # Check whether --enable-fast-install was given. if test "${enable_fast_install+set}" = set; then : enableval=$enable_fast_install; p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS="$lt_save_ifs" ;; esac else enable_fast_install=yes fi # This can be used to rebuild libtool when needed LIBTOOL_DEPS="$ltmain" # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' test -z "$LN_S" && LN_S="ln -s" if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking for objdir" >&5 $as_echo_n "checking for objdir... " >&6; } if ${lt_cv_objdir+:} false; then : $as_echo_n "(cached) " >&6 else rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_objdir" >&5 $as_echo "$lt_cv_objdir" >&6; } objdir=$lt_cv_objdir cat >>confdefs.h <<_ACEOF #define LT_OBJDIR "$lt_cv_objdir/" _ACEOF case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a `.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a with_gnu_ld="$lt_cv_prog_gnu_ld" old_CC="$CC" old_CFLAGS="$CFLAGS" # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o for cc_temp in $compiler""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ${ac_tool_prefix}file" >&5 $as_echo_n "checking for ${ac_tool_prefix}file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. 
if test -f $ac_dir/${ac_tool_prefix}file; then lt_cv_path_MAGIC_CMD="$ac_dir/${ac_tool_prefix}file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for file" >&5 $as_echo_n "checking for file... " >&6; } if ${lt_cv_path_MAGIC_CMD+:} false; then : $as_echo_n "(cached) " >&6 else case $MAGIC_CMD in [\\/*] | ?:[\\/]*) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. ;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR ac_dummy="/usr/bin$PATH_SEPARATOR$PATH" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/file; then lt_cv_path_MAGIC_CMD="$ac_dir/file" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac fi MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MAGIC_CMD" >&5 $as_echo "$MAGIC_CMD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi else MAGIC_CMD=: fi fi fi ;; esac # Use C for the default configuration in the libtool script lt_save_CC="$CC" ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. 
objext=o objext=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then lt_prog_compiler_no_builtin_flag= if test "$GCC" = yes; then case $cc_basename in nvcc*) lt_prog_compiler_no_builtin_flag=' -Xcompiler -fno-builtin' ;; *) lt_prog_compiler_no_builtin_flag=' -fno-builtin' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -fno-rtti -fno-exceptions" >&5 $as_echo_n "checking if $compiler supports -fno-rtti -fno-exceptions... " >&6; } if ${lt_cv_prog_compiler_rtti_exceptions+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_rtti_exceptions=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-fno-rtti -fno-exceptions" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_rtti_exceptions=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_rtti_exceptions" >&5 $as_echo "$lt_cv_prog_compiler_rtti_exceptions" >&6; } if test x"$lt_cv_prog_compiler_rtti_exceptions" = xyes; then lt_prog_compiler_no_builtin_flag="$lt_prog_compiler_no_builtin_flag -fno-rtti -fno-exceptions" else : fi fi lt_prog_compiler_wl= lt_prog_compiler_pic= lt_prog_compiler_static= if test "$GCC" = yes; then lt_prog_compiler_wl='-Wl,' lt_prog_compiler_static='-static' case $host_os in aix*) # All AIX code is PIC. 
if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. lt_prog_compiler_pic='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic='-DDLL_EXPORT' ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) lt_prog_compiler_pic='-fPIC' ;; esac ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. lt_prog_compiler_can_build_shared=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic=-Kconform_pic fi ;; *) lt_prog_compiler_pic='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 lt_prog_compiler_wl='-Xlinker ' if test -n "$lt_prog_compiler_pic"; then lt_prog_compiler_pic="-Xcompiler $lt_prog_compiler_pic" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. case $host_os in aix*) lt_prog_compiler_wl='-Wl,' if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static='-Bstatic' else lt_prog_compiler_static='-bnso -bI:/lib/syscalls.exp' fi ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic='-DDLL_EXPORT' ;; hpux9* | hpux10* | hpux11*) lt_prog_compiler_wl='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? lt_prog_compiler_static='${wl}-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) lt_prog_compiler_wl='-Wl,' # PIC (with -KPIC) is the default. lt_prog_compiler_static='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in # old Intel for x86_64 which still supported -KPIC. ecc*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. 
icc* | ifort*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; # Lahey Fortran 8.1. lf95*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='--shared' lt_prog_compiler_static='--static' ;; nagfor*) # NAG Fortran compiler lt_prog_compiler_wl='-Wl,-Wl,,' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; ccc*) lt_prog_compiler_wl='-Wl,' # All Alpha code is PIC. lt_prog_compiler_static='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-qpic' lt_prog_compiler_static='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [1-7].* | *Sun*Fortran*\ 8.[0-3]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='' ;; *Sun\ F* | *Sun*Fortran*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' lt_prog_compiler_wl='-Wl,' ;; *Intel*\ [CF]*Compiler*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fPIC' lt_prog_compiler_static='-static' ;; *Portland\ Group*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-fpic' lt_prog_compiler_static='-Bstatic' ;; esac ;; esac ;; newsos6) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic='-fPIC -shared' ;; osf3* | osf4* | osf5*) lt_prog_compiler_wl='-Wl,' # All OSF/1 code is PIC. lt_prog_compiler_static='-non_shared' ;; rdos*) lt_prog_compiler_static='-non_shared' ;; solaris*) lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) lt_prog_compiler_wl='-Qoption ld ';; *) lt_prog_compiler_wl='-Wl,';; esac ;; sunos4*) lt_prog_compiler_wl='-Qoption ld ' lt_prog_compiler_pic='-PIC' lt_prog_compiler_static='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec ;then lt_prog_compiler_pic='-Kconform_pic' lt_prog_compiler_static='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_pic='-KPIC' lt_prog_compiler_static='-Bstatic' ;; unicos*) lt_prog_compiler_wl='-Wl,' lt_prog_compiler_can_build_shared=no ;; uts4*) lt_prog_compiler_pic='-pic' lt_prog_compiler_static='-Bstatic' ;; *) lt_prog_compiler_can_build_shared=no ;; esac fi case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic= ;; *) lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... 
" >&6; } if ${lt_cv_prog_compiler_pic+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic=$lt_prog_compiler_pic fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5 $as_echo "$lt_cv_prog_compiler_pic" >&6; } lt_prog_compiler_pic=$lt_cv_prog_compiler_pic # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic works... " >&6; } if ${lt_cv_prog_compiler_pic_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic -DPIC" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works" >&5 $as_echo "$lt_cv_prog_compiler_pic_works" >&6; } if test x"$lt_cv_prog_compiler_pic_works" = xyes; then case $lt_prog_compiler_pic in "" | " "*) ;; *) lt_prog_compiler_pic=" $lt_prog_compiler_pic" ;; esac else lt_prog_compiler_pic= lt_prog_compiler_can_build_shared=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl eval lt_tmp_static_flag=\"$lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. 
cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works=yes fi else lt_cv_prog_compiler_static_works=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works" >&5 $as_echo "$lt_cv_prog_compiler_static_works" >&6; } if test x"$lt_cv_prog_compiler_static_works" = xyes; then : else lt_prog_compiler_static= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? 
= $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o" >&5 $as_echo "$lt_cv_prog_compiler_c_o" >&6; } hard_links="nottested" if test "$lt_cv_prog_compiler_c_o" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... " >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test "$hard_links" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } runpath_var= allow_undefined_flag= always_export_symbols=no archive_cmds= archive_expsym_cmds= compiler_needs_object=no enable_shared_with_static_runtimes=no export_dynamic_flag_spec= export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' hardcode_automatic=no hardcode_direct=no hardcode_direct_absolute=no hardcode_libdir_flag_spec= hardcode_libdir_separator= hardcode_minus_L=no hardcode_shlibpath_var=unsupported inherit_rpath=no link_all_deplibs=unknown module_cmds= module_expsym_cmds= old_archive_from_new_cmds= old_archive_from_expsyms_cmds= thread_safe_flag_spec= whole_archive_flag_spec= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list include_expsyms= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ` (' and `)$', so one must not match beginning or # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', # as well as any symbol that contains `d'. exclude_expsyms='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. 
extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. if test "$GCC" != yes; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd*) with_gnu_ld=no ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs=no ;; esac ld_shlibs=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test "$with_gnu_ld" = yes; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[2-9]*) ;; *\ \(GNU\ Binutils\)\ [3-9]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test "$lt_use_gnu_ld_interface" = yes; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='${wl}' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec='${wl}--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else whole_archive_flag_spec= fi supports_anon_versioning=no case `$LD -v 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [01].* | *\ 2.[0-9].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[3-9]*) # On AIX/PPC, the GNU linker is very broken if test "$host_cpu" != ia64; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. 
_LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else ld_shlibs=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, ) is actually meaningless, # as there is no search path for DLLs. hardcode_libdir_flag_spec='-L$libdir' export_dynamic_flag_spec='${wl}--export-all-symbols' allow_undefined_flag=unsupported always_export_symbols=no enable_shared_with_static_runtimes=yes export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs=no fi ;; haiku*) archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' link_all_deplibs=yes ;; interix[3-9]*) hardcode_direct=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='${wl}-rpath,$libdir' export_dynamic_flag_spec='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test "$host_os" = linux-dietlibc; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test "$tmp_diet" = no then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 whole_archive_flag_spec= tmp_sharedflag='--shared' ;; xl[cC]* | bgxl[cC]* | mpixl[cC]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 whole_archive_flag_spec='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes ;; esac case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C 5.9 whole_archive_flag_spec='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac archive_cmds='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi case $cc_basename in xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~ cat 
$export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else ld_shlibs=no fi ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [01].* | *\ 2.[0-9].* | *\ 2.1[0-5].*) ld_shlibs=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac ;; sunos4*) archive_cmds='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= hardcode_direct=yes hardcode_shlibpath_var=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else ld_shlibs=no fi ;; esac if test "$ld_shlibs" = no; then runpath_var= hardcode_libdir_flag_spec= export_dynamic_flag_spec= whole_archive_flag_spec= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) allow_undefined_flag=unsupported always_export_symbols=yes archive_expsym_cmds='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. hardcode_minus_L=yes if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. hardcode_direct=unsupported fi ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global # defined symbols, whereas GNU nm marks them as "W". if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else export_symbols_cmds='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then aix_use_runtimelinking=yes break fi done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
archive_cmds='' hardcode_direct=yes hardcode_direct_absolute=yes hardcode_libdir_separator=':' link_all_deplibs=yes file_list_spec='${wl}-f,' if test "$GCC" = yes; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L=yes hardcode_libdir_flag_spec='-L$libdir' hardcode_libdir_separator= fi ;; esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi link_all_deplibs=no else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi export_dynamic_flag_spec='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. always_export_symbols=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag='-berok' # Determine the default libpath from the value encoded in an # empty executable. if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then hardcode_libdir_flag_spec='${wl}-R $libdir:/usr/lib:/lib' allow_undefined_flag="-z nodefs" archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. 
if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath_+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath_"; then lt_cv_aix_libpath_="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath_ fi hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag=' ${wl}-bernotok' allow_undefined_flag=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. whole_archive_flag_spec='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec='$convenience' fi archive_cmds_need_lc=yes # This is similar to how AIX traditionally builds its shared libraries. archive_expsym_cmds="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds='' ;; m68k) archive_cmds='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes ;; esac ;; bsdi[45]*) export_dynamic_flag_spec=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl*) # Native MSVC hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported always_export_symbols=yes file_list_spec='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. 
archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, )='true' enable_shared_with_static_runtimes=yes exclude_expsyms='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib old_postinstall_cmds='chmod 644 $oldlib' postlink_cmds='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC wrapper hardcode_libdir_flag_spec=' ' allow_undefined_flag=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. old_archive_from_new_cmds='true' # FIXME: Should let the user specify the lib program. 
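# Descriptive note (an assumption, summarising the branches above): for MSVC
# libtool maps its usual conventions onto the Microsoft toolchain: static
# archives use the `.lib' extension (libext=lib), shared objects are built
# as `.dll' (shrext_cmds=".dll"), and the export list in $export_symbols is
# rewritten into `-EXPORT:' options passed to link.exe instead of a GNU ld
# version script.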
old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs' enable_shared_with_static_runtimes=yes ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc=no hardcode_direct=no hardcode_automatic=yes hardcode_shlibpath_var=unsupported if test "$lt_cv_ld_force_load" = "yes"; then whole_archive_flag_spec='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec='' fi link_all_deplibs=yes allow_undefined_flag="$_lt_dar_allow_undefined" case $cc_basename in ifort*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test "$_lt_dar_can_shared" = "yes"; then output_verbose_link_cmd=func_echo_all archive_cmds="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" module_cmds="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}" archive_expsym_cmds="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" module_expsym_cmds="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" else ld_shlibs=no fi ;; dgux*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. freebsd* | dragonfly*) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; hpux9*) if test "$GCC" = yes; then archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' fi hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
hardcode_minus_L=yes export_dynamic_flag_spec='${wl}-E' ;; hpux10*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. hardcode_minus_L=yes fi ;; hpux11*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then case $host_cpu in hppa*64*) archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) archive_cmds='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $CC understands -b" >&5 $as_echo_n "checking if $CC understands -b... " >&6; } if ${lt_cv_prog_compiler__b+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler__b=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS -b" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler__b=yes fi else lt_cv_prog_compiler__b=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler__b" >&5 $as_echo "$lt_cv_prog_compiler__b" >&6; } if test x"$lt_cv_prog_compiler__b" = xyes; then archive_cmds='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi ;; esac fi if test "$with_gnu_ld" = no; then hardcode_libdir_flag_spec='${wl}+b ${wl}$libdir' hardcode_libdir_separator=: case $host_cpu in hppa*64*|ia64*) hardcode_direct=no hardcode_shlibpath_var=no ;; *) hardcode_direct=yes hardcode_direct_absolute=yes export_dynamic_flag_spec='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
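# Descriptive note (an assumption): the lt_cv_prog_compiler__b probe above
# simply retries the simple link test with `-b' appended to LDFLAGS and
# treats any new linker warning as failure; when it fails, libtool drives
# the HP-UX linker directly ($LD -b ...) instead of going through $CC.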
hardcode_minus_L=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test "$GCC" = yes; then archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5 $as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; } if ${lt_cv_irix_exported_symbol+:} false; then : $as_echo_n "(cached) " >&6 else save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int foo (void) { return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : lt_cv_irix_exported_symbol=yes else lt_cv_irix_exported_symbol=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5 $as_echo "$lt_cv_irix_exported_symbol" >&6; } if test "$lt_cv_irix_exported_symbol" = yes; then archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib' fi else archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: inherit_rpath=yes link_all_deplibs=yes ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else archive_cmds='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi hardcode_libdir_flag_spec='-R$libdir' hardcode_direct=yes hardcode_shlibpath_var=no ;; newsos6) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: hardcode_shlibpath_var=no ;; *nto* | *qnx*) ;; openbsd*) if test -f /usr/libexec/ld.so; then hardcode_direct=yes hardcode_shlibpath_var=no hardcode_direct_absolute=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' hardcode_libdir_flag_spec='${wl}-rpath,$libdir' export_dynamic_flag_spec='${wl}-E' else case $host_os in openbsd[01].* | openbsd2.[0-7] | openbsd2.[0-7].*) archive_cmds='$LD -Bshareable -o $lib $libobjs $deplibs 
$linker_flags' hardcode_libdir_flag_spec='-R$libdir' ;; *) archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' hardcode_libdir_flag_spec='${wl}-rpath,$libdir' ;; esac fi else ld_shlibs=no fi ;; os2*) hardcode_libdir_flag_spec='-L$libdir' hardcode_minus_L=yes allow_undefined_flag=unsupported archive_cmds='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' old_archive_from_new_cmds='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' ;; osf3*) if test "$GCC" = yes; then allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' fi archive_cmds_need_lc='no' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test "$GCC" = yes; then allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*' archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir' else allow_undefined_flag=' -expect_unresolved \*' archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly hardcode_libdir_flag_spec='-rpath $libdir' fi archive_cmds_need_lc='no' hardcode_libdir_separator=: ;; solaris*) no_undefined_flag=' -z defs' if test "$GCC" = yes; then wlarc='${wl}' archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' archive_cmds='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | 
$SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='${wl}' archive_cmds='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi hardcode_libdir_flag_spec='-R$libdir' hardcode_shlibpath_var=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. GCC discards it without `$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test "$GCC" = yes; then whole_archive_flag_spec='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' else whole_archive_flag_spec='-z allextract$convenience -z defaultextract' fi ;; esac link_all_deplibs=yes ;; sunos4*) if test "x$host_vendor" = xsequent; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. archive_cmds='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi hardcode_libdir_flag_spec='-L$libdir' hardcode_direct=yes hardcode_minus_L=yes hardcode_shlibpath_var=no ;; sysv4) case $host_vendor in sni) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. archive_cmds='$LD -G -o $lib $libobjs $deplibs $linker_flags' reload_cmds='$CC -r -o $output$reload_objs' hardcode_direct=no ;; motorola) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_direct=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' hardcode_shlibpath_var=no ;; sysv4.3*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no export_dynamic_flag_spec='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_shlibpath_var=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes ld_shlibs=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag='${wl}-z,text' archive_cmds_need_lc=no hardcode_shlibpath_var=no runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
no_undefined_flag='${wl}-z,text' allow_undefined_flag='${wl}-z,nodefs' archive_cmds_need_lc=no hardcode_shlibpath_var=no hardcode_libdir_flag_spec='${wl}-R,$libdir' hardcode_libdir_separator=':' link_all_deplibs=yes export_dynamic_flag_spec='${wl}-Bexport' runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then archive_cmds='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else archive_cmds='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) archive_cmds='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' hardcode_libdir_flag_spec='-L$libdir' hardcode_shlibpath_var=no ;; *) ld_shlibs=no ;; esac if test x$host_vendor = xsni; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) export_dynamic_flag_spec='${wl}-Blargedynsym' ;; esac fi fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs" >&5 $as_echo "$ld_shlibs" >&6; } test "$ld_shlibs" = no && can_build_shared=no with_gnu_ld=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc" in x|xyes) # Assume -lc should be added archive_cmds_need_lc=yes if test "$enable_shared" = yes && test "$GCC" = yes; then case $archive_cmds in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 $as_echo_n "checking whether -lc should be explicitly linked in... " >&6; } if ${lt_cv_archive_cmds_need_lc+:} false; then : $as_echo_n "(cached) " >&6 else $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl pic_flag=$lt_prog_compiler_pic compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag allow_undefined_flag= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc=no else lt_cv_archive_cmds_need_lc=yes fi allow_undefined_flag=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc" >&5 $as_echo "$lt_cv_archive_cmds_need_lc" >&6; } archive_cmds_need_lc=$lt_cv_archive_cmds_need_lc ;; esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 $as_echo_n "checking dynamic linker characteristics... 
" >&6; } if test "$GCC" = yes; then case $host_os in darwin*) lt_awk_arg="/^libraries:/,/LR/" ;; *) lt_awk_arg="/^libraries:/" ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq="s,=\([A-Za-z]:\),\1,g" ;; *) lt_sed_strip_eq="s,=/,/,g" ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary. lt_tmp_lt_search_path_spec= lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path/$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" else test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS=" "; FS="/|\n";} { lt_foo=""; lt_count=0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo="/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[lt_foo]++; } if (lt_freq[lt_foo] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's,/\([A-Za-z]:\),\1,g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=".so" postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='${libname}${release}${shared_ext}$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test "$host_cpu" = ia64; then # AIX 5 supports IA64 library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line `#! .'. This would cause the generated library to # depend on `.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. 
case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # AIX (on Power*) has no versioning support, so currently we can not hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. if test "$aix_use_runtimelinking" = yes; then # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' else # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='${libname}${release}.a $libname.a' soname_spec='${libname}${release}${shared_ext}$major' fi shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='${libname}${shared_ext}' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=".dll" need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api" ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' library_names_spec='${libname}.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec="$LIB" if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' soname_spec='${libname}${release}${major}$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib" sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
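# Descriptive note (an assumption): the search-path environment variable
# differs per HP-UX ABI (LD_LIBRARY_PATH here and for 64-bit PA below,
# SHLIB_PATH for 32-bit PA), and a binary linked with `+noenvvar' ignores it
# entirely, hence the per-branch shlibpath_overrides_runpath settings.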
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test "$lt_cv_prog_gnu_ld" = yes; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
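# Illustrative note (an assumption, for reference only): the cached test
# above boils down to whether the linker records DT_RUNPATH; a similar check
# can be run by hand on an existing binary with
#   objdump -p ./a.out | grep -E 'RPATH|RUNPATH'
# (a RUNPATH entry means LD_LIBRARY_PATH takes precedence over the path
# embedded at link time, i.e. shlibpath_overrides_runpath=yes).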
  dynamic_linker='GNU/Linux ld.so'
  ;;
netbsdelf*-gnu)
  version_type=linux
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  dynamic_linker='NetBSD ld.elf_so'
  ;;
netbsd*)
  version_type=sunos
  need_lib_prefix=no
  need_version=no
  if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix'
    finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
    dynamic_linker='NetBSD (a.out) ld.so'
  else
    library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
    soname_spec='${libname}${release}${shared_ext}$major'
    dynamic_linker='NetBSD ld.elf_so'
  fi
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  hardcode_into_libs=yes
  ;;
newsos6)
  version_type=linux # correct to gnu/linux during the next big refactor
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=yes
  ;;
*nto* | *qnx*)
  version_type=qnx
  need_lib_prefix=no
  need_version=no
  library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}'
  soname_spec='${libname}${release}${shared_ext}$major'
  shlibpath_var=LD_LIBRARY_PATH
  shlibpath_overrides_runpath=no
  hardcode_into_libs=yes
  dynamic_linker='ldqnx.so'
  ;;
openbsd*)
  version_type=sunos
  sys_lib_dlsearch_path_spec="/usr/lib"
  need_lib_prefix=no
  # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs.
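# Descriptive note (an assumption): the openbsd3.3 special case just below
# only affects need_version; the branch otherwise distinguishes a.out from
# ELF toolchains with the same `echo __ELF__ | $CC -E -' probe used for
# NetBSD above: the token is preprocessed away on ELF systems, so finding it
# unexpanded means an a.out toolchain.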
case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[89] | openbsd2.[89].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" = yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' 
;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action= if test -n "$hardcode_libdir_flag_spec" || test -n "$runpath_var" || test "X$hardcode_automatic" = "Xyes" ; then # We can hardcode non-existent directories. if test "$hardcode_direct" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, )" != no && test "$hardcode_minus_L" != no; then # Linking always hardcodes the temporary library directory. hardcode_action=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. hardcode_action=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action" >&5 $as_echo "$hardcode_action" >&6; } if test "$hardcode_action" = relink || test "$inherit_rpath" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi if test "x$enable_dlopen" != xyes; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen="load_add_on" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen="LoadLibrary" lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen="dlopen" lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. 
Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else lt_cv_dlopen="dyld" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes fi ;; *) ac_fn_c_check_func "$LINENO" "shl_load" "ac_cv_func_shl_load" if test "x$ac_cv_func_shl_load" = xyes; then : lt_cv_dlopen="shl_load" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for shl_load in -ldld" >&5 $as_echo_n "checking for shl_load in -ldld... " >&6; } if ${ac_cv_lib_dld_shl_load+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char shl_load (); int main () { return shl_load (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_shl_load=yes else ac_cv_lib_dld_shl_load=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_shl_load" >&5 $as_echo "$ac_cv_lib_dld_shl_load" >&6; } if test "x$ac_cv_lib_dld_shl_load" = xyes; then : lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld" else ac_fn_c_check_func "$LINENO" "dlopen" "ac_cv_func_dlopen" if test "x$ac_cv_func_dlopen" = xyes; then : lt_cv_dlopen="dlopen" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -ldl" >&5 $as_echo_n "checking for dlopen in -ldl... " >&6; } if ${ac_cv_lib_dl_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldl $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dl_dlopen=yes else ac_cv_lib_dl_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dl_dlopen" >&5 $as_echo "$ac_cv_lib_dl_dlopen" >&6; } if test "x$ac_cv_lib_dl_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dlopen in -lsvld" >&5 $as_echo_n "checking for dlopen in -lsvld... " >&6; } if ${ac_cv_lib_svld_dlopen+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-lsvld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. 
*/ #ifdef __cplusplus extern "C" #endif char dlopen (); int main () { return dlopen (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_svld_dlopen=yes else ac_cv_lib_svld_dlopen=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_svld_dlopen" >&5 $as_echo "$ac_cv_lib_svld_dlopen" >&6; } if test "x$ac_cv_lib_svld_dlopen" = xyes; then : lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld" else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for dld_link in -ldld" >&5 $as_echo_n "checking for dld_link in -ldld... " >&6; } if ${ac_cv_lib_dld_dld_link+:} false; then : $as_echo_n "(cached) " >&6 else ac_check_lib_save_LIBS=$LIBS LIBS="-ldld $LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char dld_link (); int main () { return dld_link (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : ac_cv_lib_dld_dld_link=yes else ac_cv_lib_dld_dld_link=no fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS=$ac_check_lib_save_LIBS fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_cv_lib_dld_dld_link" >&5 $as_echo "$ac_cv_lib_dld_dld_link" >&6; } if test "x$ac_cv_lib_dld_dld_link" = xyes; then : lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld" fi fi fi fi fi fi ;; esac if test "x$lt_cv_dlopen" != xno; then enable_dlopen=yes else enable_dlopen=no fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS="$CPPFLAGS" test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS="$LDFLAGS" wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS="$LIBS" LIBS="$lt_cv_dlopen_libs $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a program can dlopen itself" >&5 $as_echo_n "checking whether a program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. 
*/ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self=no ;; esac else : # compilation failed lt_cv_dlopen_self=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self" >&5 $as_echo "$lt_cv_dlopen_self" >&6; } if test "x$lt_cv_dlopen_self" = xyes; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether a statically linked program can dlopen itself" >&5 $as_echo_n "checking whether a statically linked program can dlopen itself... " >&6; } if ${lt_cv_dlopen_self_static+:} false; then : $as_echo_n "(cached) " >&6 else if test "$cross_compiling" = yes; then : lt_cv_dlopen_self_static=cross else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF #line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; } _LT_EOF if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5 (eval $ac_link) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&5 2>/dev/null lt_status=$? 
case x$lt_status in x$lt_dlno_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlneed_uscore) lt_cv_dlopen_self_static=yes ;; x$lt_dlunknown|x*) lt_cv_dlopen_self_static=no ;; esac else : # compilation failed lt_cv_dlopen_self_static=no fi fi rm -fr conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_dlopen_self_static" >&5 $as_echo "$lt_cv_dlopen_self_static" >&6; } fi CPPFLAGS="$save_CPPFLAGS" LDFLAGS="$save_LDFLAGS" LIBS="$save_LIBS" ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi striplib= old_striplib= { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether stripping libraries is possible" >&5 $as_echo_n "checking whether stripping libraries is possible... " >&6; } if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" test -z "$striplib" && striplib="$STRIP --strip-unneeded" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else # FIXME - insert some real tests, host_os isn't really good enough case $host_os in darwin*) if test -n "$STRIP" ; then striplib="$STRIP -x" old_striplib="$STRIP -S" { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } ;; esac fi # Report which library types will actually be built { $as_echo "$as_me:${as_lineno-$LINENO}: checking if libtool supports shared libraries" >&5 $as_echo_n "checking if libtool supports shared libraries... " >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: result: $can_build_shared" >&5 $as_echo "$can_build_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build shared libraries" >&5 $as_echo_n "checking whether to build shared libraries... " >&6; } test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[4-9]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_shared" >&5 $as_echo "$enable_shared" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to build static libraries" >&5 $as_echo_n "checking whether to build static libraries... " >&6; } # Make sure either enable_shared or enable_static is yes. 
test "$enable_shared" = yes || enable_static=yes { $as_echo "$as_me:${as_lineno-$LINENO}: result: $enable_static" >&5 $as_echo "$enable_static" >&6; } fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu CC="$lt_save_CC" if test -n "$CXX" && ( test "X$CXX" != "Xno" && ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || (test "X$CXX" != "Xg++"))) ; then ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to run the C++ preprocessor" >&5 $as_echo_n "checking how to run the C++ preprocessor... " >&6; } if test -z "$CXXCPP"; then if ${ac_cv_prog_CXXCPP+:} false; then : $as_echo_n "(cached) " >&6 else # Double quotes because CXXCPP needs to be expanded for CXXCPP in "$CXX -E" "/lib/cpp" do ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : break fi done ac_cv_prog_CXXCPP=$CXXCPP fi CXXCPP=$ac_cv_prog_CXXCPP else ac_cv_prog_CXXCPP=$CXXCPP fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $CXXCPP" >&5 $as_echo "$CXXCPP" >&6; } ac_preproc_ok=false for ac_cxx_preproc_warn_flag in '' yes do # Use a header file that comes with gcc, so configuring glibc # with a fresh cross-compiler works. # Prefer to if __STDC__ is defined, since # exists even on freestanding compilers. # On the NeXT, cc -E runs the code through the compiler's parser, # not just through cpp. "Syntax error" is here to catch this case. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #ifdef __STDC__ # include #else # include #endif Syntax error _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : else # Broken: fails on valid input. continue fi rm -f conftest.err conftest.i conftest.$ac_ext # OK, works on sane cases. Now check whether nonexistent headers # can be detected and how. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include _ACEOF if ac_fn_cxx_try_cpp "$LINENO"; then : # Broken: success on invalid input. continue else # Passes both tests. ac_preproc_ok=: break fi rm -f conftest.err conftest.i conftest.$ac_ext done # Because of `break', _AC_PREPROC_IFELSE's cleaning code was skipped. 
rm -f conftest.i conftest.err conftest.$ac_ext if $ac_preproc_ok; then : else { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "C++ preprocessor \"$CXXCPP\" fails sanity check See \`config.log' for more details" "$LINENO" 5; } fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu else _lt_caught_CXX_error=yes fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu archive_cmds_need_lc_CXX=no allow_undefined_flag_CXX= always_export_symbols_CXX=no archive_expsym_cmds_CXX= compiler_needs_object_CXX=no export_dynamic_flag_spec_CXX= hardcode_direct_CXX=no hardcode_direct_absolute_CXX=no hardcode_libdir_flag_spec_CXX= hardcode_libdir_separator_CXX= hardcode_minus_L_CXX=no hardcode_shlibpath_var_CXX=unsupported hardcode_automatic_CXX=no inherit_rpath_CXX=no module_cmds_CXX= module_expsym_cmds_CXX= link_all_deplibs_CXX=unknown old_archive_cmds_CXX=$old_archive_cmds reload_flag_CXX=$reload_flag reload_cmds_CXX=$reload_cmds no_undefined_flag_CXX= whole_archive_flag_spec_CXX= enable_shared_with_static_runtimes_CXX=no # Source file extension for C++ test sources. ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o objext_CXX=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_caught_CXX_error" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC # save warnings/boilerplate of simple test code ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* # Allow CC to be a program name with arguments. 
lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC compiler_CXX=$CC for cc_temp in $compiler""; do case $cc_temp in compile | *[\\/]compile | ccache | *[\\/]ccache ) ;; distcc | *[\\/]distcc | purify | *[\\/]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately if test "$GXX" = yes; then lt_prog_compiler_no_builtin_flag_CXX=' -fno-builtin' else lt_prog_compiler_no_builtin_flag_CXX= fi if test "$GXX" = yes; then # Set up default GNU C++ configuration # Check whether --with-gnu-ld was given. if test "${with_gnu_ld+set}" = set; then : withval=$with_gnu_ld; test "$withval" = no || with_gnu_ld=yes else with_gnu_ld=no fi ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for ld used by $CC" >&5 $as_echo_n "checking for ld used by $CC... " >&6; } case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [\\/]* | ?:[\\/]*) re_direlt='/[^/][^/]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac elif test "$with_gnu_ld" = yes; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking for GNU ld" >&5 $as_echo_n "checking for GNU ld... " >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: checking for non-GNU ld" >&5 $as_echo_n "checking for non-GNU ld... " >&6; } fi if ${lt_cv_path_LD+:} false; then : $as_echo_n "(cached) " >&6 else if test -z "$LD"; then lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD="$ac_dir/$ac_prog" # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &5 $as_echo "$LD" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -z "$LD" && as_fn_error $? "no acceptable ld found in \$PATH" "$LINENO" 5 { $as_echo "$as_me:${as_lineno-$LINENO}: checking if the linker ($LD) is GNU ld" >&5 $as_echo_n "checking if the linker ($LD) is GNU ld... " >&6; } if ${lt_cv_prog_gnu_ld+:} false; then : $as_echo_n "(cached) " >&6 else # I'd rather use --version here, but apparently some GNU lds only accept -v. 
case `$LD -v 2>&1 &5 $as_echo "$lt_cv_prog_gnu_ld" >&6; } with_gnu_ld=$lt_cv_prog_gnu_ld # Check if GNU C++ uses GNU ld as the underlying linker, since the # archiving commands below assume that GNU ld is being used. if test "$with_gnu_ld" = yes; then archive_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) wlarc='${wl}' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else whole_archive_flag_spec_CXX= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } ld_shlibs_CXX=yes case $host_os in aix3*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aix[4-9]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[23]|aix4.[23].*|aix[5-9]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
archive_cmds_CXX='' hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes file_list_spec_CXX='${wl}-f,' if test "$GXX" = yes; then case $host_os in aix4.[012]|aix4.[012].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 hardcode_direct_CXX=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking hardcode_minus_L_CXX=yes hardcode_libdir_flag_spec_CXX='-L$libdir' hardcode_libdir_separator_CXX= fi esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi export_dynamic_flag_spec_CXX='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to # export. always_export_symbols_CXX=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. allow_undefined_flag_CXX='-berok' # Determine the default libpath from the value encoded in an empty # executable. if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath__CXX+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" archive_expsym_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then hardcode_libdir_flag_spec_CXX='${wl}-R $libdir:/usr/lib:/lib' allow_undefined_flag_CXX="-z nodefs" archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. 
if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else if ${lt_cv_aix_libpath__CXX+:} false; then : $as_echo_n "(cached) " >&6 else cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : lt_aix_libpath_sed=' /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }' lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test -z "$lt_cv_aix_libpath__CXX"; then lt_cv_aix_libpath__CXX="/usr/lib:/lib" fi fi aix_libpath=$lt_cv_aix_libpath__CXX fi hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. no_undefined_flag_CXX=' ${wl}-bernotok' allow_undefined_flag_CXX=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives whole_archive_flag_spec_CXX='$convenience' fi archive_cmds_need_lc_CXX=yes # This is similar to how AIX traditionally builds its shared # libraries. archive_expsym_cmds_CXX="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then allow_undefined_flag_CXX=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME archive_cmds_CXX='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else ld_shlibs_CXX=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl*) # Native MSVC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. hardcode_libdir_flag_spec_CXX=' ' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=yes file_list_spec_CXX='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true' enable_shared_with_static_runtimes_CXX=yes # Don't use ranlib old_postinstall_cmds_CXX='chmod 644 $oldlib' postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ func_to_tool_file "$lt_outputfile"~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless, # as there is no search path for DLLs. hardcode_libdir_flag_spec_CXX='-L$libdir' export_dynamic_flag_spec_CXX='${wl}--export-all-symbols' allow_undefined_flag_CXX=unsupported always_export_symbols_CXX=no enable_shared_with_static_runtimes_CXX=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else ld_shlibs_CXX=no fi ;; esac ;; darwin* | rhapsody*) archive_cmds_need_lc_CXX=no hardcode_direct_CXX=no hardcode_automatic_CXX=yes hardcode_shlibpath_var_CXX=unsupported if test "$lt_cv_ld_force_load" = "yes"; then whole_archive_flag_spec_CXX='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' else whole_archive_flag_spec_CXX='' fi link_all_deplibs_CXX=yes allow_undefined_flag_CXX="$_lt_dar_allow_undefined" case $cc_basename in ifort*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test "$_lt_dar_can_shared" = "yes"; then output_verbose_link_cmd=func_echo_all archive_cmds_CXX="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" module_cmds_CXX="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}" archive_expsym_cmds_CXX="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" module_expsym_cmds_CXX="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" if test "$lt_cv_apple_cc_single_mod" != "yes"; then archive_cmds_CXX="\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname 
\$verstring${_lt_dsymutil}" archive_expsym_cmds_CXX="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dar_export_syms}${_lt_dsymutil}" fi else ld_shlibs_CXX=no fi ;; dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF ld_shlibs_CXX=no ;; freebsd-elf*) archive_cmds_need_lc_CXX=no ;; freebsd* | dragonfly*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions ld_shlibs_CXX=yes ;; gnu*) ;; haiku*) archive_cmds_CXX='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' link_all_deplibs_CXX=yes ;; hpux9*) hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' hardcode_libdir_separator_CXX=: export_dynamic_flag_spec_CXX='${wl}-E' hardcode_direct_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) archive_cmds_CXX='$RM $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; hpux10*|hpux11*) if test $with_gnu_ld = no; then hardcode_libdir_flag_spec_CXX='${wl}+b ${wl}$libdir' hardcode_libdir_separator_CXX=: case $host_cpu in hppa*64*|ia64*) ;; *) export_dynamic_flag_spec_CXX='${wl}-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no ;; *) hardcode_direct_CXX=yes hardcode_direct_absolute_CXX=yes hardcode_minus_L_CXX=yes # Not in the search PATH, # but as the default # location of the library. 
;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; aCC*) case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then if test $with_gnu_ld = no; then case $host_cpu in hppa*64*) archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; interix[3-9]*) hardcode_direct_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' export_dynamic_flag_spec_CXX='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. archive_cmds_CXX='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' archive_expsym_cmds_CXX='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ archive_cmds_CXX='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
old_archive_cmds_CXX='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test "$GXX" = yes; then if test "$with_gnu_ld" = no; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib' fi fi link_all_deplibs_CXX=yes ;; esac hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator_CXX=: inherit_rpath_CXX=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' archive_expsym_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. 
case `$CC -V 2>&1` in *"Version 7."*) archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac archive_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; esac archive_cmds_need_lc_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' whole_archive_flag_spec_CXX='${wl}--whole-archive$convenience ${wl}--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [1-5].* | *pgcpp\ [1-5].*) prelink_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' old_archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' archive_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' archive_expsym_cmds_CXX='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='${wl}--rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' whole_archive_flag_spec_CXX='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' ;; cxx*) # Compaq C++ archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' archive_expsym_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH hardcode_libdir_flag_spec_CXX='-rpath $libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
# # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' export_dynamic_flag_spec_CXX='${wl}--export-dynamic' archive_cmds_CXX='$CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then archive_expsym_cmds_CXX='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' hardcode_libdir_flag_spec_CXX='-R$libdir' whole_archive_flag_spec_CXX='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' compiler_needs_object_CXX=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; m88k*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then archive_cmds_CXX='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) ld_shlibs_CXX=yes ;; openbsd2*) # C++ shared libraries are fairly broken ld_shlibs_CXX=no ;; openbsd*) if test -f /usr/libexec/ld.so; then hardcode_direct_CXX=yes hardcode_shlibpath_var_CXX=no hardcode_direct_absolute_CXX=yes archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then archive_expsym_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' export_dynamic_flag_spec_CXX='${wl}-E' whole_archive_flag_spec_CXX="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else ld_shlibs_CXX=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. archive_cmds_CXX='tempext=`echo $shared_ext | $SED -e '\''s/\([^()0-9A-Za-z{}]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath,$libdir' hardcode_libdir_separator_CXX=: # Archives containing C++ object files must be created using # the KAI C++ compiler. 
case $host in osf3*) old_archive_cmds_CXX='$CC -Bstatic -o $oldlib $oldobjs' ;; *) old_archive_cmds_CXX='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; cxx*) case $host in osf3*) allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && func_echo_all "${wl}-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' ;; *) allow_undefined_flag_CXX=' -expect_unresolved \*' archive_cmds_CXX='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' archive_expsym_cmds_CXX='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname ${wl}-input ${wl}$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~ $RM $lib.exp' hardcode_libdir_flag_spec_CXX='-rpath $libdir' ;; esac hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes && test "$with_gnu_ld" = no; then allow_undefined_flag_CXX=' ${wl}-expect_unresolved ${wl}\*' case $host in osf3*) archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; *) archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; esac hardcode_libdir_flag_spec_CXX='${wl}-rpath ${wl}$libdir' hardcode_libdir_separator_CXX=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support ld_shlibs_CXX=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ archive_cmds_need_lc_CXX=yes no_undefined_flag_CXX=' -zdefs' archive_cmds_CXX='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' hardcode_libdir_flag_spec_CXX='-R$libdir' hardcode_shlibpath_var_CXX=no case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) whole_archive_flag_spec_CXX='-z allextract$convenience -z defaultextract' ;; esac link_all_deplibs_CXX=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. old_archive_cmds_CXX='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler archive_cmds_CXX='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. old_archive_cmds_CXX='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test "$GXX" = yes && test "$with_gnu_ld" = no; then no_undefined_flag_CXX=' ${wl}-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require `-G' NOT `-shared' on this # platform. 
archive_cmds_CXX='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi hardcode_libdir_flag_spec_CXX='${wl}-R $wl$libdir' case $host_os in solaris2.[0-5] | solaris2.[0-5].*) ;; *) whole_archive_flag_spec_CXX='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[01].[10]* | unixware7* | sco3.2v5.0.[024]*) no_undefined_flag_CXX='${wl}-z,text' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) archive_cmds_CXX='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
no_undefined_flag_CXX='${wl}-z,text' allow_undefined_flag_CXX='${wl}-z,nodefs' archive_cmds_need_lc_CXX=no hardcode_shlibpath_var_CXX=no hardcode_libdir_flag_spec_CXX='${wl}-R,$libdir' hardcode_libdir_separator_CXX=':' link_all_deplibs_CXX=yes export_dynamic_flag_spec_CXX='${wl}-Bexport' runpath_var='LD_RUN_PATH' case $cc_basename in CC*) archive_cmds_CXX='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' old_archive_cmds_CXX='$CC -Tprelink_objects $oldobjs~ '"$old_archive_cmds_CXX" reload_cmds_CXX='$CC -Tprelink_objects $reload_objs~ '"$reload_cmds_CXX" ;; *) archive_cmds_CXX='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' archive_expsym_cmds_CXX='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac ;; vxworks*) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; *) # FIXME: insert proper C++ library support ld_shlibs_CXX=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 $as_echo "$ld_shlibs_CXX" >&6; } test "$ld_shlibs_CXX" = no && can_build_shared=no GCC_CXX="$GXX" LD_CXX="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... # Dependencies to place before and after the object being linked: predep_objects_CXX= postdep_objects_CXX= predeps_CXX= postdeps_CXX= compiler_lib_search_path_CXX= cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; }; then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case ${prev}${p} in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. if test $p = "-L" || test $p = "-R"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test "$pre_test_object_deps_done" = no; then case ${prev} in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. 
if test -z "$compiler_lib_search_path_CXX"; then compiler_lib_search_path_CXX="${prev}${p}" else compiler_lib_search_path_CXX="${compiler_lib_search_path_CXX} ${prev}${p}" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$postdeps_CXX"; then postdeps_CXX="${prev}${p}" else postdeps_CXX="${postdeps_CXX} ${prev}${p}" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test "$pre_test_object_deps_done" = no; then if test -z "$predep_objects_CXX"; then predep_objects_CXX="$p" else predep_objects_CXX="$predep_objects_CXX $p" fi else if test -z "$postdep_objects_CXX"; then postdep_objects_CXX="$p" else postdep_objects_CXX="$postdep_objects_CXX $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. rm -f a.out a.exe else echo "libtool.m4: error: problem compiling CXX test program" fi $RM -f confest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken case $host_os in interix[3-9]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. predep_objects_CXX= postdep_objects_CXX= postdeps_CXX= ;; linux*) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac if test "$solaris_use_stlport4" != yes; then postdeps_CXX='-library=Cstd -library=Crun' fi ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac # Adding this requires a known-good setup of shared libraries for # Sun compiler versions before 5.6, else PIC objects from an old # archive will be linked into the output, leading to subtle bugs. if test "$solaris_use_stlport4" != yes; then postdeps_CXX='-library=Cstd -library=Crun' fi ;; esac ;; esac case " $postdeps_CXX " in *" -lc "*) archive_cmds_need_lc_CXX=no ;; esac compiler_lib_search_dirs_CXX= if test -n "${compiler_lib_search_path_CXX}"; then compiler_lib_search_dirs_CXX=`echo " ${compiler_lib_search_path_CXX}" | ${SED} -e 's! -L! !g' -e 's!^ !!'` fi lt_prog_compiler_wl_CXX= lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX= # C++ specific cases for pic, static, wl, etc. if test "$GXX" = yes; then lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support lt_prog_compiler_pic_CXX='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. lt_prog_compiler_pic_CXX='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. 
;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries lt_prog_compiler_pic_CXX='-DDLL_EXPORT' ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files lt_prog_compiler_pic_CXX='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all lt_prog_compiler_pic_CXX= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. lt_prog_compiler_static_CXX= ;; interix[3-9]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; sysv4*MP*) if test -d /usr/nec; then lt_prog_compiler_pic_CXX=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; *) lt_prog_compiler_pic_CXX='-fPIC' ;; esac else case $host_os in aix[4-9]*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor lt_prog_compiler_static_CXX='-Bstatic' else lt_prog_compiler_static_CXX='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, CXX)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). lt_prog_compiler_pic_CXX='-DDLL_EXPORT' ;; dgux*) case $cc_basename in ec++*) lt_prog_compiler_pic_CXX='-KPIC' ;; ghcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; freebsd* | dragonfly*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' if test "$host_cpu" != ia64; then lt_prog_compiler_pic_CXX='+Z' fi ;; aCC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='${wl}-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) lt_prog_compiler_pic_CXX='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_static_CXX='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in KCC*) # KAI C++ Compiler lt_prog_compiler_wl_CXX='--backend -Wl,' lt_prog_compiler_pic_CXX='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64 which still supported -KPIC. lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. 
lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fPIC' lt_prog_compiler_static_CXX='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-fpic' lt_prog_compiler_static_CXX='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; xlc* | xlC* | bgxl[cC]* | mpixl[cC]*) # IBM XL 8.0, 9.0 on PPC and BlueGene lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-qpic' lt_prog_compiler_static_CXX='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) lt_prog_compiler_pic_CXX='-W c,exportall' ;; *) ;; esac ;; netbsd* | netbsdelf*-gnu) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. lt_prog_compiler_pic_CXX='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) lt_prog_compiler_wl_CXX='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 lt_prog_compiler_pic_CXX='-pic' ;; cxx*) # Digital/Compaq C++ lt_prog_compiler_wl_CXX='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. lt_prog_compiler_pic_CXX= lt_prog_compiler_static_CXX='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' lt_prog_compiler_wl_CXX='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler lt_prog_compiler_pic_CXX='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x lt_prog_compiler_pic_CXX='-pic' lt_prog_compiler_static_CXX='-Bstatic' ;; lcc*) # Lucid lt_prog_compiler_pic_CXX='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) lt_prog_compiler_wl_CXX='-Wl,' lt_prog_compiler_pic_CXX='-KPIC' lt_prog_compiler_static_CXX='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 lt_prog_compiler_pic_CXX='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) lt_prog_compiler_can_build_shared_CXX=no ;; esac fi case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) lt_prog_compiler_pic_CXX= ;; *) lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC" ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5 $as_echo_n "checking for $compiler option to produce PIC... " >&6; } if ${lt_cv_prog_compiler_pic_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5 $as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; } lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX # # Check to make sure the PIC flag actually works. # if test -n "$lt_prog_compiler_pic_CXX"; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works" >&5 $as_echo_n "checking if $compiler PIC flag $lt_prog_compiler_pic_CXX works... 
" >&6; } if ${lt_cv_prog_compiler_pic_works_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_pic_works_CXX=no ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$lt_prog_compiler_pic_CXX -DPIC" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_pic_works_CXX=yes fi fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_works_CXX" >&5 $as_echo "$lt_cv_prog_compiler_pic_works_CXX" >&6; } if test x"$lt_cv_prog_compiler_pic_works_CXX" = xyes; then case $lt_prog_compiler_pic_CXX in "" | " "*) ;; *) lt_prog_compiler_pic_CXX=" $lt_prog_compiler_pic_CXX" ;; esac else lt_prog_compiler_pic_CXX= lt_prog_compiler_can_build_shared_CXX=no fi fi # # Check to make sure the static flag actually works. # wl=$lt_prog_compiler_wl_CXX eval lt_tmp_static_flag=\"$lt_prog_compiler_static_CXX\" { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler static flag $lt_tmp_static_flag works" >&5 $as_echo_n "checking if $compiler static flag $lt_tmp_static_flag works... " >&6; } if ${lt_cv_prog_compiler_static_works_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_static_works_CXX=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $lt_tmp_static_flag" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. cat conftest.err 1>&5 $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then lt_cv_prog_compiler_static_works_CXX=yes fi else lt_cv_prog_compiler_static_works_CXX=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_static_works_CXX" >&5 $as_echo "$lt_cv_prog_compiler_static_works_CXX" >&6; } if test x"$lt_cv_prog_compiler_static_works_CXX" = xyes; then : else lt_prog_compiler_static_CXX= fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... 
" >&6; } if ${lt_cv_prog_compiler_c_o_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. $RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 $as_echo "$lt_cv_prog_compiler_c_o_CXX" >&6; } { $as_echo "$as_me:${as_lineno-$LINENO}: checking if $compiler supports -c -o file.$ac_objext" >&5 $as_echo_n "checking if $compiler supports -c -o file.$ac_objext... " >&6; } if ${lt_cv_prog_compiler_c_o_CXX+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_prog_compiler_c_o_CXX=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [^ ]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&5) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&5 echo "$as_me:$LINENO: \$? = $ac_status" >&5 if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then lt_cv_prog_compiler_c_o_CXX=yes fi fi chmod u+w . 2>&5 $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. 
$RM -r conftest $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_c_o_CXX" >&5 $as_echo "$lt_cv_prog_compiler_c_o_CXX" >&6; } hard_links="nottested" if test "$lt_cv_prog_compiler_c_o_CXX" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user { $as_echo "$as_me:${as_lineno-$LINENO}: checking if we can lock with hard links" >&5 $as_echo_n "checking if we can lock with hard links... " >&6; } hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hard_links" >&5 $as_echo "$hard_links" >&6; } if test "$hard_links" = no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&5 $as_echo "$as_me: WARNING: \`$CC' does not support \`-c -o', so \`make -j' may be unsafe" >&2;} need_locks=warn fi else need_locks=no fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $compiler linker ($LD) supports shared libraries" >&5 $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; } export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*' case $host_os in aix[4-9]*) # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global defined # symbols, whereas GNU nm marks them as "W". if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then export_symbols_cmds_CXX='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else export_symbols_cmds_CXX='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && (substr(\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi ;; pw32*) export_symbols_cmds_CXX="$ltdll_cmds" ;; cygwin* | mingw* | cegcc*) case $cc_basename in cl*) exclude_expsyms_CXX='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols' exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname' ;; esac ;; linux* | k*bsd*-gnu | gnu*) link_all_deplibs_CXX=no ;; *) export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5 $as_echo "$ld_shlibs_CXX" >&6; } test "$ld_shlibs_CXX" = no && can_build_shared=no with_gnu_ld_CXX=$with_gnu_ld # # Do we need to explicitly link libc? # case "x$archive_cmds_need_lc_CXX" in x|xyes) # Assume -lc should be added archive_cmds_need_lc_CXX=yes if test "$enable_shared" = yes && test "$GCC" = yes; then case $archive_cmds_CXX in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. 
If gcc already passes -lc # to ld, don't add -lc before -lgcc. { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc should be explicitly linked in" >&5 $as_echo_n "checking whether -lc should be explicitly linked in... " >&6; } if ${lt_cv_archive_cmds_need_lc_CXX+:} false; then : $as_echo_n "(cached) " >&6 else $RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5 (eval $ac_compile) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$lt_prog_compiler_wl_CXX pic_flag=$lt_prog_compiler_pic_CXX compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$allow_undefined_flag_CXX allow_undefined_flag_CXX= if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1\""; } >&5 (eval $archive_cmds_CXX 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) 2>&5 ac_status=$? $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5 test $ac_status = 0; } then lt_cv_archive_cmds_need_lc_CXX=no else lt_cv_archive_cmds_need_lc_CXX=yes fi allow_undefined_flag_CXX=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_archive_cmds_need_lc_CXX" >&5 $as_echo "$lt_cv_archive_cmds_need_lc_CXX" >&6; } archive_cmds_need_lc_CXX=$lt_cv_archive_cmds_need_lc_CXX ;; esac fi ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: checking dynamic linker characteristics" >&5 $as_echo_n "checking dynamic linker characteristics... " >&6; } library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=".so" postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='${libname}${release}${shared_ext}$major' ;; aix[4-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test "$host_cpu" = ia64; then # AIX 5 supports IA64 library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line `#! .'. This would cause the generated library to # depend on `.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[01] | aix4.[01].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # AIX (on Power*) has no versioning support, so currently we can not hardcode correct # soname into executable. 
Probably we can add versioning support to # collect2, so additional links can be useful in future. if test "$aix_use_runtimelinking" = yes; then # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' else # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. library_names_spec='${libname}${release}.a $libname.a' soname_spec='${libname}${release}${shared_ext}$major' fi shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([^/]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='${libname}${shared_ext}' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[45]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=".dll" need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}' library_names_spec='${libname}.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec="$LIB" if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . 
and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' soname_spec='${libname}${release}${major}$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[23].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[01]* | freebsdelf3.[01]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[2-9]* | freebsdelf3.[2-9]* | \ freebsd4.[0-5] | freebsdelf4.[0-5] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[3-9]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test "$lt_cv_prog_gnu_ld" = yes; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH if ${lt_cv_shlibpath_overrides_runpath+:} false; then : $as_echo_n "(cached) " >&6 else lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$lt_prog_compiler_wl_CXX\"; \ LDFLAGS=\"\$LDFLAGS $hardcode_libdir_flag_spec_CXX\"" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_cxx_try_link "$LINENO"; then : if ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null; then : lt_cv_shlibpath_overrides_runpath=yes fi fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LDFLAGS=$save_LDFLAGS libdir=$save_libdir fi shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \$2)); skip = 1; } { if (!skip) print \$0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd*) version_type=sunos sys_lib_dlsearch_path_spec="/usr/lib" need_lib_prefix=no # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. 
case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[89] | openbsd2.[89].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" = yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi sys_lib_dlsearch_path_spec='/usr/lib' 
;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: $dynamic_linker" >&5 $as_echo "$dynamic_linker" >&6; } test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to hardcode library paths into programs" >&5 $as_echo_n "checking how to hardcode library paths into programs... " >&6; } hardcode_action_CXX= if test -n "$hardcode_libdir_flag_spec_CXX" || test -n "$runpath_var_CXX" || test "X$hardcode_automatic_CXX" = "Xyes" ; then # We can hardcode non-existent directories. if test "$hardcode_direct_CXX" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, CXX)" != no && test "$hardcode_minus_L_CXX" != no; then # Linking always hardcodes the temporary library directory. hardcode_action_CXX=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. hardcode_action_CXX=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. 
hardcode_action_CXX=unsupported fi { $as_echo "$as_me:${as_lineno-$LINENO}: result: $hardcode_action_CXX" >&5 $as_echo "$hardcode_action_CXX" >&6; } if test "$hardcode_action_CXX" = relink || test "$inherit_rpath_CXX" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test "$_lt_caught_CXX_error" != yes ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu ac_config_commands="$ac_config_commands libtool" # Only expand once: # TODO(chandlerc@google.com): Currently we aren't running the Python tests # against the interpreter detected by AM_PATH_PYTHON, and so we condition # HAVE_PYTHON by requiring "python" to be in the PATH, and that interpreter's # version to be >= 2.3. This will allow the scripts to use a "/usr/bin/env" # hashbang. PYTHON= # We *do not* allow the user to specify a python interpreter # Extract the first word of "python", so it can be a program name with args. set dummy python; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_path_PYTHON+:} false; then : $as_echo_n "(cached) " >&6 else case $PYTHON in [\\/]* | ?:[\\/]*) ac_cv_path_PYTHON="$PYTHON" # Let the user override the test with a path. ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_path_PYTHON="$as_dir/$ac_word$ac_exec_ext" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_path_PYTHON" && ac_cv_path_PYTHON=":" ;; esac fi PYTHON=$ac_cv_path_PYTHON if test -n "$PYTHON"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PYTHON" >&5 $as_echo "$PYTHON" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test "$PYTHON" != ":"; then : prog="import sys # split strings by '.' and convert to numeric. Append some zeros # because we need at least 4 digits for the hex conversion. # map returns an iterator in Python 3.0 and a list in 2.x minver = list(map(int, '2.3'.split('.'))) + [0, 0, 0] minverhex = 0 # xrange is not present in Python 3.0 and range returns an iterator for i in list(range(0, 4)): minverhex = (minverhex << 8) + minver[i] sys.exit(sys.hexversion < minverhex)" if { echo "$as_me:$LINENO: $PYTHON -c "$prog"" >&5 ($PYTHON -c "$prog") >&5 2>&5 ac_status=$? echo "$as_me:$LINENO: \$? = $ac_status" >&5 (exit $ac_status); }; then : : else PYTHON=":" fi fi if test "$PYTHON" != ":"; then HAVE_PYTHON_TRUE= HAVE_PYTHON_FALSE='#' else HAVE_PYTHON_TRUE='#' HAVE_PYTHON_FALSE= fi # Configure pthreads. # Check whether --with-pthreads was given. 
if test "${with_pthreads+set}" = set; then : withval=$with_pthreads; with_pthreads=$withval else with_pthreads=check fi have_pthreads=no if test "x$with_pthreads" != "xno"; then : ac_ext=c ac_cpp='$CPP $CPPFLAGS' ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_c_compiler_gnu acx_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" { $as_echo "$as_me:${as_lineno-$LINENO}: checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS" >&5 $as_echo_n "checking for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ /* Override any GCC internal prototype to avoid an error. Use char because int might match the return type of a GCC builtin and then its argument prototype would still apply. */ #ifdef __cplusplus extern "C" #endif char pthread_join (); int main () { return pthread_join (); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : acx_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_ok" >&5 $as_echo "$acx_pthread_ok" >&6; } if test x"$acx_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. acx_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... 
-mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case "${host_cpu}-${host_os}" in *solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) So, # we'll just look for -pthreads and -lpthread first: acx_pthread_flags="-pthreads pthread -mt -pthread $acx_pthread_flags" ;; esac if test x"$acx_pthread_ok" = xno; then for flag in $acx_pthread_flags; do case $flag in none) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work without any flags" >&5 $as_echo_n "checking whether pthreads work without any flags... " >&6; } ;; -*) { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether pthreads work with $flag" >&5 $as_echo_n "checking whether pthreads work with $flag... " >&6; } PTHREAD_CFLAGS="$flag" ;; pthread-config) # Extract the first word of "pthread-config", so it can be a program name with args. set dummy pthread-config; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_acx_pthread_config+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$acx_pthread_config"; then ac_cv_prog_acx_pthread_config="$acx_pthread_config" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_acx_pthread_config="yes" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS test -z "$ac_cv_prog_acx_pthread_config" && ac_cv_prog_acx_pthread_config="no" fi fi acx_pthread_config=$ac_cv_prog_acx_pthread_config if test -n "$acx_pthread_config"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_config" >&5 $as_echo "$acx_pthread_config" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi if test x"$acx_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) { $as_echo "$as_me:${as_lineno-$LINENO}: checking for the pthreads library -l$flag" >&5 $as_echo_n "checking for the pthreads library -l$flag... " >&6; } PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include <pthread.h> int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : acx_pthread_ok=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" { $as_echo "$as_me:${as_lineno-$LINENO}: result: $acx_pthread_ok" >&5 $as_echo "$acx_pthread_ok" >&6; } if test "x$acx_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$acx_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. { $as_echo "$as_me:${as_lineno-$LINENO}: checking for joinable pthread attribute" >&5 $as_echo_n "checking for joinable pthread attribute... " >&6; } attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include <pthread.h> int main () { int attr=$attr; return attr; ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : attr_name=$attr; break fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext done { $as_echo "$as_me:${as_lineno-$LINENO}: result: $attr_name" >&5 $as_echo "$attr_name" >&6; } if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then cat >>confdefs.h <<_ACEOF #define PTHREAD_CREATE_JOINABLE $attr_name _ACEOF fi { $as_echo "$as_me:${as_lineno-$LINENO}: checking if more special flags are required for pthreads" >&5 $as_echo_n "checking if more special flags are required for pthreads... " >&6; } flag=no case "${host_cpu}-${host_os}" in *-aix* | *-freebsd* | *-darwin*) flag="-D_THREAD_SAFE";; *solaris* | *-osf* | *-hpux*) flag="-D_REENTRANT";; esac { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${flag}" >&5 $as_echo "${flag}" >&6; } if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # More AIX lossage: must compile with xlc_r or cc_r if test x"$GCC" != xyes; then for ac_prog in xlc_r cc_r do # Extract the first word of "$ac_prog", so it can be a program name with args. set dummy $ac_prog; ac_word=$2 { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5 $as_echo_n "checking for $ac_word... " >&6; } if ${ac_cv_prog_PTHREAD_CC+:} false; then : $as_echo_n "(cached) " >&6 else if test -n "$PTHREAD_CC"; then ac_cv_prog_PTHREAD_CC="$PTHREAD_CC" # Let the user override the test. else as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for ac_exec_ext in '' $ac_executable_extensions; do if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then ac_cv_prog_PTHREAD_CC="$ac_prog" $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5 break 2 fi done done IFS=$as_save_IFS fi fi PTHREAD_CC=$ac_cv_prog_PTHREAD_CC if test -n "$PTHREAD_CC"; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: $PTHREAD_CC" >&5 $as_echo "$PTHREAD_CC" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi test -n "$PTHREAD_CC" && break done test -n "$PTHREAD_CC" || PTHREAD_CC="${CC}" else PTHREAD_CC=$CC fi # The next part tries to detect GCC inconsistency with -shared on some # architectures and systems.
The problem is that in certain # configurations, when -shared is specified, GCC "forgets" to # internally use various flags which are still necessary. # # Prepare the flags # save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" save_CC="$CC" # Try with the flags determined by the earlier checks. # # -Wl,-z,defs forces link-time symbol resolution, so that the # linking checks with -shared actually have any value # # FIXME: -fPIC is required for -shared on many architectures, # so we specify it here, but the right way would probably be to # properly detect whether it is actually required. CFLAGS="-shared -fPIC -Wl,-z,defs $CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CC="$PTHREAD_CC" # In order not to create several levels of indentation, we test # the value of "$done" until we find the cure or run out of ideas. done="no" # First, make sure the CFLAGS we added are actually accepted by our # compiler. If not (and OS X's ld, for instance, does not accept -z), # then we can't do this test. if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether to check for GCC pthread/shared inconsistencies" >&5 $as_echo_n "checking whether to check for GCC pthread/shared inconsistencies... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ int main () { ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : else done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes ; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } fi fi if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -pthread is sufficient with -shared" >&5 $as_echo_n "checking whether -pthread is sufficient with -shared... " >&6; } cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi # # Linux gcc on some architectures such as mips/mipsel forgets # about -lpthread # if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lpthread fixes that" >&5 $as_echo_n "checking whether -lpthread fixes that... " >&6; } LIBS="-lpthread $PTHREAD_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. 
*/ #include int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PTHREAD_LIBS="-lpthread $PTHREAD_LIBS" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi # # FreeBSD 4.10 gcc forgets to use -lc_r instead of -lc # if test x"$done" = xno; then { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether -lc_r fixes that" >&5 $as_echo_n "checking whether -lc_r fixes that... " >&6; } LIBS="-lc_r $PTHREAD_LIBS $save_LIBS" cat confdefs.h - <<_ACEOF >conftest.$ac_ext /* end confdefs.h. */ #include int main () { pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ; return 0; } _ACEOF if ac_fn_c_try_link "$LINENO"; then : done=yes fi rm -f core conftest.err conftest.$ac_objext \ conftest$ac_exeext conftest.$ac_ext if test "x$done" = xyes; then { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5 $as_echo "yes" >&6; } PTHREAD_LIBS="-lc_r $PTHREAD_LIBS" else { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5 $as_echo "no" >&6; } fi fi if test x"$done" = xno; then # OK, we have run out of ideas { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Impossible to determine how to use pthreads with shared libraries" >&5 $as_echo "$as_me: WARNING: Impossible to determine how to use pthreads with shared libraries" >&2;} # so it's not safe to assume that we may use pthreads acx_pthread_ok=no fi CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" CC="$save_CC" else PTHREAD_CC="$CC" fi # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$acx_pthread_ok" = xyes; then $as_echo "#define HAVE_PTHREAD 1" >>confdefs.h : else acx_pthread_ok=no if test "x$with_pthreads" != "xcheck"; then : { { $as_echo "$as_me:${as_lineno-$LINENO}: error: in \`$ac_pwd':" >&5 $as_echo "$as_me: error: in \`$ac_pwd':" >&2;} as_fn_error $? "--with-pthreads was specified, but unable to be used See \`config.log' for more details" "$LINENO" 5; } fi fi ac_ext=cpp ac_cpp='$CXXCPP $CPPFLAGS' ac_compile='$CXX -c $CXXFLAGS $CPPFLAGS conftest.$ac_ext >&5' ac_link='$CXX -o conftest$ac_exeext $CXXFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5' ac_compiler_gnu=$ac_cv_cxx_compiler_gnu have_pthreads="$acx_pthread_ok" fi if test "x$have_pthreads" = "xyes"; then HAVE_PTHREADS_TRUE= HAVE_PTHREADS_FALSE='#' else HAVE_PTHREADS_TRUE='#' HAVE_PTHREADS_FALSE= fi # TODO(chandlerc@google.com) Check for the necessary system headers. # TODO(chandlerc@google.com) Check the types, structures, and other compiler # and architecture characteristics. # Output the generated files. No further autoconf macros may be used. cat >confcache <<\_ACEOF # This file is a shell script that caches the results of configure # tests run on this system so they can be shared between configure # scripts and configure runs, see configure's option --config-cache. # It is not useful on other systems. If it contains results you don't # want to keep, you may remove or edit it. # # config.status only pays attention to the cache file if you give it # the --recheck option to rerun configure. 
# # `ac_cv_env_foo' variables (set or unset) will be overridden when # loading this file, other *unset* `ac_cv_foo' will be assigned the # following values. _ACEOF # The following way of writing the cache mishandles newlines in values, # but we know of no workaround that is simple, portable, and efficient. # So, we kill variables containing newlines. # Ultrix sh set writes to stderr and can't be redirected directly, # and sets the high bit in the cache file unless we assign to the vars. ( for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do eval ac_val=\$$ac_var case $ac_val in #( *${as_nl}*) case $ac_var in #( *_cv_*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: cache variable $ac_var contains a newline" >&5 $as_echo "$as_me: WARNING: cache variable $ac_var contains a newline" >&2;} ;; esac case $ac_var in #( _ | IFS | as_nl) ;; #( BASH_ARGV | BASH_SOURCE) eval $ac_var= ;; #( *) { eval $ac_var=; unset $ac_var;} ;; esac ;; esac done (set) 2>&1 | case $as_nl`(ac_space=' '; set) 2>&1` in #( *${as_nl}ac_space=\ *) # `set' does not quote correctly, so add quotes: double-quote # substitution turns \\\\ into \\, and sed turns \\ into \. sed -n \ "s/'/'\\\\''/g; s/^\\([_$as_cr_alnum]*_cv_[_$as_cr_alnum]*\\)=\\(.*\\)/\\1='\\2'/p" ;; #( *) # `set' quotes correctly as required by POSIX, so do not add quotes. sed -n "/^[_$as_cr_alnum]*_cv_[_$as_cr_alnum]*=/p" ;; esac | sort ) | sed ' /^ac_cv_env_/b end t clear :clear s/^\([^=]*\)=\(.*[{}].*\)$/test "${\1+set}" = set || &/ t end s/^\([^=]*\)=\(.*\)$/\1=${\1=\2}/ :end' >>confcache if diff "$cache_file" confcache >/dev/null 2>&1; then :; else if test -w "$cache_file"; then if test "x$cache_file" != "x/dev/null"; then { $as_echo "$as_me:${as_lineno-$LINENO}: updating cache $cache_file" >&5 $as_echo "$as_me: updating cache $cache_file" >&6;} if test ! -f "$cache_file" || test -h "$cache_file"; then cat confcache >"$cache_file" else case $cache_file in #( */* | ?:*) mv -f confcache "$cache_file"$$ && mv -f "$cache_file"$$ "$cache_file" ;; #( *) mv -f confcache "$cache_file" ;; esac fi fi else { $as_echo "$as_me:${as_lineno-$LINENO}: not updating unwritable cache $cache_file" >&5 $as_echo "$as_me: not updating unwritable cache $cache_file" >&6;} fi fi rm -f confcache test "x$prefix" = xNONE && prefix=$ac_default_prefix # Let make expand exec_prefix. test "x$exec_prefix" = xNONE && exec_prefix='${prefix}' DEFS=-DHAVE_CONFIG_H ac_libobjs= ac_ltlibobjs= U= for ac_i in : $LIBOBJS; do test "x$ac_i" = x: && continue # 1. Remove the extension, and $U if already installed. ac_script='s/\$U\././;s/\.o$//;s/\.obj$//' ac_i=`$as_echo "$ac_i" | sed "$ac_script"` # 2. Prepend LIBOBJDIR. When used with automake>=1.10 LIBOBJDIR # will be set to the directory where LIBOBJS objects are built. as_fn_append ac_libobjs " \${LIBOBJDIR}$ac_i\$U.$ac_objext" as_fn_append ac_ltlibobjs " \${LIBOBJDIR}$ac_i"'$U.lo' done LIBOBJS=$ac_libobjs LTLIBOBJS=$ac_ltlibobjs if test -n "$EXEEXT"; then am__EXEEXT_TRUE= am__EXEEXT_FALSE='#' else am__EXEEXT_TRUE='#' am__EXEEXT_FALSE= fi if test -z "${AMDEP_TRUE}" && test -z "${AMDEP_FALSE}"; then as_fn_error $? "conditional \"AMDEP\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${am__fastdepCC_TRUE}" && test -z "${am__fastdepCC_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCC\" was never defined. Usually this means the macro was only invoked conditionally." 
"$LINENO" 5 fi if test -z "${am__fastdepCXX_TRUE}" && test -z "${am__fastdepCXX_FALSE}"; then as_fn_error $? "conditional \"am__fastdepCXX\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_PYTHON_TRUE}" && test -z "${HAVE_PYTHON_FALSE}"; then as_fn_error $? "conditional \"HAVE_PYTHON\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi if test -z "${HAVE_PTHREADS_TRUE}" && test -z "${HAVE_PTHREADS_FALSE}"; then as_fn_error $? "conditional \"HAVE_PTHREADS\" was never defined. Usually this means the macro was only invoked conditionally." "$LINENO" 5 fi : "${CONFIG_STATUS=./config.status}" ac_write_fail=0 ac_clean_files_save=$ac_clean_files ac_clean_files="$ac_clean_files $CONFIG_STATUS" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $CONFIG_STATUS" >&5 $as_echo "$as_me: creating $CONFIG_STATUS" >&6;} as_write_fail=0 cat >$CONFIG_STATUS <<_ASEOF || as_write_fail=1 #! $SHELL # Generated by $as_me. # Run this file to recreate the current configuration. # Compiler output produced by configure, useful for debugging # configure, is in config.log if it exists. debug=false ac_cs_recheck=false ac_cs_silent=false SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$CONFIG_STATUS <<\_ASEOF || as_write_fail=1 ## -------------------- ## ## M4sh Initialization. ## ## -------------------- ## # Be more Bourne compatible DUALCASE=1; export DUALCASE # for MKS sh if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then : emulate sh NULLCMD=: # Pre-4.2 versions of Zsh do word splitting on ${1+"$@"}, which # is contrary to our usage. Disable this feature. alias -g '${1+"$@"}'='"$@"' setopt NO_GLOB_SUBST else case `(set -o) 2>/dev/null` in #( *posix*) : set -o posix ;; #( *) : ;; esac fi as_nl=' ' export as_nl # Printing a long string crashes Solaris 7 /usr/bin/printf. as_echo='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo as_echo=$as_echo$as_echo$as_echo$as_echo$as_echo$as_echo # Prefer a ksh shell builtin over an external printf program on Solaris, # but without wasting forks for bash or zsh. if test -z "$BASH_VERSION$ZSH_VERSION" \ && (test "X`print -r -- $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='print -r --' as_echo_n='print -rn --' elif (test "X`printf %s $as_echo`" = "X$as_echo") 2>/dev/null; then as_echo='printf %s\n' as_echo_n='printf %s' else if test "X`(/usr/ucb/echo -n -n $as_echo) 2>/dev/null`" = "X-n $as_echo"; then as_echo_body='eval /usr/ucb/echo -n "$1$as_nl"' as_echo_n='/usr/ucb/echo -n' else as_echo_body='eval expr "X$1" : "X\\(.*\\)"' as_echo_n_body='eval arg=$1; case $arg in #( *"$as_nl"*) expr "X$arg" : "X\\(.*\\)$as_nl"; arg=`expr "X$arg" : ".*$as_nl\\(.*\\)"`;; esac; expr "X$arg" : "X\\(.*\\)" | tr -d "$as_nl" ' export as_echo_n_body as_echo_n='sh -c $as_echo_n_body as_echo' fi export as_echo_body as_echo='sh -c $as_echo_body as_echo' fi # The user is always right. if test "${PATH_SEPARATOR+set}" != set; then PATH_SEPARATOR=: (PATH='/bin;/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 && { (PATH='/bin:/bin'; FPATH=$PATH; sh -c :) >/dev/null 2>&1 || PATH_SEPARATOR=';' } fi # IFS # We need space, tab and new line, in precisely that order. Quoting is # there to prevent editors from complaining about space-tab. # (If _AS_PATH_WALK were called with IFS unset, it would disable word # splitting by setting IFS to empty value.) 
IFS=" "" $as_nl" # Find who we are. Look in the path if we contain no directory separator. as_myself= case $0 in #(( *[\\/]* ) as_myself=$0 ;; *) as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. test -r "$as_dir/$0" && as_myself=$as_dir/$0 && break done IFS=$as_save_IFS ;; esac # We did not find ourselves, most probably we were run as `sh COMMAND' # in which case we are not to be found in the path. if test "x$as_myself" = x; then as_myself=$0 fi if test ! -f "$as_myself"; then $as_echo "$as_myself: error: cannot find myself; rerun with an absolute file name" >&2 exit 1 fi # Unset variables that we do not need and which cause bugs (e.g. in # pre-3.0 UWIN ksh). But do not cause bugs in bash 2.01; the "|| exit 1" # suppresses any "Segmentation fault" message there. '((' could # trigger a bug in pdksh 5.2.14. for as_var in BASH_ENV ENV MAIL MAILPATH do eval test x\${$as_var+set} = xset \ && ( (unset $as_var) || exit 1) >/dev/null 2>&1 && unset $as_var || : done PS1='$ ' PS2='> ' PS4='+ ' # NLS nuisances. LC_ALL=C export LC_ALL LANGUAGE=C export LANGUAGE # CDPATH. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH # as_fn_error STATUS ERROR [LINENO LOG_FD] # ---------------------------------------- # Output "`basename $0`: error: ERROR" to stderr. If LINENO and LOG_FD are # provided, also output the error to LOG_FD, referencing LINENO. Then exit the # script with STATUS, using 1 if that was 0. as_fn_error () { as_status=$1; test $as_status -eq 0 && as_status=1 if test "$4"; then as_lineno=${as_lineno-"$3"} as_lineno_stack=as_lineno_stack=$as_lineno_stack $as_echo "$as_me:${as_lineno-$LINENO}: error: $2" >&$4 fi $as_echo "$as_me: error: $2" >&2 as_fn_exit $as_status } # as_fn_error # as_fn_set_status STATUS # ----------------------- # Set $? to STATUS, without forking. as_fn_set_status () { return $1 } # as_fn_set_status # as_fn_exit STATUS # ----------------- # Exit the shell with STATUS, even in a "trap 0" or "set -e" context. as_fn_exit () { set +e as_fn_set_status $1 exit $1 } # as_fn_exit # as_fn_unset VAR # --------------- # Portably unset VAR. as_fn_unset () { { eval $1=; unset $1;} } as_unset=as_fn_unset # as_fn_append VAR VALUE # ---------------------- # Append the text in VALUE to the end of the definition contained in VAR. Take # advantage of any shell optimizations that allow amortized linear growth over # repeated appends, instead of the typical quadratic growth present in naive # implementations. if (eval "as_var=1; as_var+=2; test x\$as_var = x12") 2>/dev/null; then : eval 'as_fn_append () { eval $1+=\$2 }' else as_fn_append () { eval $1=\$$1\$2 } fi # as_fn_append # as_fn_arith ARG... # ------------------ # Perform arithmetic evaluation on the ARGs, and store the result in the # global $as_val. Take advantage of shells that can avoid forks. The arguments # must be portable across $(()) and expr. if (eval "test \$(( 1 + 1 )) = 2") 2>/dev/null; then : eval 'as_fn_arith () { as_val=$(( $* )) }' else as_fn_arith () { as_val=`expr "$@" || test $? 
-eq 1` } fi # as_fn_arith if expr a : '\(a\)' >/dev/null 2>&1 && test "X`expr 00001 : '.*\(...\)'`" = X001; then as_expr=expr else as_expr=false fi if (basename -- /) >/dev/null 2>&1 && test "X`basename -- / 2>&1`" = "X/"; then as_basename=basename else as_basename=false fi if (as_dir=`dirname -- /` && test "X$as_dir" = X/) >/dev/null 2>&1; then as_dirname=dirname else as_dirname=false fi as_me=`$as_basename -- "$0" || $as_expr X/"$0" : '.*/\([^/][^/]*\)/*$' \| \ X"$0" : 'X\(//\)$' \| \ X"$0" : 'X\(/\)' \| . 2>/dev/null || $as_echo X/"$0" | sed '/^.*\/\([^/][^/]*\)\/*$/{ s//\1/ q } /^X\/\(\/\/\)$/{ s//\1/ q } /^X\/\(\/\).*/{ s//\1/ q } s/.*/./; q'` # Avoid depending upon Character Ranges. as_cr_letters='abcdefghijklmnopqrstuvwxyz' as_cr_LETTERS='ABCDEFGHIJKLMNOPQRSTUVWXYZ' as_cr_Letters=$as_cr_letters$as_cr_LETTERS as_cr_digits='0123456789' as_cr_alnum=$as_cr_Letters$as_cr_digits ECHO_C= ECHO_N= ECHO_T= case `echo -n x` in #((((( -n*) case `echo 'xy\c'` in *c*) ECHO_T=' ';; # ECHO_T is single tab character. xy) ECHO_C='\c';; *) echo `echo ksh88 bug on AIX 6.1` > /dev/null ECHO_T=' ';; esac;; *) ECHO_N='-n';; esac rm -f conf$$ conf$$.exe conf$$.file if test -d conf$$.dir; then rm -f conf$$.dir/conf$$.file else rm -f conf$$.dir mkdir conf$$.dir 2>/dev/null fi if (echo >conf$$.file) 2>/dev/null; then if ln -s conf$$.file conf$$ 2>/dev/null; then as_ln_s='ln -s' # ... but there are two gotchas: # 1) On MSYS, both `ln -s file dir' and `ln file dir' fail. # 2) DJGPP < 2.04 has no symlinks; `ln -s' creates a wrapper executable. # In both cases, we have to default to `cp -p'. ln -s conf$$.file conf$$.dir 2>/dev/null && test ! -f conf$$.exe || as_ln_s='cp -p' elif ln conf$$.file conf$$ 2>/dev/null; then as_ln_s=ln else as_ln_s='cp -p' fi else as_ln_s='cp -p' fi rm -f conf$$ conf$$.exe conf$$.dir/conf$$.file conf$$.file rmdir conf$$.dir 2>/dev/null # as_fn_mkdir_p # ------------- # Create "$as_dir" as a directory, including parents if necessary. as_fn_mkdir_p () { case $as_dir in #( -*) as_dir=./$as_dir;; esac test -d "$as_dir" || eval $as_mkdir_p || { as_dirs= while :; do case $as_dir in #( *\'*) as_qdir=`$as_echo "$as_dir" | sed "s/'/'\\\\\\\\''/g"`;; #'( *) as_qdir=$as_dir;; esac as_dirs="'$as_qdir' $as_dirs" as_dir=`$as_dirname -- "$as_dir" || $as_expr X"$as_dir" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$as_dir" : 'X\(//\)[^/]' \| \ X"$as_dir" : 'X\(//\)$' \| \ X"$as_dir" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$as_dir" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` test -d "$as_dir" && break done test -z "$as_dirs" || eval "mkdir $as_dirs" } || test -d "$as_dir" || as_fn_error $? "cannot create directory $as_dir" } # as_fn_mkdir_p if mkdir -p . 2>/dev/null; then as_mkdir_p='mkdir -p "$as_dir"' else test -d ./-p && rmdir ./-p as_mkdir_p=false fi if test -x / >/dev/null 2>&1; then as_test_x='test -x' else if ls -dL / >/dev/null 2>&1; then as_ls_L_option=L else as_ls_L_option= fi as_test_x=' eval sh -c '\'' if test -d "$1"; then test -d "$1/."; else case $1 in #( -*)set "./$1";; esac; case `ls -ld'$as_ls_L_option' "$1" 2>/dev/null` in #(( ???[sx]*):;;*)false;;esac;fi '\'' sh ' fi as_executable_p=$as_test_x # Sed expression to map a string onto a valid CPP name. as_tr_cpp="eval sed 'y%*$as_cr_letters%P$as_cr_LETTERS%;s%[^_$as_cr_alnum]%_%g'" # Sed expression to map a string onto a valid variable name. 
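# The as_ln_s selection above is done by experiment rather than by platform
# name: try a real symlink in the build directory, fall back to a hard link,
# and finally to "cp -p" where neither works.  A minimal sketch of that
# probe pattern, with the MSYS/DJGPP ".exe" gotcha handling stripped out;
# the helper name pick_ln_s is made up for the sketch, and a writable
# current directory is assumed.
pick_ln_s () {
  rm -f conf_probe conf_probe.target
  : > conf_probe.target
  if ln -s conf_probe.target conf_probe 2>/dev/null; then
    ln_s='ln -s'        # real symlinks are available
  elif ln conf_probe.target conf_probe 2>/dev/null; then
    ln_s='ln'           # no symlinks, but hard links work
  else
    ln_s='cp -p'        # last resort: copy, preserving timestamps
  fi
  rm -f conf_probe conf_probe.target
  echo "$ln_s"
}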
as_tr_sh="eval sed 'y%*+%pp%;s%[^_$as_cr_alnum]%_%g'" exec 6>&1 ## ----------------------------------- ## ## Main body of $CONFIG_STATUS script. ## ## ----------------------------------- ## _ASEOF test $as_write_fail = 0 && chmod +x $CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Save the log message, to keep $0 and so on meaningful, and to # report actual input values of CONFIG_FILES etc. instead of their # values after options handling. ac_log=" This file was extended by Google C++ Testing Framework $as_me 1.7.0, which was generated by GNU Autoconf 2.68. Invocation command line was CONFIG_FILES = $CONFIG_FILES CONFIG_HEADERS = $CONFIG_HEADERS CONFIG_LINKS = $CONFIG_LINKS CONFIG_COMMANDS = $CONFIG_COMMANDS $ $0 $@ on `(hostname || uname -n) 2>/dev/null | sed 1q` " _ACEOF case $ac_config_files in *" "*) set x $ac_config_files; shift; ac_config_files=$*;; esac case $ac_config_headers in *" "*) set x $ac_config_headers; shift; ac_config_headers=$*;; esac cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # Files that config.status was made for. config_files="$ac_config_files" config_headers="$ac_config_headers" config_commands="$ac_config_commands" _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 ac_cs_usage="\ \`$as_me' instantiates files and other configuration actions from templates according to the current configuration. Unless the files and actions are specified as TAGs, all are instantiated by default. Usage: $0 [OPTION]... [TAG]... -h, --help print this help, then exit -V, --version print version number and configuration settings, then exit --config print configuration, then exit -q, --quiet, --silent do not print progress messages -d, --debug don't remove temporary files --recheck update $as_me by reconfiguring in the same conditions --file=FILE[:TEMPLATE] instantiate the configuration file FILE --header=FILE[:TEMPLATE] instantiate the configuration header FILE Configuration files: $config_files Configuration headers: $config_headers Configuration commands: $config_commands Report bugs to ." _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_cs_config="`$as_echo "$ac_configure_args" | sed 's/^ //; s/[\\""\`\$]/\\\\&/g'`" ac_cs_version="\\ Google C++ Testing Framework config.status 1.7.0 configured by $0, generated by GNU Autoconf 2.68, with options \\"\$ac_cs_config\\" Copyright (C) 2010 Free Software Foundation, Inc. This config.status script is free software; the Free Software Foundation gives unlimited permission to copy, distribute and modify it." ac_pwd='$ac_pwd' srcdir='$srcdir' INSTALL='$INSTALL' MKDIR_P='$MKDIR_P' AWK='$AWK' test -n "\$AWK" || AWK=awk _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # The default lists apply if the user does not specify any file. ac_need_defaults=: while test $# != 0 do case $1 in --*=?*) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg=`expr "X$1" : 'X[^=]*=\(.*\)'` ac_shift=: ;; --*=) ac_option=`expr "X$1" : 'X\([^=]*\)='` ac_optarg= ac_shift=: ;; *) ac_option=$1 ac_optarg=$2 ac_shift=shift ;; esac case $ac_option in # Handling of the options. 
-recheck | --recheck | --rechec | --reche | --rech | --rec | --re | --r) ac_cs_recheck=: ;; --version | --versio | --versi | --vers | --ver | --ve | --v | -V ) $as_echo "$ac_cs_version"; exit ;; --config | --confi | --conf | --con | --co | --c ) $as_echo "$ac_cs_config"; exit ;; --debug | --debu | --deb | --de | --d | -d ) debug=: ;; --file | --fil | --fi | --f ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; '') as_fn_error $? "missing file argument" ;; esac as_fn_append CONFIG_FILES " '$ac_optarg'" ac_need_defaults=false;; --header | --heade | --head | --hea ) $ac_shift case $ac_optarg in *\'*) ac_optarg=`$as_echo "$ac_optarg" | sed "s/'/'\\\\\\\\''/g"` ;; esac as_fn_append CONFIG_HEADERS " '$ac_optarg'" ac_need_defaults=false;; --he | --h) # Conflict between --help and --header as_fn_error $? "ambiguous option: \`$1' Try \`$0 --help' for more information.";; --help | --hel | -h ) $as_echo "$ac_cs_usage"; exit ;; -q | -quiet | --quiet | --quie | --qui | --qu | --q \ | -silent | --silent | --silen | --sile | --sil | --si | --s) ac_cs_silent=: ;; # This is an error. -*) as_fn_error $? "unrecognized option: \`$1' Try \`$0 --help' for more information." ;; *) as_fn_append ac_config_targets " $1" ac_need_defaults=false ;; esac shift done ac_configure_extra_args= if $ac_cs_silent; then exec 6>/dev/null ac_configure_extra_args="$ac_configure_extra_args --silent" fi _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 if \$ac_cs_recheck; then set X '$SHELL' '$0' $ac_configure_args \$ac_configure_extra_args --no-create --no-recursion shift \$as_echo "running CONFIG_SHELL=$SHELL \$*" >&6 CONFIG_SHELL='$SHELL' export CONFIG_SHELL exec "\$@" fi _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 exec 5>>config.log { echo sed 'h;s/./-/g;s/^.../## /;s/...$/ ##/;p;x;p;x' <<_ASBOX ## Running $as_me. ## _ASBOX $as_echo "$ac_log" } >&5 _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 # # INIT-COMMANDS # AMDEP_TRUE="$AMDEP_TRUE" ac_aux_dir="$ac_aux_dir" # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. 
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' macro_version='`$ECHO "$macro_version" | $SED "$delay_single_quote_subst"`' macro_revision='`$ECHO "$macro_revision" | $SED "$delay_single_quote_subst"`' enable_shared='`$ECHO "$enable_shared" | $SED "$delay_single_quote_subst"`' enable_static='`$ECHO "$enable_static" | $SED "$delay_single_quote_subst"`' pic_mode='`$ECHO "$pic_mode" | $SED "$delay_single_quote_subst"`' enable_fast_install='`$ECHO "$enable_fast_install" | $SED "$delay_single_quote_subst"`' SHELL='`$ECHO "$SHELL" | $SED "$delay_single_quote_subst"`' ECHO='`$ECHO "$ECHO" | $SED "$delay_single_quote_subst"`' PATH_SEPARATOR='`$ECHO "$PATH_SEPARATOR" | $SED "$delay_single_quote_subst"`' host_alias='`$ECHO "$host_alias" | $SED "$delay_single_quote_subst"`' host='`$ECHO "$host" | $SED "$delay_single_quote_subst"`' host_os='`$ECHO "$host_os" | $SED "$delay_single_quote_subst"`' build_alias='`$ECHO "$build_alias" | $SED "$delay_single_quote_subst"`' build='`$ECHO "$build" | $SED "$delay_single_quote_subst"`' build_os='`$ECHO "$build_os" | $SED "$delay_single_quote_subst"`' SED='`$ECHO "$SED" | $SED "$delay_single_quote_subst"`' Xsed='`$ECHO "$Xsed" | $SED "$delay_single_quote_subst"`' GREP='`$ECHO "$GREP" | $SED "$delay_single_quote_subst"`' EGREP='`$ECHO "$EGREP" | $SED "$delay_single_quote_subst"`' FGREP='`$ECHO "$FGREP" | $SED "$delay_single_quote_subst"`' LD='`$ECHO "$LD" | $SED "$delay_single_quote_subst"`' NM='`$ECHO "$NM" | $SED "$delay_single_quote_subst"`' LN_S='`$ECHO "$LN_S" | $SED "$delay_single_quote_subst"`' max_cmd_len='`$ECHO "$max_cmd_len" | $SED "$delay_single_quote_subst"`' ac_objext='`$ECHO "$ac_objext" | $SED "$delay_single_quote_subst"`' exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`' lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`' lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`' lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`' lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`' lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`' reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`' reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`' OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`' deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`' file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`' file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`' want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`' DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`' sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`' AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`' AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`' archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`' STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`' RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`' old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`' old_postuninstall_cmds='`$ECHO "$old_postuninstall_cmds" | $SED "$delay_single_quote_subst"`' old_archive_cmds='`$ECHO "$old_archive_cmds" | $SED 
"$delay_single_quote_subst"`' lock_old_archive_extraction='`$ECHO "$lock_old_archive_extraction" | $SED "$delay_single_quote_subst"`' CC='`$ECHO "$CC" | $SED "$delay_single_quote_subst"`' CFLAGS='`$ECHO "$CFLAGS" | $SED "$delay_single_quote_subst"`' compiler='`$ECHO "$compiler" | $SED "$delay_single_quote_subst"`' GCC='`$ECHO "$GCC" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`' lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`' nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`' lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`' objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`' MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`' need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`' MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`' DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`' NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`' LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`' OTOOL='`$ECHO "$OTOOL" | $SED "$delay_single_quote_subst"`' OTOOL64='`$ECHO "$OTOOL64" | $SED "$delay_single_quote_subst"`' libext='`$ECHO "$libext" | $SED "$delay_single_quote_subst"`' shrext_cmds='`$ECHO "$shrext_cmds" | $SED "$delay_single_quote_subst"`' extract_expsyms_cmds='`$ECHO "$extract_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc='`$ECHO "$archive_cmds_need_lc" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes='`$ECHO "$enable_shared_with_static_runtimes" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec='`$ECHO "$export_dynamic_flag_spec" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec='`$ECHO "$whole_archive_flag_spec" | $SED "$delay_single_quote_subst"`' compiler_needs_object='`$ECHO "$compiler_needs_object" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds='`$ECHO "$old_archive_from_new_cmds" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds='`$ECHO "$old_archive_from_expsyms_cmds" | $SED "$delay_single_quote_subst"`' archive_cmds='`$ECHO "$archive_cmds" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds='`$ECHO "$archive_expsym_cmds" | $SED "$delay_single_quote_subst"`' module_cmds='`$ECHO "$module_cmds" | $SED "$delay_single_quote_subst"`' module_expsym_cmds='`$ECHO "$module_expsym_cmds" | $SED "$delay_single_quote_subst"`' with_gnu_ld='`$ECHO "$with_gnu_ld" | $SED "$delay_single_quote_subst"`' allow_undefined_flag='`$ECHO "$allow_undefined_flag" | $SED "$delay_single_quote_subst"`' no_undefined_flag='`$ECHO 
"$no_undefined_flag" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec='`$ECHO "$hardcode_libdir_flag_spec" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator='`$ECHO "$hardcode_libdir_separator" | $SED "$delay_single_quote_subst"`' hardcode_direct='`$ECHO "$hardcode_direct" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute='`$ECHO "$hardcode_direct_absolute" | $SED "$delay_single_quote_subst"`' hardcode_minus_L='`$ECHO "$hardcode_minus_L" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_quote_subst"`' hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`' inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`' link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`' always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`' export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`' exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`' include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`' prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`' postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`' file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`' variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`' need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`' need_version='`$ECHO "$need_version" | $SED "$delay_single_quote_subst"`' version_type='`$ECHO "$version_type" | $SED "$delay_single_quote_subst"`' runpath_var='`$ECHO "$runpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_var='`$ECHO "$shlibpath_var" | $SED "$delay_single_quote_subst"`' shlibpath_overrides_runpath='`$ECHO "$shlibpath_overrides_runpath" | $SED "$delay_single_quote_subst"`' libname_spec='`$ECHO "$libname_spec" | $SED "$delay_single_quote_subst"`' library_names_spec='`$ECHO "$library_names_spec" | $SED "$delay_single_quote_subst"`' soname_spec='`$ECHO "$soname_spec" | $SED "$delay_single_quote_subst"`' install_override_mode='`$ECHO "$install_override_mode" | $SED "$delay_single_quote_subst"`' postinstall_cmds='`$ECHO "$postinstall_cmds" | $SED "$delay_single_quote_subst"`' postuninstall_cmds='`$ECHO "$postuninstall_cmds" | $SED "$delay_single_quote_subst"`' finish_cmds='`$ECHO "$finish_cmds" | $SED "$delay_single_quote_subst"`' finish_eval='`$ECHO "$finish_eval" | $SED "$delay_single_quote_subst"`' hardcode_into_libs='`$ECHO "$hardcode_into_libs" | $SED "$delay_single_quote_subst"`' sys_lib_search_path_spec='`$ECHO "$sys_lib_search_path_spec" | $SED "$delay_single_quote_subst"`' sys_lib_dlsearch_path_spec='`$ECHO "$sys_lib_dlsearch_path_spec" | $SED "$delay_single_quote_subst"`' hardcode_action='`$ECHO "$hardcode_action" | $SED "$delay_single_quote_subst"`' enable_dlopen='`$ECHO "$enable_dlopen" | $SED "$delay_single_quote_subst"`' enable_dlopen_self='`$ECHO "$enable_dlopen_self" | $SED "$delay_single_quote_subst"`' enable_dlopen_self_static='`$ECHO "$enable_dlopen_self_static" | $SED "$delay_single_quote_subst"`' old_striplib='`$ECHO "$old_striplib" | $SED "$delay_single_quote_subst"`' striplib='`$ECHO "$striplib" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs='`$ECHO "$compiler_lib_search_dirs" | $SED "$delay_single_quote_subst"`' predep_objects='`$ECHO "$predep_objects" | 
$SED "$delay_single_quote_subst"`' postdep_objects='`$ECHO "$postdep_objects" | $SED "$delay_single_quote_subst"`' predeps='`$ECHO "$predeps" | $SED "$delay_single_quote_subst"`' postdeps='`$ECHO "$postdeps" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path='`$ECHO "$compiler_lib_search_path" | $SED "$delay_single_quote_subst"`' LD_CXX='`$ECHO "$LD_CXX" | $SED "$delay_single_quote_subst"`' reload_flag_CXX='`$ECHO "$reload_flag_CXX" | $SED "$delay_single_quote_subst"`' reload_cmds_CXX='`$ECHO "$reload_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`' GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`' lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`' lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`' enable_shared_with_static_runtimes_CXX='`$ECHO "$enable_shared_with_static_runtimes_CXX" | $SED "$delay_single_quote_subst"`' export_dynamic_flag_spec_CXX='`$ECHO "$export_dynamic_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' whole_archive_flag_spec_CXX='`$ECHO "$whole_archive_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' compiler_needs_object_CXX='`$ECHO "$compiler_needs_object_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_new_cmds_CXX='`$ECHO "$old_archive_from_new_cmds_CXX" | $SED "$delay_single_quote_subst"`' old_archive_from_expsyms_cmds_CXX='`$ECHO "$old_archive_from_expsyms_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_cmds_CXX='`$ECHO "$archive_cmds_CXX" | $SED "$delay_single_quote_subst"`' archive_expsym_cmds_CXX='`$ECHO "$archive_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_cmds_CXX='`$ECHO "$module_cmds_CXX" | $SED "$delay_single_quote_subst"`' module_expsym_cmds_CXX='`$ECHO "$module_expsym_cmds_CXX" | $SED "$delay_single_quote_subst"`' with_gnu_ld_CXX='`$ECHO "$with_gnu_ld_CXX" | $SED "$delay_single_quote_subst"`' allow_undefined_flag_CXX='`$ECHO "$allow_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' no_undefined_flag_CXX='`$ECHO "$no_undefined_flag_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_flag_spec_CXX='`$ECHO "$hardcode_libdir_flag_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_libdir_separator_CXX='`$ECHO "$hardcode_libdir_separator_CXX" | $SED "$delay_single_quote_subst"`' hardcode_direct_CXX='`$ECHO "$hardcode_direct_CXX" | $SED "$delay_single_quote_subst"`' hardcode_direct_absolute_CXX='`$ECHO "$hardcode_direct_absolute_CXX" | $SED "$delay_single_quote_subst"`' hardcode_minus_L_CXX='`$ECHO "$hardcode_minus_L_CXX" | $SED "$delay_single_quote_subst"`' hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_single_quote_subst"`' hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`' inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`' link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED 
"$delay_single_quote_subst"`' always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`' export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`' exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`' include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`' prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`' postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`' file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`' hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`' predep_objects_CXX='`$ECHO "$predep_objects_CXX" | $SED "$delay_single_quote_subst"`' postdep_objects_CXX='`$ECHO "$postdep_objects_CXX" | $SED "$delay_single_quote_subst"`' predeps_CXX='`$ECHO "$predeps_CXX" | $SED "$delay_single_quote_subst"`' postdeps_CXX='`$ECHO "$postdeps_CXX" | $SED "$delay_single_quote_subst"`' compiler_lib_search_path_CXX='`$ECHO "$compiler_lib_search_path_CXX" | $SED "$delay_single_quote_subst"`' LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$1 _LTECHO_EOF' } # Quote evaled strings. for var in SHELL \ ECHO \ PATH_SEPARATOR \ SED \ GREP \ EGREP \ FGREP \ LD \ NM \ LN_S \ lt_SP2NL \ lt_NL2SP \ reload_flag \ OBJDUMP \ deplibs_check_method \ file_magic_cmd \ file_magic_glob \ want_nocaseglob \ DLLTOOL \ sharedlib_from_linklib_cmd \ AR \ AR_FLAGS \ archiver_list_spec \ STRIP \ RANLIB \ CC \ CFLAGS \ compiler \ lt_cv_sys_global_symbol_pipe \ lt_cv_sys_global_symbol_to_cdecl \ lt_cv_sys_global_symbol_to_c_name_address \ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \ nm_file_list_spec \ lt_prog_compiler_no_builtin_flag \ lt_prog_compiler_pic \ lt_prog_compiler_wl \ lt_prog_compiler_static \ lt_cv_prog_compiler_c_o \ need_locks \ MANIFEST_TOOL \ DSYMUTIL \ NMEDIT \ LIPO \ OTOOL \ OTOOL64 \ shrext_cmds \ export_dynamic_flag_spec \ whole_archive_flag_spec \ compiler_needs_object \ with_gnu_ld \ allow_undefined_flag \ no_undefined_flag \ hardcode_libdir_flag_spec \ hardcode_libdir_separator \ exclude_expsyms \ include_expsyms \ file_list_spec \ variables_saved_for_relink \ libname_spec \ library_names_spec \ soname_spec \ install_override_mode \ finish_eval \ old_striplib \ striplib \ compiler_lib_search_dirs \ predep_objects \ postdep_objects \ predeps \ postdeps \ compiler_lib_search_path \ LD_CXX \ reload_flag_CXX \ compiler_CXX \ lt_prog_compiler_no_builtin_flag_CXX \ lt_prog_compiler_pic_CXX \ lt_prog_compiler_wl_CXX \ lt_prog_compiler_static_CXX \ lt_cv_prog_compiler_c_o_CXX \ export_dynamic_flag_spec_CXX \ whole_archive_flag_spec_CXX \ compiler_needs_object_CXX \ with_gnu_ld_CXX \ allow_undefined_flag_CXX \ no_undefined_flag_CXX \ hardcode_libdir_flag_spec_CXX \ hardcode_libdir_separator_CXX \ exclude_expsyms_CXX \ include_expsyms_CXX \ file_list_spec_CXX \ compiler_lib_search_dirs_CXX \ predep_objects_CXX \ postdep_objects_CXX \ predeps_CXX \ postdeps_CXX \ compiler_lib_search_path_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ;; *) eval 
"lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in reload_cmds \ old_postinstall_cmds \ old_postuninstall_cmds \ old_archive_cmds \ extract_expsyms_cmds \ old_archive_from_new_cmds \ old_archive_from_expsyms_cmds \ archive_cmds \ archive_expsym_cmds \ module_cmds \ module_expsym_cmds \ export_symbols_cmds \ prelink_cmds \ postlink_cmds \ postinstall_cmds \ postuninstall_cmds \ finish_cmds \ sys_lib_search_path_spec \ sys_lib_dlsearch_path_spec \ reload_cmds_CXX \ old_archive_cmds_CXX \ old_archive_from_new_cmds_CXX \ old_archive_from_expsyms_cmds_CXX \ archive_cmds_CXX \ archive_expsym_cmds_CXX \ module_cmds_CXX \ module_expsym_cmds_CXX \ export_symbols_cmds_CXX \ prelink_cmds_CXX \ postlink_cmds_CXX; do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[\\\\\\\`\\"\\\$]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done ac_aux_dir='$ac_aux_dir' xsi_shell='$xsi_shell' lt_shell_append='$lt_shell_append' # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi PACKAGE='$PACKAGE' VERSION='$VERSION' TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile' _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # Handling of arguments. for ac_config_target in $ac_config_targets do case $ac_config_target in "build-aux/config.h") CONFIG_HEADERS="$CONFIG_HEADERS build-aux/config.h" ;; "Makefile") CONFIG_FILES="$CONFIG_FILES Makefile" ;; "scripts/gtest-config") CONFIG_FILES="$CONFIG_FILES scripts/gtest-config" ;; "depfiles") CONFIG_COMMANDS="$CONFIG_COMMANDS depfiles" ;; "libtool") CONFIG_COMMANDS="$CONFIG_COMMANDS libtool" ;; *) as_fn_error $? "invalid argument: \`$ac_config_target'" "$LINENO" 5;; esac done # If the user did not use the arguments to specify the items to instantiate, # then the envvar interface is used. Set only those that are not. # We use the long form for the default assignment because of an extremely # bizarre bug on SunOS 4.1.3. if $ac_need_defaults; then test "${CONFIG_FILES+set}" = set || CONFIG_FILES=$config_files test "${CONFIG_HEADERS+set}" = set || CONFIG_HEADERS=$config_headers test "${CONFIG_COMMANDS+set}" = set || CONFIG_COMMANDS=$config_commands fi # Have a temporary directory for convenience. Make it in the build tree # simply because there is no reason against having it here, and in addition, # creating and moving files from /tmp can sometimes cause problems. # Hook for its removal unless debugging. # Note that there is a small window in which the directory will not be cleaned: # after its creation but before its name has been assigned to `$tmp'. $debug || { tmp= ac_tmp= trap 'exit_status=$? : "${ac_tmp:=$tmp}" { test ! -d "$ac_tmp" || rm -fr "$ac_tmp"; } && exit $exit_status ' 0 trap 'as_fn_exit 1' 1 2 13 15 } # Create a (secure) tmp directory for tmp files. { tmp=`(umask 077 && mktemp -d "./confXXXXXX") 2>/dev/null` && test -d "$tmp" } || { tmp=./conf$$-$RANDOM (umask 077 && mkdir "$tmp") } || as_fn_error $? "cannot create a temporary directory in ." "$LINENO" 5 ac_tmp=$tmp # Set up the scripts for CONFIG_FILES section. # No need to generate them if there are no CONFIG_FILES. # This happens for instance with `./config.status config.h'. 
if test -n "$CONFIG_FILES"; then ac_cr=`echo X | tr X '\015'` # On cygwin, bash can eat \r inside `` if the user requested igncr. # But we know of no other shell where ac_cr would be empty at this # point, so we can use a bashism as a fallback. if test "x$ac_cr" = x; then eval ac_cr=\$\'\\r\' fi ac_cs_awk_cr=`$AWK 'BEGIN { print "a\rb" }' /dev/null` if test "$ac_cs_awk_cr" = "a${ac_cr}b"; then ac_cs_awk_cr='\\r' else ac_cs_awk_cr=$ac_cr fi echo 'BEGIN {' >"$ac_tmp/subs1.awk" && _ACEOF { echo "cat >conf$$subs.awk <<_ACEOF" && echo "$ac_subst_vars" | sed 's/.*/&!$&$ac_delim/' && echo "_ACEOF" } >conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_num=`echo "$ac_subst_vars" | grep -c '^'` ac_delim='%!_!# ' for ac_last_try in false false false false false :; do . ./conf$$subs.sh || as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 ac_delim_n=`sed -n "s/.*$ac_delim\$/X/p" conf$$subs.awk | grep -c X` if test $ac_delim_n = $ac_delim_num; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_STATUS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done rm -f conf$$subs.sh cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 cat >>"\$ac_tmp/subs1.awk" <<\\_ACAWK && _ACEOF sed -n ' h s/^/S["/; s/!.*/"]=/ p g s/^[^!]*!// :repl t repl s/'"$ac_delim"'$// t delim :nl h s/\(.\{148\}\)..*/\1/ t more1 s/["\\]/\\&/g; s/^/"/; s/$/\\n"\\/ p n b repl :more1 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t nl :delim h s/\(.\{148\}\)..*/\1/ t more2 s/["\\]/\\&/g; s/^/"/; s/$/"/ p b :more2 s/["\\]/\\&/g; s/^/"/; s/$/"\\/ p g s/.\{148\}// t delim ' >$CONFIG_STATUS || ac_write_fail=1 rm -f conf$$subs.awk cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 _ACAWK cat >>"\$ac_tmp/subs1.awk" <<_ACAWK && for (key in S) S_is_set[key] = 1 FS = "" } { line = $ 0 nfields = split(line, field, "@") substed = 0 len = length(field[1]) for (i = 2; i < nfields; i++) { key = field[i] keylen = length(key) if (S_is_set[key]) { value = S[key] line = substr(line, 1, len) "" value "" substr(line, len + keylen + 3) len += length(value) + length(field[++i]) substed = 1 } else len += 1 + keylen } print line } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 if sed "s/$ac_cr//" < /dev/null > /dev/null 2>&1; then sed "s/$ac_cr\$//; s/$ac_cr/$ac_cs_awk_cr/g" else cat fi < "$ac_tmp/subs1.awk" > "$ac_tmp/subs.awk" \ || as_fn_error $? "could not setup config files machinery" "$LINENO" 5 _ACEOF # VPATH may cause trouble with some makes, so we remove sole $(srcdir), # ${srcdir} and @srcdir@ entries from VPATH if srcdir is ".", strip leading and # trailing colons and then remove the whole line if VPATH becomes empty # (actually we leave an empty line to preserve line numbers). if test "x$srcdir" = x.; then ac_vpsub='/^[ ]*VPATH[ ]*=[ ]*/{ h s/// s/^/:/ s/[ ]*$/:/ s/:\$(srcdir):/:/g s/:\${srcdir}:/:/g s/:@srcdir@:/:/g s/^:*// s/:*$// x s/\(=[ ]*\).*/\1/ G s/\n// s/^[^=]*=[ ]*$// }' fi cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 fi # test -n "$CONFIG_FILES" # Set up the scripts for CONFIG_HEADERS section. # No need to generate them if there are no CONFIG_HEADERS. # This happens for instance with `./config.status Makefile'. if test -n "$CONFIG_HEADERS"; then cat >"$ac_tmp/defines.awk" <<\_ACAWK || BEGIN { _ACEOF # Transform confdefs.h into an awk script `defines.awk', embedded as # here-document in config.status, that substitutes the proper values into # config.h.in to produce config.h. 
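# The awk program assembled above for CONFIG_FILES boils down to token
# substitution: every @name@ in a template line is replaced by the value
# collected for that name during configuration, with much of the code
# devoted to long lines, embedded newlines and quoting.  A toy version of
# just the core idea, reading a template on stdin; the helper name
# substitute_at_vars is made up for the sketch, and values containing
# "&" or "\" would need the escaping that the real machinery performs.
substitute_at_vars () {
  awk -v srcdir="$1" -v prefix="$2" '
    {
      gsub(/@srcdir@/, srcdir)   # replace each known token ...
      gsub(/@prefix@/, prefix)   # ... with its configured value
      print                      # and emit the instantiated line
    }'
}
# Example:
#   printf "VPATH = @srcdir@\nprefix = @prefix@\n" | substitute_at_vars .. /usr/local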
# Create a delimiter string that does not exist in confdefs.h, to ease # handling of long lines. ac_delim='%!_!# ' for ac_last_try in false false :; do ac_tt=`sed -n "/$ac_delim/p" confdefs.h` if test -z "$ac_tt"; then break elif $ac_last_try; then as_fn_error $? "could not make $CONFIG_HEADERS" "$LINENO" 5 else ac_delim="$ac_delim!$ac_delim _$ac_delim!! " fi done # For the awk script, D is an array of macro values keyed by name, # likewise P contains macro parameters if any. Preserve backslash # newline sequences. ac_word_re=[_$as_cr_Letters][_$as_cr_alnum]* sed -n ' s/.\{148\}/&'"$ac_delim"'/g t rset :rset s/^[ ]*#[ ]*define[ ][ ]*/ / t def d :def s/\\$// t bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3"/p s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2"/p d :bsnl s/["\\]/\\&/g s/^ \('"$ac_word_re"'\)\(([^()]*)\)[ ]*\(.*\)/P["\1"]="\2"\ D["\1"]=" \3\\\\\\n"\\/p t cont s/^ \('"$ac_word_re"'\)[ ]*\(.*\)/D["\1"]=" \2\\\\\\n"\\/p t cont d :cont n s/.\{148\}/&'"$ac_delim"'/g t clear :clear s/\\$// t bsnlc s/["\\]/\\&/g; s/^/"/; s/$/"/p d :bsnlc s/["\\]/\\&/g; s/^/"/; s/$/\\\\\\n"\\/p b cont ' >$CONFIG_STATUS || ac_write_fail=1 cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 for (key in D) D_is_set[key] = 1 FS = "" } /^[\t ]*#[\t ]*(define|undef)[\t ]+$ac_word_re([\t (]|\$)/ { line = \$ 0 split(line, arg, " ") if (arg[1] == "#") { defundef = arg[2] mac1 = arg[3] } else { defundef = substr(arg[1], 2) mac1 = arg[2] } split(mac1, mac2, "(") #) macro = mac2[1] prefix = substr(line, 1, index(line, defundef) - 1) if (D_is_set[macro]) { # Preserve the white space surrounding the "#". print prefix "define", macro P[macro] D[macro] next } else { # Replace #undef with comments. This is necessary, for example, # in the case of _POSIX_SOURCE, which is predefined and required # on some systems where configure will not decide to define it. if (defundef == "undef") { print "/*", prefix defundef, macro, "*/" next } } } { print } _ACAWK _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 as_fn_error $? "could not setup config headers machinery" "$LINENO" 5 fi # test -n "$CONFIG_HEADERS" eval set X " :F $CONFIG_FILES :H $CONFIG_HEADERS :C $CONFIG_COMMANDS" shift for ac_tag do case $ac_tag in :[FHLC]) ac_mode=$ac_tag; continue;; esac case $ac_mode$ac_tag in :[FHL]*:*);; :L* | :C*:*) as_fn_error $? "invalid tag \`$ac_tag'" "$LINENO" 5;; :[FH]-) ac_tag=-:-;; :[FH]*) ac_tag=$ac_tag:$ac_tag.in;; esac ac_save_IFS=$IFS IFS=: set x $ac_tag IFS=$ac_save_IFS shift ac_file=$1 shift case $ac_mode in :L) ac_source=$1;; :[FH]) ac_file_inputs= for ac_f do case $ac_f in -) ac_f="$ac_tmp/stdin";; *) # Look for the file first in the build tree, then in the source tree # (if the path is not absolute). The absolute path cannot be DOS-style, # because $ac_f cannot contain `:'. test -f "$ac_f" || case $ac_f in [\\/$]*) false;; *) test -f "$srcdir/$ac_f" && ac_f="$srcdir/$ac_f";; esac || as_fn_error 1 "cannot find input file: \`$ac_f'" "$LINENO" 5;; esac case $ac_f in *\'*) ac_f=`$as_echo "$ac_f" | sed "s/'/'\\\\\\\\''/g"`;; esac as_fn_append ac_file_inputs " '$ac_f'" done # Let's still pretend it is `configure' which instantiates (i.e., don't # use $as_me), people would be surprised to read: # /* config.h. Generated by config.status. */ configure_input='Generated from '` $as_echo "$*" | sed 's|^[^:]*/||;s|:[^:]*/|, |g' `' by configure.' if test x"$ac_file" != x-; then configure_input="$ac_file. 
$configure_input" { $as_echo "$as_me:${as_lineno-$LINENO}: creating $ac_file" >&5 $as_echo "$as_me: creating $ac_file" >&6;} fi # Neutralize special characters interpreted by sed in replacement strings. case $configure_input in #( *\&* | *\|* | *\\* ) ac_sed_conf_input=`$as_echo "$configure_input" | sed 's/[\\\\&|]/\\\\&/g'`;; #( *) ac_sed_conf_input=$configure_input;; esac case $ac_tag in *:-:* | *:-) cat >"$ac_tmp/stdin" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; esac ;; esac ac_dir=`$as_dirname -- "$ac_file" || $as_expr X"$ac_file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$ac_file" : 'X\(//\)[^/]' \| \ X"$ac_file" : 'X\(//\)$' \| \ X"$ac_file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$ac_file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir="$ac_dir"; as_fn_mkdir_p ac_builddir=. case "$ac_dir" in .) ac_dir_suffix= ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_dir_suffix=/`$as_echo "$ac_dir" | sed 's|^\.[\\/]||'` # A ".." for each directory in $ac_dir_suffix. ac_top_builddir_sub=`$as_echo "$ac_dir_suffix" | sed 's|/[^\\/]*|/..|g;s|/||'` case $ac_top_builddir_sub in "") ac_top_builddir_sub=. ac_top_build_prefix= ;; *) ac_top_build_prefix=$ac_top_builddir_sub/ ;; esac ;; esac ac_abs_top_builddir=$ac_pwd ac_abs_builddir=$ac_pwd$ac_dir_suffix # for backward compatibility: ac_top_builddir=$ac_top_build_prefix case $srcdir in .) # We are building in place. ac_srcdir=. ac_top_srcdir=$ac_top_builddir_sub ac_abs_top_srcdir=$ac_pwd ;; [\\/]* | ?:[\\/]* ) # Absolute name. ac_srcdir=$srcdir$ac_dir_suffix; ac_top_srcdir=$srcdir ac_abs_top_srcdir=$srcdir ;; *) # Relative name. ac_srcdir=$ac_top_build_prefix$srcdir$ac_dir_suffix ac_top_srcdir=$ac_top_build_prefix$srcdir ac_abs_top_srcdir=$ac_pwd/$srcdir ;; esac ac_abs_srcdir=$ac_abs_top_srcdir$ac_dir_suffix case $ac_mode in :F) # # CONFIG_FILE # case $INSTALL in [\\/$]* | ?:[\\/]* ) ac_INSTALL=$INSTALL ;; *) ac_INSTALL=$ac_top_build_prefix$INSTALL ;; esac ac_MKDIR_P=$MKDIR_P case $MKDIR_P in [\\/$]* | ?:[\\/]* ) ;; */*) ac_MKDIR_P=$ac_top_build_prefix$MKDIR_P ;; esac _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 # If the template does not know about datarootdir, expand it. # FIXME: This hack should be removed a few years after 2.60. ac_datarootdir_hack=; ac_datarootdir_seen= ac_sed_dataroot=' /datarootdir/ { p q } /@datadir@/p /@docdir@/p /@infodir@/p /@localedir@/p /@mandir@/p' case `eval "sed -n \"\$ac_sed_dataroot\" $ac_file_inputs"` in *datarootdir*) ac_datarootdir_seen=yes;; *@datadir@*|*@docdir@*|*@infodir@*|*@localedir@*|*@mandir@*) { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&5 $as_echo "$as_me: WARNING: $ac_file_inputs seems to ignore the --datarootdir setting" >&2;} _ACEOF cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_datarootdir_hack=' s&@datadir@&$datadir&g s&@docdir@&$docdir&g s&@infodir@&$infodir&g s&@localedir@&$localedir&g s&@mandir@&$mandir&g s&\\\${datarootdir}&$datarootdir&g' ;; esac _ACEOF # Neutralize VPATH when `$srcdir' = `.'. # Shell code in configure.ac might set extrasub. # FIXME: do we really want to maintain this feature? 
cat >>$CONFIG_STATUS <<_ACEOF || ac_write_fail=1 ac_sed_extra="$ac_vpsub $extrasub _ACEOF cat >>$CONFIG_STATUS <<\_ACEOF || ac_write_fail=1 :t /@[a-zA-Z_][a-zA-Z_0-9]*@/!b s|@configure_input@|$ac_sed_conf_input|;t t s&@top_builddir@&$ac_top_builddir_sub&;t t s&@top_build_prefix@&$ac_top_build_prefix&;t t s&@srcdir@&$ac_srcdir&;t t s&@abs_srcdir@&$ac_abs_srcdir&;t t s&@top_srcdir@&$ac_top_srcdir&;t t s&@abs_top_srcdir@&$ac_abs_top_srcdir&;t t s&@builddir@&$ac_builddir&;t t s&@abs_builddir@&$ac_abs_builddir&;t t s&@abs_top_builddir@&$ac_abs_top_builddir&;t t s&@INSTALL@&$ac_INSTALL&;t t s&@MKDIR_P@&$ac_MKDIR_P&;t t $ac_datarootdir_hack " eval sed \"\$ac_sed_extra\" "$ac_file_inputs" | $AWK -f "$ac_tmp/subs.awk" \ >$ac_tmp/out || as_fn_error $? "could not create $ac_file" "$LINENO" 5 test -z "$ac_datarootdir_hack$ac_datarootdir_seen" && { ac_out=`sed -n '/\${datarootdir}/p' "$ac_tmp/out"`; test -n "$ac_out"; } && { ac_out=`sed -n '/^[ ]*datarootdir[ ]*:*=/p' \ "$ac_tmp/out"`; test -z "$ac_out"; } && { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&5 $as_echo "$as_me: WARNING: $ac_file contains a reference to the variable \`datarootdir' which seems to be undefined. Please make sure it is defined" >&2;} rm -f "$ac_tmp/stdin" case $ac_file in -) cat "$ac_tmp/out" && rm -f "$ac_tmp/out";; *) rm -f "$ac_file" && mv "$ac_tmp/out" "$ac_file";; esac \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 ;; :H) # # CONFIG_HEADER # if test x"$ac_file" != x-; then { $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" } >"$ac_tmp/config.h" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 if diff "$ac_file" "$ac_tmp/config.h" >/dev/null 2>&1; then { $as_echo "$as_me:${as_lineno-$LINENO}: $ac_file is unchanged" >&5 $as_echo "$as_me: $ac_file is unchanged" >&6;} else rm -f "$ac_file" mv "$ac_tmp/config.h" "$ac_file" \ || as_fn_error $? "could not create $ac_file" "$LINENO" 5 fi else $as_echo "/* $configure_input */" \ && eval '$AWK -f "$ac_tmp/defines.awk"' "$ac_file_inputs" \ || as_fn_error $? "could not create -" "$LINENO" 5 fi # Compute "$ac_file"'s index in $config_headers. _am_arg="$ac_file" _am_stamp_count=1 for _am_header in $config_headers :; do case $_am_header in $_am_arg | $_am_arg:* ) break ;; * ) _am_stamp_count=`expr $_am_stamp_count + 1` ;; esac done echo "timestamp for $_am_arg" >`$as_dirname -- "$_am_arg" || $as_expr X"$_am_arg" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$_am_arg" : 'X\(//\)[^/]' \| \ X"$_am_arg" : 'X\(//\)$' \| \ X"$_am_arg" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$_am_arg" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'`/stamp-h$_am_stamp_count ;; :C) { $as_echo "$as_me:${as_lineno-$LINENO}: executing $ac_file commands" >&5 $as_echo "$as_me: executing $ac_file commands" >&6;} ;; esac case $ac_file$ac_mode in "scripts/gtest-config":F) chmod +x scripts/gtest-config ;; "depfiles":C) test x"$AMDEP_TRUE" != x"" || { # Autoconf 2.62 quotes --file arguments for eval, but not when files # are listed without --file. Let's play safe and only enable the eval # if we detect the quoting. case $CONFIG_FILES in *\'*) eval set x "$CONFIG_FILES" ;; *) set x $CONFIG_FILES ;; esac shift for mf do # Strip MF so we end up with the name of the file. 
mf=`echo "$mf" | sed -e 's/:.*$//'` # Check whether this is an Automake generated Makefile or not. # We used to match only the files named `Makefile.in', but # some people rename them; so instead we look at the file content. # Grep'ing the first line is not enough: some people post-process # each Makefile.in and add a new line on top of each file to say so. # Grep'ing the whole file is not good either: AIX grep has a line # limit of 2048, but all sed's we know have understand at least 4000. if sed -n 's,^#.*generated by automake.*,X,p' "$mf" | grep X >/dev/null 2>&1; then dirpart=`$as_dirname -- "$mf" || $as_expr X"$mf" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$mf" : 'X\(//\)[^/]' \| \ X"$mf" : 'X\(//\)$' \| \ X"$mf" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$mf" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` else continue fi # Extract the definition of DEPDIR, am__include, and am__quote # from the Makefile without running `make'. DEPDIR=`sed -n 's/^DEPDIR = //p' < "$mf"` test -z "$DEPDIR" && continue am__include=`sed -n 's/^am__include = //p' < "$mf"` test -z "am__include" && continue am__quote=`sed -n 's/^am__quote = //p' < "$mf"` # When using ansi2knr, U may be empty or an underscore; expand it U=`sed -n 's/^U = //p' < "$mf"` # Find all dependency output files, they are included files with # $(DEPDIR) in their names. We invoke sed twice because it is the # simplest approach to changing $(DEPDIR) to its actual value in the # expansion. for file in `sed -n " s/^$am__include $am__quote\(.*(DEPDIR).*\)$am__quote"'$/\1/p' <"$mf" | \ sed -e 's/\$(DEPDIR)/'"$DEPDIR"'/g' -e 's/\$U/'"$U"'/g'`; do # Make sure the directory exists. test -f "$dirpart/$file" && continue fdir=`$as_dirname -- "$file" || $as_expr X"$file" : 'X\(.*[^/]\)//*[^/][^/]*/*$' \| \ X"$file" : 'X\(//\)[^/]' \| \ X"$file" : 'X\(//\)$' \| \ X"$file" : 'X\(/\)' \| . 2>/dev/null || $as_echo X"$file" | sed '/^X\(.*[^/]\)\/\/*[^/][^/]*\/*$/{ s//\1/ q } /^X\(\/\/\)[^/].*/{ s//\1/ q } /^X\(\/\/\)$/{ s//\1/ q } /^X\(\/\).*/{ s//\1/ q } s/.*/./; q'` as_dir=$dirpart/$fdir; as_fn_mkdir_p # echo "creating $dirpart/$file" echo '# dummy' > "$dirpart/$file" done done } ;; "libtool":C) # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi cfgfile="${ofile}T" trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. # Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is part of GNU Libtool. # # GNU Libtool is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. 
# # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. # # GNU Libtool is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, or # obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. # The names of the tagged configurations supported by this script. available_tags="CXX " # ### BEGIN LIBTOOL CONFIG # Which release of libtool.m4 was used? macro_version=$macro_version macro_revision=$macro_revision # Whether or not to build shared libraries. build_libtool_libs=$enable_shared # Whether or not to build static libraries. build_old_libs=$enable_static # What type of objects to build. pic_mode=$pic_mode # Whether or not to optimize for fast installation. fast_install=$enable_fast_install # Shell to use when invoking shell scripts. SHELL=$lt_SHELL # An echo program that protects backslashes. ECHO=$lt_ECHO # The PATH separator for the build system. PATH_SEPARATOR=$lt_PATH_SEPARATOR # The host system. host_alias=$host_alias host=$host host_os=$host_os # The build system. build_alias=$build_alias build=$build build_os=$build_os # A sed program that does not truncate output. SED=$lt_SED # Sed that helps us avoid accidentally triggering echo(1) options like -n. Xsed="\$SED -e 1s/^X//" # A grep program that handles long lines. GREP=$lt_GREP # An ERE matcher. EGREP=$lt_EGREP # A literal string matcher. FGREP=$lt_FGREP # A BSD- or MS-compatible name lister. NM=$lt_NM # Whether we need soft or hard links. LN_S=$lt_LN_S # What is the maximum length of a command? max_cmd_len=$max_cmd_len # Object file suffix (normally "o"). objext=$ac_objext # Executable file suffix (normally ""). exeext=$exeext # whether the shell understands "unset". lt_unset=$lt_unset # turn spaces into newlines. SP2NL=$lt_lt_SP2NL # turn newlines into spaces. NL2SP=$lt_lt_NL2SP # convert \$build file names to \$host format. to_host_file_cmd=$lt_cv_to_host_file_cmd # convert \$build files to toolchain format. to_tool_file_cmd=$lt_cv_to_tool_file_cmd # An object symbol dumper. OBJDUMP=$lt_OBJDUMP # Method to check whether dependent libraries are shared objects. deplibs_check_method=$lt_deplibs_check_method # Command to use when deplibs_check_method = "file_magic". file_magic_cmd=$lt_file_magic_cmd # How to find potential files when deplibs_check_method = "file_magic". file_magic_glob=$lt_file_magic_glob # Find potential files using nocaseglob when deplibs_check_method = "file_magic". want_nocaseglob=$lt_want_nocaseglob # DLL creation program. DLLTOOL=$lt_DLLTOOL # Command to associate shared and link libraries. sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd # The archiver. AR=$lt_AR # Flags to create an archive. AR_FLAGS=$lt_AR_FLAGS # How to feed a file listing to the archiver. archiver_list_spec=$lt_archiver_list_spec # A symbol stripping program. STRIP=$lt_STRIP # Commands used to install an old-style archive. 
RANLIB=$lt_RANLIB old_postinstall_cmds=$lt_old_postinstall_cmds old_postuninstall_cmds=$lt_old_postuninstall_cmds # Whether to use a lock for old archive extraction. lock_old_archive_extraction=$lock_old_archive_extraction # A C compiler. LTCC=$lt_CC # LTCC compiler flags. LTCFLAGS=$lt_CFLAGS # Take the output of nm and produce a listing of raw symbols and C names. global_symbol_pipe=$lt_lt_cv_sys_global_symbol_pipe # Transform the output of nm in a proper C declaration. global_symbol_to_cdecl=$lt_lt_cv_sys_global_symbol_to_cdecl # Transform the output of nm in a C name address pair. global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address # Transform the output of nm in a C name address pair when lib prefix is needed. global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix # Specify filename containing input files for \$NM. nm_file_list_spec=$lt_nm_file_list_spec # The root where to search for dependent libraries,and in which our libraries should be installed. lt_sysroot=$lt_sysroot # The name of the directory that contains temporary libtool files. objdir=$objdir # Used to examine libraries when file_magic_cmd begins with "file". MAGIC_CMD=$MAGIC_CMD # Must we lock files when doing compilation? need_locks=$lt_need_locks # Manifest tool. MANIFEST_TOOL=$lt_MANIFEST_TOOL # Tool to manipulate archived DWARF debug symbol files on Mac OS X. DSYMUTIL=$lt_DSYMUTIL # Tool to change global to local symbols on Mac OS X. NMEDIT=$lt_NMEDIT # Tool to manipulate fat objects and archives on Mac OS X. LIPO=$lt_LIPO # ldd/readelf like tool for Mach-O binaries on Mac OS X. OTOOL=$lt_OTOOL # ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4. OTOOL64=$lt_OTOOL64 # Old archive suffix (normally "a"). libext=$libext # Shared library suffix (normally ".so"). shrext_cmds=$lt_shrext_cmds # The commands to extract the exported symbol list from a shared archive. extract_expsyms_cmds=$lt_extract_expsyms_cmds # Variables whose values should be saved in libtool wrapper scripts and # restored at link time. variables_saved_for_relink=$lt_variables_saved_for_relink # Do we need the "lib" prefix for modules? need_lib_prefix=$need_lib_prefix # Do we need a version for libraries? need_version=$need_version # Library versioning type. version_type=$version_type # Shared library runtime path variable. runpath_var=$runpath_var # Shared library path variable. shlibpath_var=$shlibpath_var # Is shlibpath searched before the hard-coded library search path? shlibpath_overrides_runpath=$shlibpath_overrides_runpath # Format of library name prefix. libname_spec=$lt_libname_spec # List of archive names. First name is the real one, the rest are links. # The last name is the one that the linker finds with -lNAME library_names_spec=$lt_library_names_spec # The coded name of the library, if different from the real name. soname_spec=$lt_soname_spec # Permission mode override for installation of shared libraries. install_override_mode=$lt_install_override_mode # Command to use after installation of a shared archive. postinstall_cmds=$lt_postinstall_cmds # Command to use after uninstallation of a shared archive. postuninstall_cmds=$lt_postuninstall_cmds # Commands used to finish a libtool library installation in a directory. finish_cmds=$lt_finish_cmds # As "finish_cmds", except a single script fragment to be evaled but # not shown. finish_eval=$lt_finish_eval # Whether we should hardcode library paths into libraries. 
hardcode_into_libs=$hardcode_into_libs # Compile-time system search path for libraries. sys_lib_search_path_spec=$lt_sys_lib_search_path_spec # Run-time system search path for libraries. sys_lib_dlsearch_path_spec=$lt_sys_lib_dlsearch_path_spec # Whether dlopen is supported. dlopen_support=$enable_dlopen # Whether dlopen of programs is supported. dlopen_self=$enable_dlopen_self # Whether dlopen of statically linked programs is supported. dlopen_self_static=$enable_dlopen_self_static # Commands to strip libraries. old_striplib=$lt_old_striplib striplib=$lt_striplib # The linker used to build libraries. LD=$lt_LD # How to create reloadable object files. reload_flag=$lt_reload_flag reload_cmds=$lt_reload_cmds # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds # A language specific compiler. CC=$lt_compiler # Is the compiler the GNU compiler? with_gcc=$GCC # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds # Create a temporary old-style archive to link instead of a shared archive. old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds archive_expsym_cmds=$lt_archive_expsym_cmds # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds module_expsym_cmds=$lt_module_expsym_cmds # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \${shlibpath_var} if the # library is relocated. 
hardcode_direct_absolute=$hardcode_direct_absolute # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms # Symbols that must always be exported. include_expsyms=$lt_include_expsyms # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds # Specify filename containing input files. file_list_spec=$lt_file_list_spec # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action # The directories searched by this compiler when creating a shared library. compiler_lib_search_dirs=$lt_compiler_lib_search_dirs # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects postdep_objects=$lt_postdep_objects predeps=$lt_predeps postdeps=$lt_postdeps # The library search path used internally by the compiler when linking # a shared library. compiler_lib_search_path=$lt_compiler_lib_search_path # ### END LIBTOOL CONFIG _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac ltmain="$ac_aux_dir/ltmain.sh" # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) if test x"$xsi_shell" = xyes; then sed -e '/^func_dirname ()$/,/^} # func_dirname /c\ func_dirname ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ } # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_basename ()$/,/^} # func_basename /c\ func_basename ()\ {\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\ func_dirname_and_basename ()\ {\ \ case ${1} in\ \ */*) func_dirname_result="${1%/*}${2}" ;;\ \ * ) func_dirname_result="${3}" ;;\ \ esac\ \ func_basename_result="${1##*/}"\ } # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_stripname ()$/,/^} # func_stripname /c\ func_stripname ()\ {\ \ # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\ \ # positional parameters, so assign one to ordinary parameter first.\ \ func_stripname_result=${3}\ \ func_stripname_result=${func_stripname_result#"${1}"}\ \ func_stripname_result=${func_stripname_result%"${2}"}\ } # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\ func_split_long_opt ()\ {\ \ func_split_long_opt_name=${1%%=*}\ \ func_split_long_opt_arg=${1#*=}\ } # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\ func_split_short_opt ()\ {\ \ func_split_short_opt_arg=${1#??}\ \ func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\ } # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\ func_lo2o ()\ {\ \ case ${1} in\ \ *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\ \ *) func_lo2o_result=${1} ;;\ \ esac\ } # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_xform ()$/,/^} # func_xform /c\ func_xform ()\ {\ func_xform_result=${1%.*}.lo\ } # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_arith ()$/,/^} # func_arith /c\ func_arith ()\ {\ func_arith_result=$(( $* ))\ } # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_len ()$/,/^} # func_len /c\ func_len ()\ {\ func_len_result=${#1}\ } # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? 
|| _lt_function_replace_fail=: fi if test x"$lt_shell_append" = xyes; then sed -e '/^func_append ()$/,/^} # func_append /c\ func_append ()\ {\ eval "${1}+=\\${2}"\ } # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\ func_append_quoted ()\ {\ \ func_quote_for_eval "${2}"\ \ eval "${1}+=\\\\ \\$func_quote_for_eval_result"\ } # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: # Save a `func_append' function call where possible by direct use of '+=' sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: else # Save a `func_append' function call even when '+=' is not available sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$_lt_function_replace_fail" = x":"; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5 $as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;} fi mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" cat <<_LT_EOF >> "$ofile" # ### BEGIN LIBTOOL TAG CONFIG: CXX # The linker used to build libraries. LD=$lt_LD_CXX # How to create reloadable object files. reload_flag=$lt_reload_flag_CXX reload_cmds=$lt_reload_cmds_CXX # Commands used to build an old-style archive. old_archive_cmds=$lt_old_archive_cmds_CXX # A language specific compiler. CC=$lt_compiler_CXX # Is the compiler the GNU compiler? with_gcc=$GCC_CXX # Compiler flag to turn off builtin functions. no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX # Additional compiler flags for building library objects. pic_flag=$lt_lt_prog_compiler_pic_CXX # How to pass a linker flag through the compiler. wl=$lt_lt_prog_compiler_wl_CXX # Compiler flag to prevent dynamic linking. link_static_flag=$lt_lt_prog_compiler_static_CXX # Does compiler simultaneously support -c and -o options? compiler_c_o=$lt_lt_cv_prog_compiler_c_o_CXX # Whether or not to add -lc for building shared libraries. build_libtool_need_lc=$archive_cmds_need_lc_CXX # Whether or not to disallow shared libs when runtime libs are static. allow_libtool_libs_with_static_runtimes=$enable_shared_with_static_runtimes_CXX # Compiler flag to allow reflexive dlopens. export_dynamic_flag_spec=$lt_export_dynamic_flag_spec_CXX # Compiler flag to generate shared objects directly from archives. whole_archive_flag_spec=$lt_whole_archive_flag_spec_CXX # Whether the compiler copes with passing no objects directly. compiler_needs_object=$lt_compiler_needs_object_CXX # Create an old-style archive from a shared archive. old_archive_from_new_cmds=$lt_old_archive_from_new_cmds_CXX # Create a temporary old-style archive to link instead of a shared archive. 
old_archive_from_expsyms_cmds=$lt_old_archive_from_expsyms_cmds_CXX # Commands used to build a shared archive. archive_cmds=$lt_archive_cmds_CXX archive_expsym_cmds=$lt_archive_expsym_cmds_CXX # Commands used to build a loadable module if different from building # a shared archive. module_cmds=$lt_module_cmds_CXX module_expsym_cmds=$lt_module_expsym_cmds_CXX # Whether we are building with GNU ld or not. with_gnu_ld=$lt_with_gnu_ld_CXX # Flag that allows shared libraries with undefined symbols to be built. allow_undefined_flag=$lt_allow_undefined_flag_CXX # Flag that enforces no undefined symbols. no_undefined_flag=$lt_no_undefined_flag_CXX # Flag to hardcode \$libdir into a binary during linking. # This must work even if \$libdir does not exist hardcode_libdir_flag_spec=$lt_hardcode_libdir_flag_spec_CXX # Whether we need a single "-rpath" flag with a separated argument. hardcode_libdir_separator=$lt_hardcode_libdir_separator_CXX # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary. hardcode_direct=$hardcode_direct_CXX # Set to "yes" if using DIR/libNAME\${shared_ext} during linking hardcodes # DIR into the resulting binary and the resulting library dependency is # "absolute",i.e impossible to change by setting \${shlibpath_var} if the # library is relocated. hardcode_direct_absolute=$hardcode_direct_absolute_CXX # Set to "yes" if using the -LDIR flag during linking hardcodes DIR # into the resulting binary. hardcode_minus_L=$hardcode_minus_L_CXX # Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR # into the resulting binary. hardcode_shlibpath_var=$hardcode_shlibpath_var_CXX # Set to "yes" if building a shared library automatically hardcodes DIR # into the library and all subsequent libraries and executables linked # against it. hardcode_automatic=$hardcode_automatic_CXX # Set to yes if linker adds runtime paths of dependent libraries # to runtime path list. inherit_rpath=$inherit_rpath_CXX # Whether libtool must link a program against all its dependency libraries. link_all_deplibs=$link_all_deplibs_CXX # Set to "yes" if exported symbols are required. always_export_symbols=$always_export_symbols_CXX # The commands to list exported symbols. export_symbols_cmds=$lt_export_symbols_cmds_CXX # Symbols that should not be listed in the preloaded symbols. exclude_expsyms=$lt_exclude_expsyms_CXX # Symbols that must always be exported. include_expsyms=$lt_include_expsyms_CXX # Commands necessary for linking programs (against libraries) with templates. prelink_cmds=$lt_prelink_cmds_CXX # Commands necessary for finishing linking programs. postlink_cmds=$lt_postlink_cmds_CXX # Specify filename containing input files. file_list_spec=$lt_file_list_spec_CXX # How to hardcode a shared library path into an executable. hardcode_action=$hardcode_action_CXX # The directories searched by this compiler when creating a shared library. compiler_lib_search_dirs=$lt_compiler_lib_search_dirs_CXX # Dependencies to place before and after the objects being linked to # create a shared library. predep_objects=$lt_predep_objects_CXX postdep_objects=$lt_postdep_objects_CXX predeps=$lt_predeps_CXX postdeps=$lt_postdeps_CXX # The library search path used internally by the compiler when linking # a shared library. compiler_lib_search_path=$lt_compiler_lib_search_path_CXX # ### END LIBTOOL TAG CONFIG: CXX _LT_EOF ;; esac done # for ac_tag as_fn_exit 0 _ACEOF ac_clean_files=$ac_clean_files_save test $ac_write_fail = 0 || as_fn_error $? 
"write failure creating $CONFIG_STATUS" "$LINENO" 5 # configure is writing to config.log, and then calls config.status. # config.status does its own redirection, appending to config.log. # Unfortunately, on DOS this fails, as config.log is still kept open # by configure, so config.status won't be able to write to it; its # output is simply discarded. So we exec the FD to /dev/null, # effectively closing config.log, so it can be properly (re)opened and # appended to by config.status. When coming back to configure, we # need to make the FD available again. if test "$no_create" != yes; then ac_cs_success=: ac_config_status_args= test "$silent" = yes && ac_config_status_args="$ac_config_status_args --quiet" exec 5>/dev/null $SHELL $CONFIG_STATUS $ac_config_status_args || ac_cs_success=false exec 5>>config.log # Use ||, not &&, to avoid exiting from the if with $? = 1, which # would make configure fail if this is the last instruction. $ac_cs_success || as_fn_exit 1 fi if test -n "$ac_unrecognized_opts" && test "$enable_option_checking" != no; then { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: unrecognized options: $ac_unrecognized_opts" >&5 $as_echo "$as_me: WARNING: unrecognized options: $ac_unrecognized_opts" >&2;} fi ffc-2019.1.0.post0/libs/gtest-1.7.0/configure.ac000066400000000000000000000050161346026536000205630ustar00rootroot00000000000000m4_include(m4/acx_pthread.m4) # At this point, the Xcode project assumes the version string will be three # integers separated by periods and surrounded by square brackets (e.g. # "[1.0.1]"). It also asumes that there won't be any closing parenthesis # between "AC_INIT(" and the closing ")" including comments and strings. AC_INIT([Google C++ Testing Framework], [1.7.0], [googletestframework@googlegroups.com], [gtest]) # Provide various options to initialize the Autoconf and configure processes. AC_PREREQ([2.59]) AC_CONFIG_SRCDIR([./LICENSE]) AC_CONFIG_MACRO_DIR([m4]) AC_CONFIG_AUX_DIR([build-aux]) AC_CONFIG_HEADERS([build-aux/config.h]) AC_CONFIG_FILES([Makefile]) AC_CONFIG_FILES([scripts/gtest-config], [chmod +x scripts/gtest-config]) # Initialize Automake with various options. We require at least v1.9, prevent # pedantic complaints about package files, and enable various distribution # targets. AM_INIT_AUTOMAKE([1.9 dist-bzip2 dist-zip foreign subdir-objects]) # Check for programs used in building Google Test. AC_PROG_CC AC_PROG_CXX AC_LANG([C++]) AC_PROG_LIBTOOL # TODO(chandlerc@google.com): Currently we aren't running the Python tests # against the interpreter detected by AM_PATH_PYTHON, and so we condition # HAVE_PYTHON by requiring "python" to be in the PATH, and that interpreter's # version to be >= 2.3. This will allow the scripts to use a "/usr/bin/env" # hashbang. PYTHON= # We *do not* allow the user to specify a python interpreter AC_PATH_PROG([PYTHON],[python],[:]) AS_IF([test "$PYTHON" != ":"], [AM_PYTHON_CHECK_VERSION([$PYTHON],[2.3],[:],[PYTHON=":"])]) AM_CONDITIONAL([HAVE_PYTHON],[test "$PYTHON" != ":"]) # Configure pthreads. 
AC_ARG_WITH([pthreads], [AS_HELP_STRING([--with-pthreads], [use pthreads (default is yes)])], [with_pthreads=$withval], [with_pthreads=check]) have_pthreads=no AS_IF([test "x$with_pthreads" != "xno"], [ACX_PTHREAD( [], [AS_IF([test "x$with_pthreads" != "xcheck"], [AC_MSG_FAILURE( [--with-pthreads was specified, but unable to be used])])]) have_pthreads="$acx_pthread_ok"]) AM_CONDITIONAL([HAVE_PTHREADS],[test "x$have_pthreads" = "xyes"]) AC_SUBST(PTHREAD_CFLAGS) AC_SUBST(PTHREAD_LIBS) # TODO(chandlerc@google.com) Check for the necessary system headers. # TODO(chandlerc@google.com) Check the types, structures, and other compiler # and architecture characteristics. # Output the generated files. No further autoconf macros may be used. AC_OUTPUT ffc-2019.1.0.post0/libs/gtest-1.7.0/fused-src/000077500000000000000000000000001346026536000201665ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/fused-src/gtest/000077500000000000000000000000001346026536000213145ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/fused-src/gtest/gtest-all.cc000066400000000000000000012640701346026536000235310ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // // Google C++ Testing Framework (Google Test) // // Sometimes it's desirable to build Google Test by compiling a single file. // This file serves this purpose. // This line ensures that gtest.h can be compiled on its own, even // when it's fused. #include "gtest/gtest.h" // The following lines pull in the real gtest *.cc files. // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Utilities for testing Google Test itself and code that uses Google Test // (e.g. frameworks built on top of Google Test). #ifndef GTEST_INCLUDE_GTEST_GTEST_SPI_H_ #define GTEST_INCLUDE_GTEST_GTEST_SPI_H_ namespace testing { // This helper class can be used to mock out Google Test failure reporting // so that we can test Google Test or code that builds on Google Test. // // An object of this class appends a TestPartResult object to the // TestPartResultArray object given in the constructor whenever a Google Test // failure is reported. 
It can either intercept only failures that are // generated in the same thread that created this object or it can intercept // all generated failures. The scope of this mock object can be controlled with // the second argument to the two arguments constructor. class GTEST_API_ ScopedFakeTestPartResultReporter : public TestPartResultReporterInterface { public: // The two possible mocking modes of this object. enum InterceptMode { INTERCEPT_ONLY_CURRENT_THREAD, // Intercepts only thread local failures. INTERCEPT_ALL_THREADS // Intercepts all failures. }; // The c'tor sets this object as the test part result reporter used // by Google Test. The 'result' parameter specifies where to report the // results. This reporter will only catch failures generated in the current // thread. DEPRECATED explicit ScopedFakeTestPartResultReporter(TestPartResultArray* result); // Same as above, but you can choose the interception scope of this object. ScopedFakeTestPartResultReporter(InterceptMode intercept_mode, TestPartResultArray* result); // The d'tor restores the previous test part result reporter. virtual ~ScopedFakeTestPartResultReporter(); // Appends the TestPartResult object to the TestPartResultArray // received in the constructor. // // This method is from the TestPartResultReporterInterface // interface. virtual void ReportTestPartResult(const TestPartResult& result); private: void Init(); const InterceptMode intercept_mode_; TestPartResultReporterInterface* old_reporter_; TestPartResultArray* const result_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedFakeTestPartResultReporter); }; namespace internal { // A helper class for implementing EXPECT_FATAL_FAILURE() and // EXPECT_NONFATAL_FAILURE(). Its destructor verifies that the given // TestPartResultArray contains exactly one failure that has the given // type and contains the given substring. If that's not the case, a // non-fatal failure will be generated. class GTEST_API_ SingleFailureChecker { public: // The constructor remembers the arguments. SingleFailureChecker(const TestPartResultArray* results, TestPartResult::Type type, const string& substr); ~SingleFailureChecker(); private: const TestPartResultArray* const results_; const TestPartResult::Type type_; const string substr_; GTEST_DISALLOW_COPY_AND_ASSIGN_(SingleFailureChecker); }; } // namespace internal } // namespace testing // A set of macros for testing Google Test assertions or code that's expected // to generate Google Test fatal failures. It verifies that the given // statement will cause exactly one fatal Google Test failure with 'substr' // being part of the failure message. // // There are two different versions of this macro. EXPECT_FATAL_FAILURE only // affects and considers failures generated in the current thread and // EXPECT_FATAL_FAILURE_ON_ALL_THREADS does the same but for all threads. // // The verification of the assertion is done correctly even when the statement // throws an exception or aborts the current function. // // Known restrictions: // - 'statement' cannot reference local non-static variables or // non-static members of the current object. // - 'statement' cannot return a value. // - You cannot stream a failure message to this macro. // // Note that even though the implementations of the following two // macros are much alike, we cannot refactor them to use a common // helper macro, due to some peculiarity in how the preprocessor // works. The AcceptsMacroThatExpandsToUnprotectedComma test in // gtest_unittest.cc will fail to compile if we do that. 
#define EXPECT_FATAL_FAILURE(statement, substr) \ do { \ class GTestExpectFatalFailureHelper {\ public:\ static void Execute() { statement; }\ };\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kFatalFailure, (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ONLY_CURRENT_THREAD, >est_failures);\ GTestExpectFatalFailureHelper::Execute();\ }\ } while (::testing::internal::AlwaysFalse()) #define EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substr) \ do { \ class GTestExpectFatalFailureHelper {\ public:\ static void Execute() { statement; }\ };\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kFatalFailure, (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ALL_THREADS, >est_failures);\ GTestExpectFatalFailureHelper::Execute();\ }\ } while (::testing::internal::AlwaysFalse()) // A macro for testing Google Test assertions or code that's expected to // generate Google Test non-fatal failures. It asserts that the given // statement will cause exactly one non-fatal Google Test failure with 'substr' // being part of the failure message. // // There are two different versions of this macro. EXPECT_NONFATAL_FAILURE only // affects and considers failures generated in the current thread and // EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS does the same but for all threads. // // 'statement' is allowed to reference local variables and members of // the current object. // // The verification of the assertion is done correctly even when the statement // throws an exception or aborts the current function. // // Known restrictions: // - You cannot stream a failure message to this macro. // // Note that even though the implementations of the following two // macros are much alike, we cannot refactor them to use a common // helper macro, due to some peculiarity in how the preprocessor // works. If we do that, the code won't compile when the user gives // EXPECT_NONFATAL_FAILURE() a statement that contains a macro that // expands to code containing an unprotected comma. The // AcceptsMacroThatExpandsToUnprotectedComma test in gtest_unittest.cc // catches that. // // For the same reason, we have to write // if (::testing::internal::AlwaysTrue()) { statement; } // instead of // GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) // to avoid an MSVC warning on unreachable code. 
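// --- Editorial aside (not part of the original fused gtest source): a minimal
// usage sketch of the EXPECT_FATAL_FAILURE macro defined above, assuming it is
// compiled inside an ordinary test binary that links against gtest; the test
// names are made up.  Wrapped in #if 0 so the fused source itself is left inert.
#if 0
TEST(GTestSpiExample, CatchesFatalFailure) {
  // ASSERT_TRUE(false) raises exactly one fatal failure whose message
  // contains the substring "false"; the macro intercepts and verifies it,
  // so this test itself passes.
  EXPECT_FATAL_FAILURE(ASSERT_TRUE(false), "false");
}
#endif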
#define EXPECT_NONFATAL_FAILURE(statement, substr) \ do {\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kNonFatalFailure, \ (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ONLY_CURRENT_THREAD, >est_failures);\ if (::testing::internal::AlwaysTrue()) { statement; }\ }\ } while (::testing::internal::AlwaysFalse()) #define EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substr) \ do {\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kNonFatalFailure, \ (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter::INTERCEPT_ALL_THREADS, \ >est_failures);\ if (::testing::internal::AlwaysTrue()) { statement; }\ }\ } while (::testing::internal::AlwaysFalse()) #endif // GTEST_INCLUDE_GTEST_GTEST_SPI_H_ #include #include #include #include #include #include #include #include #include #include #include #include // NOLINT #include #include #if GTEST_OS_LINUX // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT # include // NOLINT # include // NOLINT // Declares vsnprintf(). This header is not available on Windows. # include // NOLINT # include // NOLINT # include // NOLINT # include // NOLINT # include #elif GTEST_OS_SYMBIAN # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT #elif GTEST_OS_ZOS # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT // On z/OS we additionally need strings.h for strcasecmp. # include // NOLINT #elif GTEST_OS_WINDOWS_MOBILE // We are on Windows CE. # include // NOLINT #elif GTEST_OS_WINDOWS // We are on Windows proper. # include // NOLINT # include // NOLINT # include // NOLINT # include // NOLINT # if GTEST_OS_WINDOWS_MINGW // MinGW has gettimeofday() but not _ftime64(). // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). // TODO(kenton@google.com): There are other ways to get the time on // Windows, like GetTickCount() or GetSystemTimeAsFileTime(). MinGW // supports these. consider using them instead. # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT # endif // GTEST_OS_WINDOWS_MINGW // cpplint thinks that the header is already included, so we want to // silence it. # include // NOLINT #else // Assume other platforms have gettimeofday(). // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). # define GTEST_HAS_GETTIMEOFDAY_ 1 // cpplint thinks that the header is already included, so we want to // silence it. # include // NOLINT # include // NOLINT #endif // GTEST_OS_LINUX #if GTEST_HAS_EXCEPTIONS # include #endif #if GTEST_CAN_STREAM_RESULTS_ # include // NOLINT # include // NOLINT #endif // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 // Copyright 2005, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Utility functions and classes used by the Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) // // This file contains purely Google Test's internal implementation. Please // DO NOT #INCLUDE IT IN A USER PROGRAM. #ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_ #define GTEST_SRC_GTEST_INTERNAL_INL_H_ // GTEST_IMPLEMENTATION_ is defined to 1 iff the current translation unit is // part of Google Test's implementation; otherwise it's undefined. #if !GTEST_IMPLEMENTATION_ // A user is trying to include this from his code - just say no. # error "gtest-internal-inl.h is part of Google Test's internal implementation." # error "It must not be included except by Google Test itself." #endif // GTEST_IMPLEMENTATION_ #ifndef _WIN32_WCE # include #endif // !_WIN32_WCE #include #include // For strtoll/_strtoul64/malloc/free. #include // For memmove. #include #include #include #if GTEST_CAN_STREAM_RESULTS_ # include // NOLINT # include // NOLINT #endif #if GTEST_OS_WINDOWS # include // NOLINT #endif // GTEST_OS_WINDOWS namespace testing { // Declares the flags. // // We don't want the users to modify this flag in the code, but want // Google Test's own unit tests to be able to access it. Therefore we // declare it here as opposed to in gtest.h. GTEST_DECLARE_bool_(death_test_use_fork); namespace internal { // The value of GetTestTypeId() as seen from within the Google Test // library. This is solely for testing GetTestTypeId(). GTEST_API_ extern const TypeId kTestTypeIdInGoogleTest; // Names of the flags (needed for parsing Google Test flags). 
const char kAlsoRunDisabledTestsFlag[] = "also_run_disabled_tests"; const char kBreakOnFailureFlag[] = "break_on_failure"; const char kCatchExceptionsFlag[] = "catch_exceptions"; const char kColorFlag[] = "color"; const char kFilterFlag[] = "filter"; const char kListTestsFlag[] = "list_tests"; const char kOutputFlag[] = "output"; const char kPrintTimeFlag[] = "print_time"; const char kRandomSeedFlag[] = "random_seed"; const char kRepeatFlag[] = "repeat"; const char kShuffleFlag[] = "shuffle"; const char kStackTraceDepthFlag[] = "stack_trace_depth"; const char kStreamResultToFlag[] = "stream_result_to"; const char kThrowOnFailureFlag[] = "throw_on_failure"; // A valid random seed must be in [1, kMaxRandomSeed]. const int kMaxRandomSeed = 99999; // g_help_flag is true iff the --help flag or an equivalent form is // specified on the command line. GTEST_API_ extern bool g_help_flag; // Returns the current time in milliseconds. GTEST_API_ TimeInMillis GetTimeInMillis(); // Returns true iff Google Test should use colors in the output. GTEST_API_ bool ShouldUseColor(bool stdout_is_tty); // Formats the given time in milliseconds as seconds. GTEST_API_ std::string FormatTimeInMillisAsSeconds(TimeInMillis ms); // Converts the given time in milliseconds to a date string in the ISO 8601 // format, without the timezone information. N.B.: due to the use the // non-reentrant localtime() function, this function is not thread safe. Do // not use it in any code that can be called from multiple threads. GTEST_API_ std::string FormatEpochTimeInMillisAsIso8601(TimeInMillis ms); // Parses a string for an Int32 flag, in the form of "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. GTEST_API_ bool ParseInt32Flag( const char* str, const char* flag, Int32* value); // Returns a random seed in range [1, kMaxRandomSeed] based on the // given --gtest_random_seed flag value. inline int GetRandomSeedFromFlag(Int32 random_seed_flag) { const unsigned int raw_seed = (random_seed_flag == 0) ? static_cast(GetTimeInMillis()) : static_cast(random_seed_flag); // Normalizes the actual seed to range [1, kMaxRandomSeed] such that // it's easy to type. const int normalized_seed = static_cast((raw_seed - 1U) % static_cast(kMaxRandomSeed)) + 1; return normalized_seed; } // Returns the first valid random seed after 'seed'. The behavior is // undefined if 'seed' is invalid. The seed after kMaxRandomSeed is // considered to be 1. inline int GetNextRandomSeed(int seed) { GTEST_CHECK_(1 <= seed && seed <= kMaxRandomSeed) << "Invalid random seed " << seed << " - must be in [1, " << kMaxRandomSeed << "]."; const int next_seed = seed + 1; return (next_seed > kMaxRandomSeed) ? 1 : next_seed; } // This class saves the values of all Google Test flags in its c'tor, and // restores them in its d'tor. class GTestFlagSaver { public: // The c'tor. 
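// --- Editorial aside (not part of the original source): a worked instance of
// the seed normalization defined above, using the kMaxRandomSeed value of
// 99999 declared earlier.  For raw_seed == 100000 the expression
//   (raw_seed - 1U) % kMaxRandomSeed + 1  ==  99999 % 99999 + 1  ==  1,
// so seeds wrap back to 1, while raw_seed == 99999 maps to 99999; every
// result therefore lands in [1, kMaxRandomSeed] as the comment promises.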
GTestFlagSaver() { also_run_disabled_tests_ = GTEST_FLAG(also_run_disabled_tests); break_on_failure_ = GTEST_FLAG(break_on_failure); catch_exceptions_ = GTEST_FLAG(catch_exceptions); color_ = GTEST_FLAG(color); death_test_style_ = GTEST_FLAG(death_test_style); death_test_use_fork_ = GTEST_FLAG(death_test_use_fork); filter_ = GTEST_FLAG(filter); internal_run_death_test_ = GTEST_FLAG(internal_run_death_test); list_tests_ = GTEST_FLAG(list_tests); output_ = GTEST_FLAG(output); print_time_ = GTEST_FLAG(print_time); random_seed_ = GTEST_FLAG(random_seed); repeat_ = GTEST_FLAG(repeat); shuffle_ = GTEST_FLAG(shuffle); stack_trace_depth_ = GTEST_FLAG(stack_trace_depth); stream_result_to_ = GTEST_FLAG(stream_result_to); throw_on_failure_ = GTEST_FLAG(throw_on_failure); } // The d'tor is not virtual. DO NOT INHERIT FROM THIS CLASS. ~GTestFlagSaver() { GTEST_FLAG(also_run_disabled_tests) = also_run_disabled_tests_; GTEST_FLAG(break_on_failure) = break_on_failure_; GTEST_FLAG(catch_exceptions) = catch_exceptions_; GTEST_FLAG(color) = color_; GTEST_FLAG(death_test_style) = death_test_style_; GTEST_FLAG(death_test_use_fork) = death_test_use_fork_; GTEST_FLAG(filter) = filter_; GTEST_FLAG(internal_run_death_test) = internal_run_death_test_; GTEST_FLAG(list_tests) = list_tests_; GTEST_FLAG(output) = output_; GTEST_FLAG(print_time) = print_time_; GTEST_FLAG(random_seed) = random_seed_; GTEST_FLAG(repeat) = repeat_; GTEST_FLAG(shuffle) = shuffle_; GTEST_FLAG(stack_trace_depth) = stack_trace_depth_; GTEST_FLAG(stream_result_to) = stream_result_to_; GTEST_FLAG(throw_on_failure) = throw_on_failure_; } private: // Fields for saving the original values of flags. bool also_run_disabled_tests_; bool break_on_failure_; bool catch_exceptions_; std::string color_; std::string death_test_style_; bool death_test_use_fork_; std::string filter_; std::string internal_run_death_test_; bool list_tests_; std::string output_; bool print_time_; internal::Int32 random_seed_; internal::Int32 repeat_; bool shuffle_; internal::Int32 stack_trace_depth_; std::string stream_result_to_; bool throw_on_failure_; } GTEST_ATTRIBUTE_UNUSED_; // Converts a Unicode code point to a narrow string in UTF-8 encoding. // code_point parameter is of type UInt32 because wchar_t may not be // wide enough to contain a code point. // If the code_point is not a valid Unicode code point // (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted // to "(Invalid Unicode 0xXXXXXXXX)". GTEST_API_ std::string CodePointToUtf8(UInt32 code_point); // Converts a wide string to a narrow string in UTF-8 encoding. // The wide string is assumed to have the following encoding: // UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS) // UTF-32 if sizeof(wchar_t) == 4 (on Linux) // Parameter str points to a null-terminated wide string. // Parameter num_chars may additionally limit the number // of wchar_t characters processed. -1 is used when the entire string // should be processed. // If the string contains code points that are not valid Unicode code points // (i.e. outside of Unicode range U+0 to U+10FFFF) they will be output // as '(Invalid Unicode 0xXXXXXXXX)'. If the string is in UTF16 encoding // and contains invalid UTF-16 surrogate pairs, values in those pairs // will be encoded as individual Unicode characters from Basic Normal Plane. GTEST_API_ std::string WideStringToUtf8(const wchar_t* str, int num_chars); // Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file // if the variable is present. 
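// --- Editorial aside (not part of the original source): a small sketch of the
// UTF-8 helpers declared above; the variable name is made up.  Kept inside
// #if 0 so the fused source is unaffected.
#if 0
// U+20AC (the euro sign) is outside ASCII, so it encodes to three bytes.
const std::string euro = ::testing::internal::CodePointToUtf8(0x20AC);
// euro now holds "\xE2\x82\xAC".
#endif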
If a file already exists at this location, this // function will write over it. If the variable is present, but the file cannot // be created, prints an error and exits. void WriteToShardStatusFileIfNeeded(); // Checks whether sharding is enabled by examining the relevant // environment variable values. If the variables are present, // but inconsistent (e.g., shard_index >= total_shards), prints // an error and exits. If in_subprocess_for_death_test, sharding is // disabled because it must only be applied to the original test // process. Otherwise, we could filter out death tests we intended to execute. GTEST_API_ bool ShouldShard(const char* total_shards_str, const char* shard_index_str, bool in_subprocess_for_death_test); // Parses the environment variable var as an Int32. If it is unset, // returns default_val. If it is not an Int32, prints an error and // and aborts. GTEST_API_ Int32 Int32FromEnvOrDie(const char* env_var, Int32 default_val); // Given the total number of shards, the shard index, and the test id, // returns true iff the test should be run on this shard. The test id is // some arbitrary but unique non-negative integer assigned to each test // method. Assumes that 0 <= shard_index < total_shards. GTEST_API_ bool ShouldRunTestOnShard( int total_shards, int shard_index, int test_id); // STL container utilities. // Returns the number of elements in the given container that satisfy // the given predicate. template inline int CountIf(const Container& c, Predicate predicate) { // Implemented as an explicit loop since std::count_if() in libCstd on // Solaris has a non-standard signature. int count = 0; for (typename Container::const_iterator it = c.begin(); it != c.end(); ++it) { if (predicate(*it)) ++count; } return count; } // Applies a function/functor to each element in the container. template void ForEach(const Container& c, Functor functor) { std::for_each(c.begin(), c.end(), functor); } // Returns the i-th element of the vector, or default_value if i is not // in range [0, v.size()). template inline E GetElementOr(const std::vector& v, int i, E default_value) { return (i < 0 || i >= static_cast(v.size())) ? default_value : v[i]; } // Performs an in-place shuffle of a range of the vector's elements. // 'begin' and 'end' are element indices as an STL-style range; // i.e. [begin, end) are shuffled, where 'end' == size() means to // shuffle to the end of the vector. template void ShuffleRange(internal::Random* random, int begin, int end, std::vector* v) { const int size = static_cast(v->size()); GTEST_CHECK_(0 <= begin && begin <= size) << "Invalid shuffle range start " << begin << ": must be in range [0, " << size << "]."; GTEST_CHECK_(begin <= end && end <= size) << "Invalid shuffle range finish " << end << ": must be in range [" << begin << ", " << size << "]."; // Fisher-Yates shuffle, from // http://en.wikipedia.org/wiki/Fisher-Yates_shuffle for (int range_width = end - begin; range_width >= 2; range_width--) { const int last_in_range = begin + range_width - 1; const int selected = begin + random->Generate(range_width); std::swap((*v)[selected], (*v)[last_in_range]); } } // Performs an in-place shuffle of the vector's elements. template inline void Shuffle(internal::Random* random, std::vector* v) { ShuffleRange(random, 0, static_cast(v->size()), v); } // A function for deleting an object. Handy for being used as a // functor. template static void Delete(T* x) { delete x; } // A predicate that checks the key of a TestProperty against a known key. 
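// --- Editorial aside (not part of the original source): a minimal sketch of
// driving the Fisher-Yates helpers defined above; the seed 42 and the vector
// contents are arbitrary.  Kept inside #if 0 so the fused source is unaffected.
#if 0
inline void ShuffleRangeExample() {
  ::testing::internal::Random random(42);  // gtest's deterministic PRNG
  std::vector<int> v;
  for (int i = 0; i < 10; ++i) v.push_back(i);
  // Shuffle only the tail [5, 10); the first five elements stay put.
  ::testing::internal::ShuffleRange(&random, 5, 10, &v);
  // Shuffle the whole vector in place.
  ::testing::internal::Shuffle(&random, &v);
}
#endif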
// // TestPropertyKeyIs is copyable. class TestPropertyKeyIs { public: // Constructor. // // TestPropertyKeyIs has NO default constructor. explicit TestPropertyKeyIs(const std::string& key) : key_(key) {} // Returns true iff the test name of test property matches on key_. bool operator()(const TestProperty& test_property) const { return test_property.key() == key_; } private: std::string key_; }; // Class UnitTestOptions. // // This class contains functions for processing options the user // specifies when running the tests. It has only static members. // // In most cases, the user can specify an option using either an // environment variable or a command line flag. E.g. you can set the // test filter using either GTEST_FILTER or --gtest_filter. If both // the variable and the flag are present, the latter overrides the // former. class GTEST_API_ UnitTestOptions { public: // Functions for processing the gtest_output flag. // Returns the output format, or "" for normal printed output. static std::string GetOutputFormat(); // Returns the absolute path of the requested output file, or the // default (test_detail.xml in the original working directory) if // none was explicitly specified. static std::string GetAbsolutePathToOutputFile(); // Functions for processing the gtest_filter flag. // Returns true iff the wildcard pattern matches the string. The // first ':' or '\0' character in pattern marks the end of it. // // This recursive algorithm isn't very efficient, but is clear and // works well enough for matching test names, which are short. static bool PatternMatchesString(const char *pattern, const char *str); // Returns true iff the user-specified filter matches the test case // name and the test name. static bool FilterMatchesTest(const std::string &test_case_name, const std::string &test_name); #if GTEST_OS_WINDOWS // Function for supporting the gtest_catch_exception flag. // Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the // given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise. // This function is useful as an __except condition. static int GTestShouldProcessSEH(DWORD exception_code); #endif // GTEST_OS_WINDOWS // Returns true if "name" matches the ':' separated list of glob-style // filters in "filter". static bool MatchesFilter(const std::string& name, const char* filter); }; // Returns the current application's name, removing directory path if that // is present. Used by UnitTestOptions::GetOutputFile. GTEST_API_ FilePath GetCurrentExecutableName(); // The role interface for getting the OS stack trace as a string. class OsStackTraceGetterInterface { public: OsStackTraceGetterInterface() {} virtual ~OsStackTraceGetterInterface() {} // Returns the current OS stack trace as an std::string. Parameters: // // max_depth - the maximum number of stack frames to be included // in the trace. // skip_count - the number of top frames to be skipped; doesn't count // against max_depth. virtual string CurrentStackTrace(int max_depth, int skip_count) = 0; // UponLeavingGTest() should be called immediately before Google Test calls // user code. It saves some information about the current stack that // CurrentStackTrace() will use to find and hide Google Test stack frames. virtual void UponLeavingGTest() = 0; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetterInterface); }; // A working implementation of the OsStackTraceGetterInterface interface. 
class OsStackTraceGetter : public OsStackTraceGetterInterface { public: OsStackTraceGetter() : caller_frame_(NULL) {} virtual string CurrentStackTrace(int max_depth, int skip_count) GTEST_LOCK_EXCLUDED_(mutex_); virtual void UponLeavingGTest() GTEST_LOCK_EXCLUDED_(mutex_); // This string is inserted in place of stack frames that are part of // Google Test's implementation. static const char* const kElidedFramesMarker; private: Mutex mutex_; // protects all internal state // We save the stack frame below the frame that calls user code. // We do this because the address of the frame immediately below // the user code changes between the call to UponLeavingGTest() // and any calls to CurrentStackTrace() from within the user code. void* caller_frame_; GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetter); }; // Information about a Google Test trace point. struct TraceInfo { const char* file; int line; std::string message; }; // This is the default global test part result reporter used in UnitTestImpl. // This class should only be used by UnitTestImpl. class DefaultGlobalTestPartResultReporter : public TestPartResultReporterInterface { public: explicit DefaultGlobalTestPartResultReporter(UnitTestImpl* unit_test); // Implements the TestPartResultReporterInterface. Reports the test part // result in the current test. virtual void ReportTestPartResult(const TestPartResult& result); private: UnitTestImpl* const unit_test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultGlobalTestPartResultReporter); }; // This is the default per thread test part result reporter used in // UnitTestImpl. This class should only be used by UnitTestImpl. class DefaultPerThreadTestPartResultReporter : public TestPartResultReporterInterface { public: explicit DefaultPerThreadTestPartResultReporter(UnitTestImpl* unit_test); // Implements the TestPartResultReporterInterface. The implementation just // delegates to the current global test part result reporter of *unit_test_. virtual void ReportTestPartResult(const TestPartResult& result); private: UnitTestImpl* const unit_test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultPerThreadTestPartResultReporter); }; // The private implementation of the UnitTest class. We don't protect // the methods under a mutex, as this class is not accessible by a // user and the UnitTest class that delegates work to this class does // proper locking. class GTEST_API_ UnitTestImpl { public: explicit UnitTestImpl(UnitTest* parent); virtual ~UnitTestImpl(); // There are two different ways to register your own TestPartResultReporter. // You can register your own repoter to listen either only for test results // from the current thread or for results from all threads. // By default, each per-thread test result repoter just passes a new // TestPartResult to the global test result reporter, which registers the // test part result for the currently running test. // Returns the global test part result reporter. TestPartResultReporterInterface* GetGlobalTestPartResultReporter(); // Sets the global test part result reporter. void SetGlobalTestPartResultReporter( TestPartResultReporterInterface* reporter); // Returns the test part result reporter for the current thread. TestPartResultReporterInterface* GetTestPartResultReporterForCurrentThread(); // Sets the test part result reporter for the current thread. void SetTestPartResultReporterForCurrentThread( TestPartResultReporterInterface* reporter); // Gets the number of successful test cases. int successful_test_case_count() const; // Gets the number of failed test cases. 
  int failed_test_case_count() const;

  // Gets the number of all test cases.
  int total_test_case_count() const;

  // Gets the number of all test cases that contain at least one test
  // that should run.
  int test_case_to_run_count() const;

  // Gets the number of successful tests.
  int successful_test_count() const;

  // Gets the number of failed tests.
  int failed_test_count() const;

  // Gets the number of disabled tests that will be reported in the XML report.
  int reportable_disabled_test_count() const;

  // Gets the number of disabled tests.
  int disabled_test_count() const;

  // Gets the number of tests to be printed in the XML report.
  int reportable_test_count() const;

  // Gets the number of all tests.
  int total_test_count() const;

  // Gets the number of tests that should run.
  int test_to_run_count() const;

  // Gets the time of the test program start, in ms from the start of the
  // UNIX epoch.
  TimeInMillis start_timestamp() const { return start_timestamp_; }

  // Gets the elapsed time, in milliseconds.
  TimeInMillis elapsed_time() const { return elapsed_time_; }

  // Returns true iff the unit test passed (i.e. all test cases passed).
  bool Passed() const { return !Failed(); }

  // Returns true iff the unit test failed (i.e. some test case failed
  // or something outside of all tests failed).
  bool Failed() const {
    return failed_test_case_count() > 0 || ad_hoc_test_result()->Failed();
  }

  // Gets the i-th test case among all the test cases. i can range from 0 to
  // total_test_case_count() - 1. If i is not in that range, returns NULL.
  const TestCase* GetTestCase(int i) const {
    const int index = GetElementOr(test_case_indices_, i, -1);
    return index < 0 ? NULL : test_cases_[index];
  }

  // Gets the i-th test case among all the test cases. i can range from 0 to
  // total_test_case_count() - 1. If i is not in that range, returns NULL.
  TestCase* GetMutableTestCase(int i) {
    const int index = GetElementOr(test_case_indices_, i, -1);
    return index < 0 ? NULL : test_cases_[index];
  }

  // Provides access to the event listener list.
  TestEventListeners* listeners() { return &listeners_; }

  // Returns the TestResult for the test that's currently running, or
  // the TestResult for the ad hoc test if no test is running.
  TestResult* current_test_result();

  // Returns the TestResult for the ad hoc test.
  const TestResult* ad_hoc_test_result() const { return &ad_hoc_test_result_; }

  // Sets the OS stack trace getter.
  //
  // Does nothing if the input and the current OS stack trace getter
  // are the same; otherwise, deletes the old getter and makes the
  // input the current getter.
  void set_os_stack_trace_getter(OsStackTraceGetterInterface* getter);

  // Returns the current OS stack trace getter if it is not NULL;
  // otherwise, creates an OsStackTraceGetter, makes it the current
  // getter, and returns it.
  OsStackTraceGetterInterface* os_stack_trace_getter();

  // Returns the current OS stack trace as an std::string.
  //
  // The maximum number of stack frames to be included is specified by
  // the gtest_stack_trace_depth flag. The skip_count parameter
  // specifies the number of top frames to be skipped, which doesn't
  // count against the number of frames to be included.
  //
  // For example, if Foo() calls Bar(), which in turn calls
  // CurrentOsStackTraceExceptTop(1), Foo() will be included in the
  // trace but Bar() and CurrentOsStackTraceExceptTop() won't.
  std::string CurrentOsStackTraceExceptTop(int skip_count) GTEST_NO_INLINE_;

  // Finds and returns a TestCase with the given name. If one doesn't
  // exist, creates one and returns it.
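// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test: GetTestCase() and
// GetMutableTestCase() above read through test_case_indices_, so shuffling
// only permutes a vector of ints while the owning test_cases_ vector keeps its
// original order (which can later be restored by resetting the indices). The
// standalone version below shows the same indirection with strings standing
// in for TestCase objects.
#if 0  // sketch only -- not compiled
#include <cstdio>
#include <string>
#include <vector>

int main() {
  // The owning list never moves; only the index vector is permuted.
  std::vector<std::string> items;
  items.push_back("alpha");
  items.push_back("beta");
  items.push_back("gamma");

  std::vector<int> indices;
  for (int i = 0; i < static_cast<int>(items.size()); ++i) indices.push_back(i);

  // A hand-rolled "shuffle": rotate the indices left by one position.
  indices.push_back(indices.front());
  indices.erase(indices.begin());

  // Element i in "shuffled order" is items[indices[i]].
  for (size_t i = 0; i < indices.size(); ++i)
    std::printf("%s\n", items[indices[i]].c_str());

  // Restoring the original order only touches the indices.
  for (int i = 0; i < static_cast<int>(items.size()); ++i) indices[i] = i;
  return 0;
}
#endif
// ---------------------------------------------------------------------------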
// // Arguments: // // test_case_name: name of the test case // type_param: the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase* GetTestCase(const char* test_case_name, const char* type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc); // Adds a TestInfo to the unit test. // // Arguments: // // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // test_info: the TestInfo object void AddTestInfo(Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc, TestInfo* test_info) { // In order to support thread-safe death tests, we need to // remember the original working directory when the test program // was first invoked. We cannot do this in RUN_ALL_TESTS(), as // the user may have changed the current directory before calling // RUN_ALL_TESTS(). Therefore we capture the current directory in // AddTestInfo(), which is called to register a TEST or TEST_F // before main() is reached. if (original_working_dir_.IsEmpty()) { original_working_dir_.Set(FilePath::GetCurrentDir()); GTEST_CHECK_(!original_working_dir_.IsEmpty()) << "Failed to get the current working directory."; } GetTestCase(test_info->test_case_name(), test_info->type_param(), set_up_tc, tear_down_tc)->AddTestInfo(test_info); } #if GTEST_HAS_PARAM_TEST // Returns ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. internal::ParameterizedTestCaseRegistry& parameterized_test_registry() { return parameterized_test_registry_; } #endif // GTEST_HAS_PARAM_TEST // Sets the TestCase object for the test that's currently running. void set_current_test_case(TestCase* a_current_test_case) { current_test_case_ = a_current_test_case; } // Sets the TestInfo object for the test that's currently running. If // current_test_info is NULL, the assertion results will be stored in // ad_hoc_test_result_. void set_current_test_info(TestInfo* a_current_test_info) { current_test_info_ = a_current_test_info; } // Registers all parameterized tests defined using TEST_P and // INSTANTIATE_TEST_CASE_P, creating regular tests for each test/parameter // combination. This method can be called more then once; it has guards // protecting from registering the tests more then once. If // value-parameterized tests are disabled, RegisterParameterizedTests is // present but does nothing. void RegisterParameterizedTests(); // Runs all tests in this UnitTest object, prints the result, and // returns true if all tests are successful. If any exception is // thrown during a test, this test is considered to be failed, but // the rest of the tests will still be run. bool RunAllTests(); // Clears the results of all tests, except the ad hoc tests. void ClearNonAdHocTestResult() { ForEach(test_cases_, TestCase::ClearTestCaseResult); } // Clears the results of ad-hoc test assertions. void ClearAdHocTestResult() { ad_hoc_test_result_.Clear(); } // Adds a TestProperty to the current TestResult object when invoked in a // context of a test or a test case, or to the global property set. If the // result already contains a property with the same key, the value will be // updated. 
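// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test's own sources:
// RegisterParameterizedTests() is what eventually turns declarations like the
// ones below into ordinary registered tests, one per parameter value. The
// fixture and test names are hypothetical; the macros are the documented
// public TEST_P / INSTANTIATE_TEST_CASE_P API of this Google Test generation.
#if 0  // sketch only -- not compiled
#include <gtest/gtest.h>

class EvenTest : public ::testing::TestWithParam<int> {};

TEST_P(EvenTest, IsDivisibleByTwo) {
  EXPECT_EQ(0, GetParam() % 2);
}

// Registers EvenValues/EvenTest.IsDivisibleByTwo/0 .. /2.
INSTANTIATE_TEST_CASE_P(EvenValues, EvenTest, ::testing::Values(2, 4, 6));

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
#endif
// ---------------------------------------------------------------------------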
void RecordProperty(const TestProperty& test_property); enum ReactionToSharding { HONOR_SHARDING_PROTOCOL, IGNORE_SHARDING_PROTOCOL }; // Matches the full name of each test against the user-specified // filter to decide whether the test should run, then records the // result in each TestCase and TestInfo object. // If shard_tests == HONOR_SHARDING_PROTOCOL, further filters tests // based on sharding variables in the environment. // Returns the number of tests that should run. int FilterTests(ReactionToSharding shard_tests); // Prints the names of the tests matching the user-specified filter flag. void ListTestsMatchingFilter(); const TestCase* current_test_case() const { return current_test_case_; } TestInfo* current_test_info() { return current_test_info_; } const TestInfo* current_test_info() const { return current_test_info_; } // Returns the vector of environments that need to be set-up/torn-down // before/after the tests are run. std::vector& environments() { return environments_; } // Getters for the per-thread Google Test trace stack. std::vector& gtest_trace_stack() { return *(gtest_trace_stack_.pointer()); } const std::vector& gtest_trace_stack() const { return gtest_trace_stack_.get(); } #if GTEST_HAS_DEATH_TEST void InitDeathTestSubprocessControlInfo() { internal_run_death_test_flag_.reset(ParseInternalRunDeathTestFlag()); } // Returns a pointer to the parsed --gtest_internal_run_death_test // flag, or NULL if that flag was not specified. // This information is useful only in a death test child process. // Must not be called before a call to InitGoogleTest. const InternalRunDeathTestFlag* internal_run_death_test_flag() const { return internal_run_death_test_flag_.get(); } // Returns a pointer to the current death test factory. internal::DeathTestFactory* death_test_factory() { return death_test_factory_.get(); } void SuppressTestEventsIfInSubprocess(); friend class ReplaceDeathTestFactory; #endif // GTEST_HAS_DEATH_TEST // Initializes the event listener performing XML output as specified by // UnitTestOptions. Must not be called before InitGoogleTest. void ConfigureXmlOutput(); #if GTEST_CAN_STREAM_RESULTS_ // Initializes the event listener for streaming test results to a socket. // Must not be called before InitGoogleTest. void ConfigureStreamingOutput(); #endif // Performs initialization dependent upon flag values obtained in // ParseGoogleTestFlagsOnly. Is called from InitGoogleTest after the call to // ParseGoogleTestFlagsOnly. In case a user neglects to call InitGoogleTest // this function is also called from RunAllTests. Since this function can be // called more than once, it has to be idempotent. void PostFlagParsingInit(); // Gets the random seed used at the start of the current test iteration. int random_seed() const { return random_seed_; } // Gets the random number generator. internal::Random* random() { return &random_; } // Shuffles all test cases, and the tests within each test case, // making sure that death tests are still run first. void ShuffleTests(); // Restores the test cases and tests to their order before the first shuffle. void UnshuffleTests(); // Returns the value of GTEST_FLAG(catch_exceptions) at the moment // UnitTest::Run() starts. bool catch_exceptions() const { return catch_exceptions_; } private: friend class ::testing::UnitTest; // Used by UnitTest::Run() to capture the state of // GTEST_FLAG(catch_exceptions) at the moment it starts. 
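// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test's own sources: the per-thread
// gtest_trace_stack() above is what the public SCOPED_TRACE() macro pushes
// onto; every entry still in scope is appended to any failure message raised
// inside it. The helper and test names below are hypothetical.
#if 0  // sketch only -- not compiled
#include <sstream>
#include <gtest/gtest.h>

// A helper whose failures would be hard to locate without extra context.
static void ExpectPositive(int value) {
  EXPECT_GT(value, 0);
}

TEST(TraceDemo, EachIterationIsLabelled) {
  const int values[] = {3, 1, -2};
  for (int i = 0; i < 3; ++i) {
    std::ostringstream label;
    label << "iteration #" << i;
    // On failure, the trace text is printed after the assertion message.
    SCOPED_TRACE(label.str());
    ExpectPositive(values[i]);
  }
}
#endif
// ---------------------------------------------------------------------------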
void set_catch_exceptions(bool value) { catch_exceptions_ = value; } // The UnitTest object that owns this implementation object. UnitTest* const parent_; // The working directory when the first TEST() or TEST_F() was // executed. internal::FilePath original_working_dir_; // The default test part result reporters. DefaultGlobalTestPartResultReporter default_global_test_part_result_reporter_; DefaultPerThreadTestPartResultReporter default_per_thread_test_part_result_reporter_; // Points to (but doesn't own) the global test part result reporter. TestPartResultReporterInterface* global_test_part_result_repoter_; // Protects read and write access to global_test_part_result_reporter_. internal::Mutex global_test_part_result_reporter_mutex_; // Points to (but doesn't own) the per-thread test part result reporter. internal::ThreadLocal per_thread_test_part_result_reporter_; // The vector of environments that need to be set-up/torn-down // before/after the tests are run. std::vector environments_; // The vector of TestCases in their original order. It owns the // elements in the vector. std::vector test_cases_; // Provides a level of indirection for the test case list to allow // easy shuffling and restoring the test case order. The i-th // element of this vector is the index of the i-th test case in the // shuffled order. std::vector test_case_indices_; #if GTEST_HAS_PARAM_TEST // ParameterizedTestRegistry object used to register value-parameterized // tests. internal::ParameterizedTestCaseRegistry parameterized_test_registry_; // Indicates whether RegisterParameterizedTests() has been called already. bool parameterized_tests_registered_; #endif // GTEST_HAS_PARAM_TEST // Index of the last death test case registered. Initially -1. int last_death_test_case_; // This points to the TestCase for the currently running test. It // changes as Google Test goes through one test case after another. // When no test is running, this is set to NULL and Google Test // stores assertion results in ad_hoc_test_result_. Initially NULL. TestCase* current_test_case_; // This points to the TestInfo for the currently running test. It // changes as Google Test goes through one test after another. When // no test is running, this is set to NULL and Google Test stores // assertion results in ad_hoc_test_result_. Initially NULL. TestInfo* current_test_info_; // Normally, a user only writes assertions inside a TEST or TEST_F, // or inside a function called by a TEST or TEST_F. Since Google // Test keeps track of which test is current running, it can // associate such an assertion with the test it belongs to. // // If an assertion is encountered when no TEST or TEST_F is running, // Google Test attributes the assertion result to an imaginary "ad hoc" // test, and records the result in ad_hoc_test_result_. TestResult ad_hoc_test_result_; // The list of event listeners that can be used to track events inside // Google Test. TestEventListeners listeners_; // The OS stack trace getter. Will be deleted when the UnitTest // object is destructed. By default, an OsStackTraceGetter is used, // but the user can set this field to use a custom getter if that is // desired. OsStackTraceGetterInterface* os_stack_trace_getter_; // True iff PostFlagParsingInit() has been called. bool post_flag_parse_init_performed_; // The random number seed used at the beginning of the test run. int random_seed_; // Our random number generator. internal::Random random_; // The time of the test program start, in ms from the start of the // UNIX epoch. 
TimeInMillis start_timestamp_; // How long the test took to run, in milliseconds. TimeInMillis elapsed_time_; #if GTEST_HAS_DEATH_TEST // The decomposed components of the gtest_internal_run_death_test flag, // parsed when RUN_ALL_TESTS is called. internal::scoped_ptr internal_run_death_test_flag_; internal::scoped_ptr death_test_factory_; #endif // GTEST_HAS_DEATH_TEST // A per-thread stack of traces created by the SCOPED_TRACE() macro. internal::ThreadLocal > gtest_trace_stack_; // The value of GTEST_FLAG(catch_exceptions) at the moment RunAllTests() // starts. bool catch_exceptions_; GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTestImpl); }; // class UnitTestImpl // Convenience function for accessing the global UnitTest // implementation object. inline UnitTestImpl* GetUnitTestImpl() { return UnitTest::GetInstance()->impl(); } #if GTEST_USES_SIMPLE_RE // Internal helper functions for implementing the simple regular // expression matcher. GTEST_API_ bool IsInSet(char ch, const char* str); GTEST_API_ bool IsAsciiDigit(char ch); GTEST_API_ bool IsAsciiPunct(char ch); GTEST_API_ bool IsRepeat(char ch); GTEST_API_ bool IsAsciiWhiteSpace(char ch); GTEST_API_ bool IsAsciiWordChar(char ch); GTEST_API_ bool IsValidEscape(char ch); GTEST_API_ bool AtomMatchesChar(bool escaped, char pattern, char ch); GTEST_API_ bool ValidateRegex(const char* regex); GTEST_API_ bool MatchRegexAtHead(const char* regex, const char* str); GTEST_API_ bool MatchRepetitionAndRegexAtHead( bool escaped, char ch, char repeat, const char* regex, const char* str); GTEST_API_ bool MatchRegexAnywhere(const char* regex, const char* str); #endif // GTEST_USES_SIMPLE_RE // Parses the command line for Google Test flags, without initializing // other parts of Google Test. GTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, char** argv); GTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv); #if GTEST_HAS_DEATH_TEST // Returns the message describing the last system error, regardless of the // platform. GTEST_API_ std::string GetLastErrnoDescription(); # if GTEST_OS_WINDOWS // Provides leak-safe Windows kernel handle ownership. class AutoHandle { public: AutoHandle() : handle_(INVALID_HANDLE_VALUE) {} explicit AutoHandle(HANDLE handle) : handle_(handle) {} ~AutoHandle() { Reset(); } HANDLE Get() const { return handle_; } void Reset() { Reset(INVALID_HANDLE_VALUE); } void Reset(HANDLE handle) { if (handle != handle_) { if (handle_ != INVALID_HANDLE_VALUE) ::CloseHandle(handle_); handle_ = handle; } } private: HANDLE handle_; GTEST_DISALLOW_COPY_AND_ASSIGN_(AutoHandle); }; # endif // GTEST_OS_WINDOWS // Attempts to parse a string into a positive integer pointed to by the // number parameter. Returns true if that is possible. // GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we can use // it here. template bool ParseNaturalNumber(const ::std::string& str, Integer* number) { // Fail fast if the given string does not begin with a digit; // this bypasses strtoXXX's "optional leading whitespace and plus // or minus sign" semantics, which are undesirable here. if (str.empty() || !IsDigit(str[0])) { return false; } errno = 0; char* end; // BiggestConvertible is the largest integer type that system-provided // string-to-number conversion routines can return. # if GTEST_OS_WINDOWS && !defined(__GNUC__) // MSVC and C++ Builder define __int64 instead of the standard long long. 
typedef unsigned __int64 BiggestConvertible; const BiggestConvertible parsed = _strtoui64(str.c_str(), &end, 10); # else typedef unsigned long long BiggestConvertible; // NOLINT const BiggestConvertible parsed = strtoull(str.c_str(), &end, 10); # endif // GTEST_OS_WINDOWS && !defined(__GNUC__) const bool parse_success = *end == '\0' && errno == 0; // TODO(vladl@google.com): Convert this to compile time assertion when it is // available. GTEST_CHECK_(sizeof(Integer) <= sizeof(parsed)); const Integer result = static_cast(parsed); if (parse_success && static_cast(result) == parsed) { *number = result; return true; } return false; } #endif // GTEST_HAS_DEATH_TEST // TestResult contains some private methods that should be hidden from // Google Test user but are required for testing. This class allow our tests // to access them. // // This class is supplied only for the purpose of testing Google Test's own // constructs. Do not use it in user tests, either directly or indirectly. class TestResultAccessor { public: static void RecordProperty(TestResult* test_result, const std::string& xml_element, const TestProperty& property) { test_result->RecordProperty(xml_element, property); } static void ClearTestPartResults(TestResult* test_result) { test_result->ClearTestPartResults(); } static const std::vector& test_part_results( const TestResult& test_result) { return test_result.test_part_results(); } }; #if GTEST_CAN_STREAM_RESULTS_ // Streams test results to the given port on the given host machine. class StreamingListener : public EmptyTestEventListener { public: // Abstract base class for writing strings to a socket. class AbstractSocketWriter { public: virtual ~AbstractSocketWriter() {} // Sends a string to the socket. virtual void Send(const string& message) = 0; // Closes the socket. virtual void CloseConnection() {} // Sends a string and a newline to the socket. void SendLn(const string& message) { Send(message + "\n"); } }; // Concrete class for actually writing strings to a socket. class SocketWriter : public AbstractSocketWriter { public: SocketWriter(const string& host, const string& port) : sockfd_(-1), host_name_(host), port_num_(port) { MakeConnection(); } virtual ~SocketWriter() { if (sockfd_ != -1) CloseConnection(); } // Sends a string to the socket. virtual void Send(const string& message) { GTEST_CHECK_(sockfd_ != -1) << "Send() can be called only when there is a connection."; const int len = static_cast(message.length()); if (write(sockfd_, message.c_str(), len) != len) { GTEST_LOG_(WARNING) << "stream_result_to: failed to stream to " << host_name_ << ":" << port_num_; } } private: // Creates a client socket and connects to the server. void MakeConnection(); // Closes the socket. void CloseConnection() { GTEST_CHECK_(sockfd_ != -1) << "CloseConnection() can be called only when there is a connection."; close(sockfd_); sockfd_ = -1; } int sockfd_; // socket file descriptor const string host_name_; const string port_num_; GTEST_DISALLOW_COPY_AND_ASSIGN_(SocketWriter); }; // class SocketWriter // Escapes '=', '&', '%', and '\n' characters in str as "%xx". 
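// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test: the same fail-fast digit check
// and strtoull/errno/range pattern used by ParseNaturalNumber() above, written
// as a standalone helper for a fixed target type. ParsePort is a hypothetical
// name; strtoull is assumed to be available (C99 / C++11).
#if 0  // sketch only -- not compiled
#include <cctype>
#include <cerrno>
#include <cstdio>
#include <cstdlib>
#include <string>

// Parses a non-negative decimal integer that must fit in an int.
bool ParsePort(const std::string& str, int* number) {
  if (str.empty() || !isdigit(static_cast<unsigned char>(str[0]))) return false;
  errno = 0;
  char* end = NULL;
  const unsigned long long parsed = strtoull(str.c_str(), &end, 10);
  if (*end != '\0' || errno != 0) return false;  // trailing junk or overflow
  const int result = static_cast<int>(parsed);
  if (static_cast<unsigned long long>(result) != parsed) return false;  // too big
  *number = result;
  return true;
}

int main() {
  int port = 0;
  std::printf("%d %d\n", ParsePort("8080", &port) ? 1 : 0, port);         // 1 8080
  std::printf("%d\n", ParsePort("99999999999999999999", &port) ? 1 : 0);  // 0
  return 0;
}
#endif
// ---------------------------------------------------------------------------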
static string UrlEncode(const char* str); StreamingListener(const string& host, const string& port) : socket_writer_(new SocketWriter(host, port)) { Start(); } explicit StreamingListener(AbstractSocketWriter* socket_writer) : socket_writer_(socket_writer) { Start(); } void OnTestProgramStart(const UnitTest& /* unit_test */) { SendLn("event=TestProgramStart"); } void OnTestProgramEnd(const UnitTest& unit_test) { // Note that Google Test current only report elapsed time for each // test iteration, not for the entire test program. SendLn("event=TestProgramEnd&passed=" + FormatBool(unit_test.Passed())); // Notify the streaming server to stop. socket_writer_->CloseConnection(); } void OnTestIterationStart(const UnitTest& /* unit_test */, int iteration) { SendLn("event=TestIterationStart&iteration=" + StreamableToString(iteration)); } void OnTestIterationEnd(const UnitTest& unit_test, int /* iteration */) { SendLn("event=TestIterationEnd&passed=" + FormatBool(unit_test.Passed()) + "&elapsed_time=" + StreamableToString(unit_test.elapsed_time()) + "ms"); } void OnTestCaseStart(const TestCase& test_case) { SendLn(std::string("event=TestCaseStart&name=") + test_case.name()); } void OnTestCaseEnd(const TestCase& test_case) { SendLn("event=TestCaseEnd&passed=" + FormatBool(test_case.Passed()) + "&elapsed_time=" + StreamableToString(test_case.elapsed_time()) + "ms"); } void OnTestStart(const TestInfo& test_info) { SendLn(std::string("event=TestStart&name=") + test_info.name()); } void OnTestEnd(const TestInfo& test_info) { SendLn("event=TestEnd&passed=" + FormatBool((test_info.result())->Passed()) + "&elapsed_time=" + StreamableToString((test_info.result())->elapsed_time()) + "ms"); } void OnTestPartResult(const TestPartResult& test_part_result) { const char* file_name = test_part_result.file_name(); if (file_name == NULL) file_name = ""; SendLn("event=TestPartResult&file=" + UrlEncode(file_name) + "&line=" + StreamableToString(test_part_result.line_number()) + "&message=" + UrlEncode(test_part_result.message())); } private: // Sends the given message and a newline to the socket. void SendLn(const string& message) { socket_writer_->SendLn(message); } // Called at the start of streaming to notify the receiver what // protocol we are using. void Start() { SendLn("gtest_streaming_protocol_version=1.0"); } string FormatBool(bool value) { return value ? "1" : "0"; } const scoped_ptr socket_writer_; GTEST_DISALLOW_COPY_AND_ASSIGN_(StreamingListener); }; // class StreamingListener #endif // GTEST_CAN_STREAM_RESULTS_ } // namespace internal } // namespace testing #endif // GTEST_SRC_GTEST_INTERNAL_INL_H_ #undef GTEST_IMPLEMENTATION_ #if GTEST_OS_WINDOWS # define vsnprintf _vsnprintf #endif // GTEST_OS_WINDOWS namespace testing { using internal::CountIf; using internal::ForEach; using internal::GetElementOr; using internal::Shuffle; // Constants. // A test whose test case name or test name matches this filter is // disabled and not run. static const char kDisableTestFilter[] = "DISABLED_*:*/DISABLED_*"; // A test case whose name matches this filter is considered a death // test case and will be run before test cases whose name doesn't // match this filter. static const char kDeathTestCaseFilter[] = "*DeathTest:*DeathTest/*"; // A test filter that matches everything. static const char kUniversalFilter[] = "*"; // The default output file for XML output. static const char kDefaultOutputFile[] = "test_detail.xml"; // The environment variable name for the test shard index. 
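// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test's own sources: StreamingListener
// above is just one EmptyTestEventListener; user code can observe the same
// events through the documented listener API. LinePrinter is a hypothetical
// listener name.
#if 0  // sketch only -- not compiled
#include <cstdio>
#include <gtest/gtest.h>

// Prints one line per test, roughly in the spirit of the streaming events.
class LinePrinter : public ::testing::EmptyTestEventListener {
 public:
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    std::printf("start %s.%s\n", test_info.test_case_name(), test_info.name());
  }
  virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
    std::printf("end   %s.%s passed=%d\n", test_info.test_case_name(),
                test_info.name(), test_info.result()->Passed() ? 1 : 0);
  }
};

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  // The listener list takes ownership of the appended object.
  ::testing::UnitTest::GetInstance()->listeners().Append(new LinePrinter);
  return RUN_ALL_TESTS();
}
#endif
// ---------------------------------------------------------------------------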
static const char kTestShardIndex[] = "GTEST_SHARD_INDEX"; // The environment variable name for the total number of test shards. static const char kTestTotalShards[] = "GTEST_TOTAL_SHARDS"; // The environment variable name for the test shard status file. static const char kTestShardStatusFile[] = "GTEST_SHARD_STATUS_FILE"; namespace internal { // The text used in failure messages to indicate the start of the // stack trace. const char kStackTraceMarker[] = "\nStack trace:\n"; // g_help_flag is true iff the --help flag or an equivalent form is // specified on the command line. bool g_help_flag = false; } // namespace internal static const char* GetDefaultFilter() { return kUniversalFilter; } GTEST_DEFINE_bool_( also_run_disabled_tests, internal::BoolFromGTestEnv("also_run_disabled_tests", false), "Run disabled tests too, in addition to the tests normally being run."); GTEST_DEFINE_bool_( break_on_failure, internal::BoolFromGTestEnv("break_on_failure", false), "True iff a failed assertion should be a debugger break-point."); GTEST_DEFINE_bool_( catch_exceptions, internal::BoolFromGTestEnv("catch_exceptions", true), "True iff " GTEST_NAME_ " should catch exceptions and treat them as test failures."); GTEST_DEFINE_string_( color, internal::StringFromGTestEnv("color", "auto"), "Whether to use colors in the output. Valid values: yes, no, " "and auto. 'auto' means to use colors if the output is " "being sent to a terminal and the TERM environment variable " "is set to a terminal type that supports colors."); GTEST_DEFINE_string_( filter, internal::StringFromGTestEnv("filter", GetDefaultFilter()), "A colon-separated list of glob (not regex) patterns " "for filtering the tests to run, optionally followed by a " "'-' and a : separated list of negative patterns (tests to " "exclude). A test is run if it matches one of the positive " "patterns and does not match any of the negative patterns."); GTEST_DEFINE_bool_(list_tests, false, "List all tests without running them."); GTEST_DEFINE_string_( output, internal::StringFromGTestEnv("output", ""), "A format (currently must be \"xml\"), optionally followed " "by a colon and an output file name or directory. A directory " "is indicated by a trailing pathname separator. " "Examples: \"xml:filename.xml\", \"xml::directoryname/\". " "If a directory is specified, output files will be created " "within that directory, with file-names based on the test " "executable's name and, if necessary, made unique by adding " "digits."); GTEST_DEFINE_bool_( print_time, internal::BoolFromGTestEnv("print_time", true), "True iff " GTEST_NAME_ " should display elapsed time in text output."); GTEST_DEFINE_int32_( random_seed, internal::Int32FromGTestEnv("random_seed", 0), "Random number seed to use when shuffling test orders. Must be in range " "[1, 99999], or 0 to use a seed based on the current time."); GTEST_DEFINE_int32_( repeat, internal::Int32FromGTestEnv("repeat", 1), "How many times to repeat each test. Specify a negative number " "for repeating forever. 
Useful for shaking out flaky tests."); GTEST_DEFINE_bool_( show_internal_stack_frames, false, "True iff " GTEST_NAME_ " should include internal stack frames when " "printing test failure stack traces."); GTEST_DEFINE_bool_( shuffle, internal::BoolFromGTestEnv("shuffle", false), "True iff " GTEST_NAME_ " should randomize tests' order on every run."); GTEST_DEFINE_int32_( stack_trace_depth, internal::Int32FromGTestEnv("stack_trace_depth", kMaxStackTraceDepth), "The maximum number of stack frames to print when an " "assertion fails. The valid range is 0 through 100, inclusive."); GTEST_DEFINE_string_( stream_result_to, internal::StringFromGTestEnv("stream_result_to", ""), "This flag specifies the host name and the port number on which to stream " "test results. Example: \"localhost:555\". The flag is effective only on " "Linux."); GTEST_DEFINE_bool_( throw_on_failure, internal::BoolFromGTestEnv("throw_on_failure", false), "When this flag is specified, a failed assertion will throw an exception " "if exceptions are enabled or exit the program with a non-zero code " "otherwise."); namespace internal { // Generates a random number from [0, range), using a Linear // Congruential Generator (LCG). Crashes if 'range' is 0 or greater // than kMaxRange. UInt32 Random::Generate(UInt32 range) { // These constants are the same as are used in glibc's rand(3). state_ = (1103515245U*state_ + 12345U) % kMaxRange; GTEST_CHECK_(range > 0) << "Cannot generate a number in the range [0, 0)."; GTEST_CHECK_(range <= kMaxRange) << "Generation of a number in [0, " << range << ") was requested, " << "but this can only generate numbers in [0, " << kMaxRange << ")."; // Converting via modulus introduces a bit of downward bias, but // it's simple, and a linear congruential generator isn't too good // to begin with. return state_ % range; } // GTestIsInitialized() returns true iff the user has initialized // Google Test. Useful for catching the user mistake of not initializing // Google Test before calling RUN_ALL_TESTS(). // // A user must call testing::InitGoogleTest() to initialize Google // Test. g_init_gtest_count is set to the number of times // InitGoogleTest() has been called. We don't protect this variable // under a mutex as it is only accessed in the main thread. GTEST_API_ int g_init_gtest_count = 0; static bool GTestIsInitialized() { return g_init_gtest_count != 0; } // Iterates over a vector of TestCases, keeping a running sum of the // results of calling a given int-returning method on each. // Returns the sum. static int SumOverTestCaseList(const std::vector& case_list, int (TestCase::*method)() const) { int sum = 0; for (size_t i = 0; i < case_list.size(); i++) { sum += (case_list[i]->*method)(); } return sum; } // Returns true iff the test case passed. static bool TestCasePassed(const TestCase* test_case) { return test_case->should_run() && test_case->Passed(); } // Returns true iff the test case failed. static bool TestCaseFailed(const TestCase* test_case) { return test_case->should_run() && test_case->Failed(); } // Returns true iff test_case contains at least one test that should // run. static bool ShouldRunTestCase(const TestCase* test_case) { return test_case->should_run(); } // AssertHelper constructor. AssertHelper::AssertHelper(TestPartResult::Type type, const char* file, int line, const char* message) : data_(new AssertHelperData(type, file, line, message)) { } AssertHelper::~AssertHelper() { delete data_; } // Message assignment, for assertion streaming support. 
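// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test's own sources: each of the
// GTEST_DEFINE_ flags above can also be set from code through the documented
// ::testing::GTEST_FLAG() accessor. InitGoogleTest() parses the --gtest_*
// options, so assignments made after that call are not overridden by the
// command line. The test name below is hypothetical.
#if 0  // sketch only -- not compiled
#include <gtest/gtest.h>

TEST(FlagDemo, AlwaysPasses) { SUCCEED(); }

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);  // parses --gtest_* options first
  // Force a filtered, repeated, shuffled run regardless of the command line.
  ::testing::GTEST_FLAG(filter) = "FlagDemo.*";
  ::testing::GTEST_FLAG(repeat) = 3;
  ::testing::GTEST_FLAG(shuffle) = true;
  return RUN_ALL_TESTS();
}
#endif
// ---------------------------------------------------------------------------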
void AssertHelper::operator=(const Message& message) const { UnitTest::GetInstance()-> AddTestPartResult(data_->type, data_->file, data_->line, AppendUserMessage(data_->message, message), UnitTest::GetInstance()->impl() ->CurrentOsStackTraceExceptTop(1) // Skips the stack frame for this function itself. ); // NOLINT } // Mutex for linked pointers. GTEST_API_ GTEST_DEFINE_STATIC_MUTEX_(g_linked_ptr_mutex); // Application pathname gotten in InitGoogleTest. std::string g_executable_path; // Returns the current application's name, removing directory path if that // is present. FilePath GetCurrentExecutableName() { FilePath result; #if GTEST_OS_WINDOWS result.Set(FilePath(g_executable_path).RemoveExtension("exe")); #else result.Set(FilePath(g_executable_path)); #endif // GTEST_OS_WINDOWS return result.RemoveDirectoryName(); } // Functions for processing the gtest_output flag. // Returns the output format, or "" for normal printed output. std::string UnitTestOptions::GetOutputFormat() { const char* const gtest_output_flag = GTEST_FLAG(output).c_str(); if (gtest_output_flag == NULL) return std::string(""); const char* const colon = strchr(gtest_output_flag, ':'); return (colon == NULL) ? std::string(gtest_output_flag) : std::string(gtest_output_flag, colon - gtest_output_flag); } // Returns the name of the requested output file, or the default if none // was explicitly specified. std::string UnitTestOptions::GetAbsolutePathToOutputFile() { const char* const gtest_output_flag = GTEST_FLAG(output).c_str(); if (gtest_output_flag == NULL) return ""; const char* const colon = strchr(gtest_output_flag, ':'); if (colon == NULL) return internal::FilePath::ConcatPaths( internal::FilePath( UnitTest::GetInstance()->original_working_dir()), internal::FilePath(kDefaultOutputFile)).string(); internal::FilePath output_name(colon + 1); if (!output_name.IsAbsolutePath()) // TODO(wan@google.com): on Windows \some\path is not an absolute // path (as its meaning depends on the current drive), yet the // following logic for turning it into an absolute path is wrong. // Fix it. output_name = internal::FilePath::ConcatPaths( internal::FilePath(UnitTest::GetInstance()->original_working_dir()), internal::FilePath(colon + 1)); if (!output_name.IsDirectory()) return output_name.string(); internal::FilePath result(internal::FilePath::GenerateUniqueFileName( output_name, internal::GetCurrentExecutableName(), GetOutputFormat().c_str())); return result.string(); } // Returns true iff the wildcard pattern matches the string. The // first ':' or '\0' character in pattern marks the end of it. // // This recursive algorithm isn't very efficient, but is clear and // works well enough for matching test names, which are short. bool UnitTestOptions::PatternMatchesString(const char *pattern, const char *str) { switch (*pattern) { case '\0': case ':': // Either ':' or '\0' marks the end of the pattern. return *str == '\0'; case '?': // Matches any single character. return *str != '\0' && PatternMatchesString(pattern + 1, str + 1); case '*': // Matches any string (possibly empty) of characters. return (*str != '\0' && PatternMatchesString(pattern, str + 1)) || PatternMatchesString(pattern + 1, str); default: // Non-special character. Matches itself. 
return *pattern == *str && PatternMatchesString(pattern + 1, str + 1); } } bool UnitTestOptions::MatchesFilter( const std::string& name, const char* filter) { const char *cur_pattern = filter; for (;;) { if (PatternMatchesString(cur_pattern, name.c_str())) { return true; } // Finds the next pattern in the filter. cur_pattern = strchr(cur_pattern, ':'); // Returns if no more pattern can be found. if (cur_pattern == NULL) { return false; } // Skips the pattern separater (the ':' character). cur_pattern++; } } // Returns true iff the user-specified filter matches the test case // name and the test name. bool UnitTestOptions::FilterMatchesTest(const std::string &test_case_name, const std::string &test_name) { const std::string& full_name = test_case_name + "." + test_name.c_str(); // Split --gtest_filter at '-', if there is one, to separate into // positive filter and negative filter portions const char* const p = GTEST_FLAG(filter).c_str(); const char* const dash = strchr(p, '-'); std::string positive; std::string negative; if (dash == NULL) { positive = GTEST_FLAG(filter).c_str(); // Whole string is a positive filter negative = ""; } else { positive = std::string(p, dash); // Everything up to the dash negative = std::string(dash + 1); // Everything after the dash if (positive.empty()) { // Treat '-test1' as the same as '*-test1' positive = kUniversalFilter; } } // A filter is a colon-separated list of patterns. It matches a // test if any pattern in it matches the test. return (MatchesFilter(full_name, positive.c_str()) && !MatchesFilter(full_name, negative.c_str())); } #if GTEST_HAS_SEH // Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the // given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise. // This function is useful as an __except condition. int UnitTestOptions::GTestShouldProcessSEH(DWORD exception_code) { // Google Test should handle a SEH exception if: // 1. the user wants it to, AND // 2. this is not a breakpoint exception, AND // 3. this is not a C++ exception (VC++ implements them via SEH, // apparently). // // SEH exception code for C++ exceptions. // (see http://support.microsoft.com/kb/185294 for more information). const DWORD kCxxExceptionCode = 0xe06d7363; bool should_handle = true; if (!GTEST_FLAG(catch_exceptions)) should_handle = false; else if (exception_code == EXCEPTION_BREAKPOINT) should_handle = false; else if (exception_code == kCxxExceptionCode) should_handle = false; return should_handle ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH; } #endif // GTEST_HAS_SEH } // namespace internal // The c'tor sets this object as the test part result reporter used by // Google Test. The 'result' parameter specifies where to report the // results. Intercepts only failures from the current thread. ScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter( TestPartResultArray* result) : intercept_mode_(INTERCEPT_ONLY_CURRENT_THREAD), result_(result) { Init(); } // The c'tor sets this object as the test part result reporter used by // Google Test. The 'result' parameter specifies where to report the // results. 
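// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test: the same recursion as
// PatternMatchesString() above, restricted to a single '\0'-terminated pattern
// (no ':' handling) so the '?' and '*' cases are easy to step through.
// GlobMatch is a hypothetical name.
#if 0  // sketch only -- not compiled
#include <cstdio>

// '?' matches any single character; '*' matches any (possibly empty) run.
bool GlobMatch(const char* pattern, const char* str) {
  switch (*pattern) {
    case '\0':
      return *str == '\0';
    case '?':
      return *str != '\0' && GlobMatch(pattern + 1, str + 1);
    case '*':
      // Either let '*' absorb one more character of str, or drop the '*'.
      return (*str != '\0' && GlobMatch(pattern, str + 1)) ||
             GlobMatch(pattern + 1, str);
    default:
      return *pattern == *str && GlobMatch(pattern + 1, str + 1);
  }
}

int main() {
  std::printf("%d\n", GlobMatch("FooTest.*", "FooTest.Bar") ? 1 : 0);     // 1
  std::printf("%d\n", GlobMatch("*DeathTest*", "MyTest.Works") ? 1 : 0);  // 0
  return 0;
}
#endif
// ---------------------------------------------------------------------------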
ScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter( InterceptMode intercept_mode, TestPartResultArray* result) : intercept_mode_(intercept_mode), result_(result) { Init(); } void ScopedFakeTestPartResultReporter::Init() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); if (intercept_mode_ == INTERCEPT_ALL_THREADS) { old_reporter_ = impl->GetGlobalTestPartResultReporter(); impl->SetGlobalTestPartResultReporter(this); } else { old_reporter_ = impl->GetTestPartResultReporterForCurrentThread(); impl->SetTestPartResultReporterForCurrentThread(this); } } // The d'tor restores the test part result reporter used by Google Test // before. ScopedFakeTestPartResultReporter::~ScopedFakeTestPartResultReporter() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); if (intercept_mode_ == INTERCEPT_ALL_THREADS) { impl->SetGlobalTestPartResultReporter(old_reporter_); } else { impl->SetTestPartResultReporterForCurrentThread(old_reporter_); } } // Increments the test part result count and remembers the result. // This method is from the TestPartResultReporterInterface interface. void ScopedFakeTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { result_->Append(result); } namespace internal { // Returns the type ID of ::testing::Test. We should always call this // instead of GetTypeId< ::testing::Test>() to get the type ID of // testing::Test. This is to work around a suspected linker bug when // using Google Test as a framework on Mac OS X. The bug causes // GetTypeId< ::testing::Test>() to return different values depending // on whether the call is from the Google Test framework itself or // from user test code. GetTestTypeId() is guaranteed to always // return the same value, as it always calls GetTypeId<>() from the // gtest.cc, which is within the Google Test framework. TypeId GetTestTypeId() { return GetTypeId(); } // The value of GetTestTypeId() as seen from within the Google Test // library. This is solely for testing GetTestTypeId(). extern const TypeId kTestTypeIdInGoogleTest = GetTestTypeId(); // This predicate-formatter checks that 'results' contains a test part // failure of the given type and that the failure message contains the // given substring. AssertionResult HasOneFailure(const char* /* results_expr */, const char* /* type_expr */, const char* /* substr_expr */, const TestPartResultArray& results, TestPartResult::Type type, const string& substr) { const std::string expected(type == TestPartResult::kFatalFailure ? "1 fatal failure" : "1 non-fatal failure"); Message msg; if (results.size() != 1) { msg << "Expected: " << expected << "\n" << " Actual: " << results.size() << " failures"; for (int i = 0; i < results.size(); i++) { msg << "\n" << results.GetTestPartResult(i); } return AssertionFailure() << msg; } const TestPartResult& r = results.GetTestPartResult(0); if (r.type() != type) { return AssertionFailure() << "Expected: " << expected << "\n" << " Actual:\n" << r; } if (strstr(r.message(), substr.c_str()) == NULL) { return AssertionFailure() << "Expected: " << expected << " containing \"" << substr << "\"\n" << " Actual:\n" << r; } return AssertionSuccess(); } // The constructor of SingleFailureChecker remembers where to look up // test part results, what type of failure we expect, and what // substring the failure message should contain. 
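// ---------------------------------------------------------------------------
// Illustrative sketch, not part of Google Test's own sources:
// ScopedFakeTestPartResultReporter and the HasOneFailure/SingleFailureChecker
// machinery above back the public gtest-spi.h macros, which let a test assert
// that a piece of code produces exactly one (intercepted) failure. The helper
// and test names below are hypothetical.
#if 0  // sketch only -- not compiled
#include <gtest/gtest.h>
#include <gtest/gtest-spi.h>

// A custom check that should raise a non-fatal failure for negative input.
static void ExpectNonNegative(int value) {
  EXPECT_GE(value, 0) << "negative input";
}

TEST(SelfTest, FlagsNegativeValues) {
  // Passes iff the statement produces exactly one non-fatal failure whose
  // message contains the given substring; the failure itself is intercepted.
  EXPECT_NONFATAL_FAILURE(ExpectNonNegative(-1), "negative input");
}
#endif
// ---------------------------------------------------------------------------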
SingleFailureChecker:: SingleFailureChecker( const TestPartResultArray* results, TestPartResult::Type type, const string& substr) : results_(results), type_(type), substr_(substr) {} // The destructor of SingleFailureChecker verifies that the given // TestPartResultArray contains exactly one failure that has the given // type and contains the given substring. If that's not the case, a // non-fatal failure will be generated. SingleFailureChecker::~SingleFailureChecker() { EXPECT_PRED_FORMAT3(HasOneFailure, *results_, type_, substr_); } DefaultGlobalTestPartResultReporter::DefaultGlobalTestPartResultReporter( UnitTestImpl* unit_test) : unit_test_(unit_test) {} void DefaultGlobalTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { unit_test_->current_test_result()->AddTestPartResult(result); unit_test_->listeners()->repeater()->OnTestPartResult(result); } DefaultPerThreadTestPartResultReporter::DefaultPerThreadTestPartResultReporter( UnitTestImpl* unit_test) : unit_test_(unit_test) {} void DefaultPerThreadTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { unit_test_->GetGlobalTestPartResultReporter()->ReportTestPartResult(result); } // Returns the global test part result reporter. TestPartResultReporterInterface* UnitTestImpl::GetGlobalTestPartResultReporter() { internal::MutexLock lock(&global_test_part_result_reporter_mutex_); return global_test_part_result_repoter_; } // Sets the global test part result reporter. void UnitTestImpl::SetGlobalTestPartResultReporter( TestPartResultReporterInterface* reporter) { internal::MutexLock lock(&global_test_part_result_reporter_mutex_); global_test_part_result_repoter_ = reporter; } // Returns the test part result reporter for the current thread. TestPartResultReporterInterface* UnitTestImpl::GetTestPartResultReporterForCurrentThread() { return per_thread_test_part_result_reporter_.get(); } // Sets the test part result reporter for the current thread. void UnitTestImpl::SetTestPartResultReporterForCurrentThread( TestPartResultReporterInterface* reporter) { per_thread_test_part_result_reporter_.set(reporter); } // Gets the number of successful test cases. int UnitTestImpl::successful_test_case_count() const { return CountIf(test_cases_, TestCasePassed); } // Gets the number of failed test cases. int UnitTestImpl::failed_test_case_count() const { return CountIf(test_cases_, TestCaseFailed); } // Gets the number of all test cases. int UnitTestImpl::total_test_case_count() const { return static_cast(test_cases_.size()); } // Gets the number of all test cases that contain at least one test // that should run. int UnitTestImpl::test_case_to_run_count() const { return CountIf(test_cases_, ShouldRunTestCase); } // Gets the number of successful tests. int UnitTestImpl::successful_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::successful_test_count); } // Gets the number of failed tests. int UnitTestImpl::failed_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::failed_test_count); } // Gets the number of disabled tests that will be reported in the XML report. int UnitTestImpl::reportable_disabled_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::reportable_disabled_test_count); } // Gets the number of disabled tests. int UnitTestImpl::disabled_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::disabled_test_count); } // Gets the number of tests to be printed in the XML report. 
int UnitTestImpl::reportable_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::reportable_test_count); } // Gets the number of all tests. int UnitTestImpl::total_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::total_test_count); } // Gets the number of tests that should run. int UnitTestImpl::test_to_run_count() const { return SumOverTestCaseList(test_cases_, &TestCase::test_to_run_count); } // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // CurrentOsStackTraceExceptTop(1), Foo() will be included in the // trace but Bar() and CurrentOsStackTraceExceptTop() won't. std::string UnitTestImpl::CurrentOsStackTraceExceptTop(int skip_count) { (void)skip_count; return ""; } // Returns the current time in milliseconds. TimeInMillis GetTimeInMillis() { #if GTEST_OS_WINDOWS_MOBILE || defined(__BORLANDC__) // Difference between 1970-01-01 and 1601-01-01 in milliseconds. // http://analogous.blogspot.com/2005/04/epoch.html const TimeInMillis kJavaEpochToWinFileTimeDelta = static_cast(116444736UL) * 100000UL; const DWORD kTenthMicrosInMilliSecond = 10000; SYSTEMTIME now_systime; FILETIME now_filetime; ULARGE_INTEGER now_int64; // TODO(kenton@google.com): Shouldn't this just use // GetSystemTimeAsFileTime()? GetSystemTime(&now_systime); if (SystemTimeToFileTime(&now_systime, &now_filetime)) { now_int64.LowPart = now_filetime.dwLowDateTime; now_int64.HighPart = now_filetime.dwHighDateTime; now_int64.QuadPart = (now_int64.QuadPart / kTenthMicrosInMilliSecond) - kJavaEpochToWinFileTimeDelta; return now_int64.QuadPart; } return 0; #elif GTEST_OS_WINDOWS && !GTEST_HAS_GETTIMEOFDAY_ __timeb64 now; # ifdef _MSC_VER // MSVC 8 deprecates _ftime64(), so we want to suppress warning 4996 // (deprecated function) there. // TODO(kenton@google.com): Use GetTickCount()? Or use // SystemTimeToFileTime() # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4996) // Temporarily disables warning 4996. _ftime64(&now); # pragma warning(pop) // Restores the warning state. # else _ftime64(&now); # endif // _MSC_VER return static_cast(now.time) * 1000 + now.millitm; #elif GTEST_HAS_GETTIMEOFDAY_ struct timeval now; gettimeofday(&now, NULL); return static_cast(now.tv_sec) * 1000 + now.tv_usec / 1000; #else # error "Don't know how to get the current time on your system." #endif } // Utilities // class String. #if GTEST_OS_WINDOWS_MOBILE // Creates a UTF-16 wide string from the given ANSI string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the wide string, or NULL if the // input is NULL. LPCWSTR String::AnsiToUtf16(const char* ansi) { if (!ansi) return NULL; const int length = strlen(ansi); const int unicode_length = MultiByteToWideChar(CP_ACP, 0, ansi, length, NULL, 0); WCHAR* unicode = new WCHAR[unicode_length + 1]; MultiByteToWideChar(CP_ACP, 0, ansi, length, unicode, unicode_length); unicode[unicode_length] = 0; return unicode; } // Creates an ANSI string from the given wide string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the ANSI string, or NULL if the // input is NULL. 
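// ---------------------------------------------------------------------------
// Illustrative check, not part of Google Test: kJavaEpochToWinFileTimeDelta in
// GetTimeInMillis() above is 116444736 * 100000 ms = 11,644,473,600,000 ms,
// i.e. 11,644,473,600 s -- the offset between the FILETIME epoch (1601-01-01)
// and the UNIX epoch (1970-01-01).
#if 0  // sketch only -- not compiled
#include <cstdio>

int main() {
  // 369 years from 1601-01-01 to 1970-01-01, 89 of them with a leap day
  // (1700, 1800 and 1900 are not leap years).
  const long long days = 369LL * 365 + 89;              // 134774
  const long long millis = days * 86400 * 1000;         // 11644473600000
  std::printf("%d\n", millis == 116444736LL * 100000);  // prints 1
  return 0;
}
#endif
// ---------------------------------------------------------------------------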
const char* String::Utf16ToAnsi(LPCWSTR utf16_str) { if (!utf16_str) return NULL; const int ansi_length = WideCharToMultiByte(CP_ACP, 0, utf16_str, -1, NULL, 0, NULL, NULL); char* ansi = new char[ansi_length + 1]; WideCharToMultiByte(CP_ACP, 0, utf16_str, -1, ansi, ansi_length, NULL, NULL); ansi[ansi_length] = 0; return ansi; } #endif // GTEST_OS_WINDOWS_MOBILE // Compares two C strings. Returns true iff they have the same content. // // Unlike strcmp(), this function can handle NULL argument(s). A NULL // C string is considered different to any non-NULL C string, // including the empty string. bool String::CStringEquals(const char * lhs, const char * rhs) { if ( lhs == NULL ) return rhs == NULL; if ( rhs == NULL ) return false; return strcmp(lhs, rhs) == 0; } #if GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING // Converts an array of wide chars to a narrow string using the UTF-8 // encoding, and streams the result to the given Message object. static void StreamWideCharsToMessage(const wchar_t* wstr, size_t length, Message* msg) { for (size_t i = 0; i != length; ) { // NOLINT if (wstr[i] != L'\0') { *msg << WideStringToUtf8(wstr + i, static_cast(length - i)); while (i != length && wstr[i] != L'\0') i++; } else { *msg << '\0'; i++; } } } #endif // GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING } // namespace internal // Constructs an empty Message. // We allocate the stringstream separately because otherwise each use of // ASSERT/EXPECT in a procedure adds over 200 bytes to the procedure's // stack frame leading to huge stack frames in some cases; gcc does not reuse // the stack space. Message::Message() : ss_(new ::std::stringstream) { // By default, we want there to be enough precision when printing // a double to a Message. *ss_ << std::setprecision(std::numeric_limits::digits10 + 2); } // These two overloads allow streaming a wide C string to a Message // using the UTF-8 encoding. Message& Message::operator <<(const wchar_t* wide_c_str) { return *this << internal::String::ShowWideCString(wide_c_str); } Message& Message::operator <<(wchar_t* wide_c_str) { return *this << internal::String::ShowWideCString(wide_c_str); } #if GTEST_HAS_STD_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& Message::operator <<(const ::std::wstring& wstr) { internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this); return *this; } #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_GLOBAL_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& Message::operator <<(const ::wstring& wstr) { internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this); return *this; } #endif // GTEST_HAS_GLOBAL_WSTRING // Gets the text streamed to this object so far as an std::string. // Each '\0' character in the buffer is replaced with "\\0". std::string Message::GetString() const { return internal::StringStreamToString(ss_.get()); } // AssertionResult constructors. // Used in EXPECT_TRUE/FALSE(assertion_result). AssertionResult::AssertionResult(const AssertionResult& other) : success_(other.success_), message_(other.message_.get() != NULL ? new ::std::string(*other.message_) : static_cast< ::std::string*>(NULL)) { } // Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE. 
AssertionResult AssertionResult::operator!() const { AssertionResult negation(!success_); if (message_.get() != NULL) negation << *message_; return negation; } // Makes a successful assertion result. AssertionResult AssertionSuccess() { return AssertionResult(true); } // Makes a failed assertion result. AssertionResult AssertionFailure() { return AssertionResult(false); } // Makes a failed assertion result with the given failure message. // Deprecated; use AssertionFailure() << message. AssertionResult AssertionFailure(const Message& message) { return AssertionFailure() << message; } namespace internal { // Constructs and returns the message for an equality assertion // (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure. // // The first four parameters are the expressions used in the assertion // and their values, as strings. For example, for ASSERT_EQ(foo, bar) // where foo is 5 and bar is 6, we have: // // expected_expression: "foo" // actual_expression: "bar" // expected_value: "5" // actual_value: "6" // // The ignoring_case parameter is true iff the assertion is a // *_STRCASEEQ*. When it's true, the string " (ignoring case)" will // be inserted into the message. AssertionResult EqFailure(const char* expected_expression, const char* actual_expression, const std::string& expected_value, const std::string& actual_value, bool ignoring_case) { Message msg; msg << "Value of: " << actual_expression; if (actual_value != actual_expression) { msg << "\n Actual: " << actual_value; } msg << "\nExpected: " << expected_expression; if (ignoring_case) { msg << " (ignoring case)"; } if (expected_value != expected_expression) { msg << "\nWhich is: " << expected_value; } return AssertionFailure() << msg; } // Constructs a failure message for Boolean assertions such as EXPECT_TRUE. std::string GetBoolAssertionFailureMessage( const AssertionResult& assertion_result, const char* expression_text, const char* actual_predicate_value, const char* expected_predicate_value) { const char* actual_message = assertion_result.message(); Message msg; msg << "Value of: " << expression_text << "\n Actual: " << actual_predicate_value; if (actual_message[0] != '\0') msg << " (" << actual_message << ")"; msg << "\nExpected: " << expected_predicate_value; return msg.GetString(); } // Helper function for implementing ASSERT_NEAR. AssertionResult DoubleNearPredFormat(const char* expr1, const char* expr2, const char* abs_error_expr, double val1, double val2, double abs_error) { const double diff = fabs(val1 - val2); if (diff <= abs_error) return AssertionSuccess(); // TODO(wan): do not print the value of an expression if it's // already a literal. return AssertionFailure() << "The difference between " << expr1 << " and " << expr2 << " is " << diff << ", which exceeds " << abs_error_expr << ", where\n" << expr1 << " evaluates to " << val1 << ",\n" << expr2 << " evaluates to " << val2 << ", and\n" << abs_error_expr << " evaluates to " << abs_error << "."; } // Helper template for implementing FloatLE() and DoubleLE(). template AssertionResult FloatingPointLE(const char* expr1, const char* expr2, RawType val1, RawType val2) { // Returns success if val1 is less than val2, if (val1 < val2) { return AssertionSuccess(); } // or if val1 is almost equal to val2. const FloatingPoint lhs(val1), rhs(val2); if (lhs.AlmostEquals(rhs)) { return AssertionSuccess(); } // Note that the above two checks will both fail if either val1 or // val2 is NaN, as the IEEE floating-point standard requires that // any predicate involving a NaN must return false. 
  ::std::stringstream val1_ss;
  val1_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)
          << val1;

  ::std::stringstream val2_ss;
  val2_ss << std::setprecision(std::numeric_limits<RawType>::digits10 + 2)
          << val2;

  return AssertionFailure()
      << "Expected: (" << expr1 << ") <= (" << expr2 << ")\n"
      << " Actual: " << StringStreamToString(&val1_ss) << " vs "
      << StringStreamToString(&val2_ss);
}

}  // namespace internal

// Asserts that val1 is less than, or almost equal to, val2.  Fails
// otherwise.  In particular, it fails if either val1 or val2 is NaN.
AssertionResult FloatLE(const char* expr1, const char* expr2,
                        float val1, float val2) {
  return internal::FloatingPointLE<float>(expr1, expr2, val1, val2);
}

// Asserts that val1 is less than, or almost equal to, val2.  Fails
// otherwise.  In particular, it fails if either val1 or val2 is NaN.
AssertionResult DoubleLE(const char* expr1, const char* expr2,
                         double val1, double val2) {
  return internal::FloatingPointLE<double>(expr1, expr2, val1, val2);
}

namespace internal {

// The helper function for {ASSERT|EXPECT}_EQ with int or enum
// arguments.
AssertionResult CmpHelperEQ(const char* expected_expression,
                            const char* actual_expression,
                            BiggestInt expected,
                            BiggestInt actual) {
  if (expected == actual) {
    return AssertionSuccess();
  }

  return EqFailure(expected_expression,
                   actual_expression,
                   FormatForComparisonFailureMessage(expected, actual),
                   FormatForComparisonFailureMessage(actual, expected),
                   false);
}

// A macro for implementing the helper functions needed to implement
// ASSERT_?? and EXPECT_?? with integer or enum arguments.  It is here
// just to avoid copy-and-paste of similar code.
#define GTEST_IMPL_CMP_HELPER_(op_name, op)\
AssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \
                                   BiggestInt val1, BiggestInt val2) {\
  if (val1 op val2) {\
    return AssertionSuccess();\
  } else {\
    return AssertionFailure() \
        << "Expected: (" << expr1 << ") " #op " (" << expr2\
        << "), actual: " << FormatForComparisonFailureMessage(val1, val2)\
        << " vs " << FormatForComparisonFailureMessage(val2, val1);\
  }\
}

// Implements the helper function for {ASSERT|EXPECT}_NE with int or
// enum arguments.
GTEST_IMPL_CMP_HELPER_(NE, !=)
// Implements the helper function for {ASSERT|EXPECT}_LE with int or
// enum arguments.
GTEST_IMPL_CMP_HELPER_(LE, <=)
// Implements the helper function for {ASSERT|EXPECT}_LT with int or
// enum arguments.
GTEST_IMPL_CMP_HELPER_(LT, < )
// Implements the helper function for {ASSERT|EXPECT}_GE with int or
// enum arguments.
GTEST_IMPL_CMP_HELPER_(GE, >=)
// Implements the helper function for {ASSERT|EXPECT}_GT with int or
// enum arguments.
GTEST_IMPL_CMP_HELPER_(GT, > )

#undef GTEST_IMPL_CMP_HELPER_

// The helper function for {ASSERT|EXPECT}_STREQ.
AssertionResult CmpHelperSTREQ(const char* expected_expression,
                               const char* actual_expression,
                               const char* expected,
                               const char* actual) {
  if (String::CStringEquals(expected, actual)) {
    return AssertionSuccess();
  }

  return EqFailure(expected_expression,
                   actual_expression,
                   PrintToString(expected),
                   PrintToString(actual),
                   false);
}

// The helper function for {ASSERT|EXPECT}_STRCASEEQ.
AssertionResult CmpHelperSTRCASEEQ(const char* expected_expression,
                                   const char* actual_expression,
                                   const char* expected,
                                   const char* actual) {
  if (String::CaseInsensitiveCStringEquals(expected, actual)) {
    return AssertionSuccess();
  }

  return EqFailure(expected_expression,
                   actual_expression,
                   PrintToString(expected),
                   PrintToString(actual),
                   true);
}

// The helper function for {ASSERT|EXPECT}_STRNE.
AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2) { if (!String::CStringEquals(s1, s2)) { return AssertionSuccess(); } else { return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << "), actual: \"" << s1 << "\" vs \"" << s2 << "\""; } } // The helper function for {ASSERT|EXPECT}_STRCASENE. AssertionResult CmpHelperSTRCASENE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2) { if (!String::CaseInsensitiveCStringEquals(s1, s2)) { return AssertionSuccess(); } else { return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << ") (ignoring case), actual: \"" << s1 << "\" vs \"" << s2 << "\""; } } } // namespace internal namespace { // Helper functions for implementing IsSubString() and IsNotSubstring(). // This group of overloaded functions return true iff needle is a // substring of haystack. NULL is considered a substring of itself // only. bool IsSubstringPred(const char* needle, const char* haystack) { if (needle == NULL || haystack == NULL) return needle == haystack; return strstr(haystack, needle) != NULL; } bool IsSubstringPred(const wchar_t* needle, const wchar_t* haystack) { if (needle == NULL || haystack == NULL) return needle == haystack; return wcsstr(haystack, needle) != NULL; } // StringType here can be either ::std::string or ::std::wstring. template bool IsSubstringPred(const StringType& needle, const StringType& haystack) { return haystack.find(needle) != StringType::npos; } // This function implements either IsSubstring() or IsNotSubstring(), // depending on the value of the expected_to_be_substring parameter. // StringType here can be const char*, const wchar_t*, ::std::string, // or ::std::wstring. template AssertionResult IsSubstringImpl( bool expected_to_be_substring, const char* needle_expr, const char* haystack_expr, const StringType& needle, const StringType& haystack) { if (IsSubstringPred(needle, haystack) == expected_to_be_substring) return AssertionSuccess(); const bool is_wide_string = sizeof(needle[0]) > 1; const char* const begin_string_quote = is_wide_string ? "L\"" : "\""; return AssertionFailure() << "Value of: " << needle_expr << "\n" << " Actual: " << begin_string_quote << needle << "\"\n" << "Expected: " << (expected_to_be_substring ? "" : "not ") << "a substring of " << haystack_expr << "\n" << "Which is: " << begin_string_quote << haystack << "\""; } } // namespace // IsSubstring() and IsNotSubstring() check whether needle is a // substring of haystack (NULL is considered a substring of itself // only), and return an appropriate error message when they fail. 
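// Illustrative usage sketch, not part of the upstream sources: the
// IsSubstring()/IsNotSubstring() overloads below are meant to be plugged
// into the predicate-format assertion macros, e.g.
//
//   EXPECT_PRED_FORMAT2(::testing::IsSubstring, "oo", "foobar");
//   EXPECT_PRED_FORMAT2(::testing::IsNotSubstring, "baz", "foobar");
//
// On failure, the AssertionResult built by IsSubstringImpl() above carries
// the formatted "Value of / Actual / Expected" message.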
AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } #if GTEST_HAS_STD_WSTRING AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } #endif // GTEST_HAS_STD_WSTRING namespace internal { #if GTEST_OS_WINDOWS namespace { // Helper function for IsHRESULT{SuccessFailure} predicates AssertionResult HRESULTFailureHelper(const char* expr, const char* expected, long hr) { // NOLINT # if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't support FormatMessage. const char error_text[] = ""; # else // Looks up the human-readable system message for the HRESULT code // and since we're not passing any params to FormatMessage, we don't // want inserts expanded. const DWORD kFlags = FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS; const DWORD kBufSize = 4096; // Gets the system's human readable message string for this HRESULT. 
char error_text[kBufSize] = { '\0' }; DWORD message_length = ::FormatMessageA(kFlags, 0, // no source, we're asking system hr, // the error 0, // no line width restrictions error_text, // output buffer kBufSize, // buf size NULL); // no arguments for inserts // Trims tailing white space (FormatMessage leaves a trailing CR-LF) for (; message_length && IsSpace(error_text[message_length - 1]); --message_length) { error_text[message_length - 1] = '\0'; } # endif // GTEST_OS_WINDOWS_MOBILE const std::string error_hex("0x" + String::FormatHexInt(hr)); return ::testing::AssertionFailure() << "Expected: " << expr << " " << expected << ".\n" << " Actual: " << error_hex << " " << error_text << "\n"; } } // namespace AssertionResult IsHRESULTSuccess(const char* expr, long hr) { // NOLINT if (SUCCEEDED(hr)) { return AssertionSuccess(); } return HRESULTFailureHelper(expr, "succeeds", hr); } AssertionResult IsHRESULTFailure(const char* expr, long hr) { // NOLINT if (FAILED(hr)) { return AssertionSuccess(); } return HRESULTFailureHelper(expr, "fails", hr); } #endif // GTEST_OS_WINDOWS // Utility functions for encoding Unicode text (wide strings) in // UTF-8. // A Unicode code-point can have upto 21 bits, and is encoded in UTF-8 // like this: // // Code-point length Encoding // 0 - 7 bits 0xxxxxxx // 8 - 11 bits 110xxxxx 10xxxxxx // 12 - 16 bits 1110xxxx 10xxxxxx 10xxxxxx // 17 - 21 bits 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx // The maximum code-point a one-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint1 = (static_cast(1) << 7) - 1; // The maximum code-point a two-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint2 = (static_cast(1) << (5 + 6)) - 1; // The maximum code-point a three-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint3 = (static_cast(1) << (4 + 2*6)) - 1; // The maximum code-point a four-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint4 = (static_cast(1) << (3 + 3*6)) - 1; // Chops off the n lowest bits from a bit pattern. Returns the n // lowest bits. As a side effect, the original bit pattern will be // shifted to the right by n bits. inline UInt32 ChopLowBits(UInt32* bits, int n) { const UInt32 low_bits = *bits & ((static_cast(1) << n) - 1); *bits >>= n; return low_bits; } // Converts a Unicode code point to a narrow string in UTF-8 encoding. // code_point parameter is of type UInt32 because wchar_t may not be // wide enough to contain a code point. // If the code_point is not a valid Unicode code point // (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted // to "(Invalid Unicode 0xXXXXXXXX)". std::string CodePointToUtf8(UInt32 code_point) { if (code_point > kMaxCodePoint4) { return "(Invalid Unicode 0x" + String::FormatHexInt(code_point) + ")"; } char str[5]; // Big enough for the largest valid code point. 
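// Worked example, added for illustration (not in the upstream sources):
// U+00E9 ('é') is <= kMaxCodePoint2, so the two-byte branch below runs.
// ChopLowBits(&code_point, 6) returns 0x29 and leaves code_point == 0x03,
// giving str[1] = 0x80 | 0x29 = 0xA9 and str[0] = 0xC0 | 0x03 = 0xC3,
// i.e. the UTF-8 sequence 0xC3 0xA9.  A four-byte code point such as
// U+1F600 is assembled the same way and comes out as 0xF0 0x9F 0x98 0x80.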
  if (code_point <= kMaxCodePoint1) {
    str[1] = '\0';
    str[0] = static_cast<char>(code_point);                          // 0xxxxxxx
  } else if (code_point <= kMaxCodePoint2) {
    str[2] = '\0';
    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[0] = static_cast<char>(0xC0 | code_point);                   // 110xxxxx
  } else if (code_point <= kMaxCodePoint3) {
    str[3] = '\0';
    str[2] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[0] = static_cast<char>(0xE0 | code_point);                   // 1110xxxx
  } else {  // code_point <= kMaxCodePoint4
    str[4] = '\0';
    str[3] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[2] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[1] = static_cast<char>(0x80 | ChopLowBits(&code_point, 6));  // 10xxxxxx
    str[0] = static_cast<char>(0xF0 | code_point);                   // 11110xxx
  }
  return str;
}

// The following two functions only make sense if the system
// uses UTF-16 for wide string encoding.  All supported systems
// with 16 bit wchar_t (Windows, Cygwin, Symbian OS) do use UTF-16.

// Determines if the arguments constitute UTF-16 surrogate pair
// and thus should be combined into a single Unicode code point
// using CreateCodePointFromUtf16SurrogatePair.
inline bool IsUtf16SurrogatePair(wchar_t first, wchar_t second) {
  return sizeof(wchar_t) == 2 &&
      (first & 0xFC00) == 0xD800 && (second & 0xFC00) == 0xDC00;
}

// Creates a Unicode code point from UTF16 surrogate pair.
inline UInt32 CreateCodePointFromUtf16SurrogatePair(wchar_t first,
                                                    wchar_t second) {
  const UInt32 mask = (1 << 10) - 1;
  return (sizeof(wchar_t) == 2) ?
      (((first & mask) << 10) | (second & mask)) + 0x10000 :
      // This function should not be called when the condition is
      // false, but we provide a sensible default in case it is.
      static_cast<UInt32>(first);
}

// Converts a wide string to a narrow string in UTF-8 encoding.
// The wide string is assumed to have the following encoding:
//   UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS)
//   UTF-32 if sizeof(wchar_t) == 4 (on Linux)
// Parameter str points to a null-terminated wide string.
// Parameter num_chars may additionally limit the number
// of wchar_t characters processed.  -1 is used when the entire string
// should be processed.
// If the string contains code points that are not valid Unicode code points
// (i.e. outside of Unicode range U+0 to U+10FFFF) they will be output
// as '(Invalid Unicode 0xXXXXXXXX)'.  If the string is in UTF16 encoding
// and contains invalid UTF-16 surrogate pairs, values in those pairs
// will be encoded as individual Unicode characters from the Basic
// Multilingual Plane.
std::string WideStringToUtf8(const wchar_t* str, int num_chars) {
  if (num_chars == -1)
    num_chars = static_cast<int>(wcslen(str));

  ::std::stringstream stream;
  for (int i = 0; i < num_chars; ++i) {
    UInt32 unicode_code_point;

    if (str[i] == L'\0') {
      break;
    } else if (i + 1 < num_chars && IsUtf16SurrogatePair(str[i], str[i + 1])) {
      unicode_code_point = CreateCodePointFromUtf16SurrogatePair(str[i],
                                                                 str[i + 1]);
      i++;
    } else {
      unicode_code_point = static_cast<UInt32>(str[i]);
    }

    stream << CodePointToUtf8(unicode_code_point);
  }
  return StringStreamToString(&stream);
}

// Converts a wide C string to an std::string using the UTF-8 encoding.
// NULL will be converted to "(null)".
std::string String::ShowWideCString(const wchar_t * wide_c_str) {
  if (wide_c_str == NULL) return "(null)";

  return internal::WideStringToUtf8(wide_c_str, -1);
}

// Compares two wide C strings.  Returns true iff they have the same
// content.
// // Unlike wcscmp(), this function can handle NULL argument(s). A NULL // C string is considered different to any non-NULL C string, // including the empty string. bool String::WideCStringEquals(const wchar_t * lhs, const wchar_t * rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; return wcscmp(lhs, rhs) == 0; } // Helper function for *_STREQ on wide strings. AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const wchar_t* expected, const wchar_t* actual) { if (String::WideCStringEquals(expected, actual)) { return AssertionSuccess(); } return EqFailure(expected_expression, actual_expression, PrintToString(expected), PrintToString(actual), false); } // Helper function for *_STRNE on wide strings. AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const wchar_t* s1, const wchar_t* s2) { if (!String::WideCStringEquals(s1, s2)) { return AssertionSuccess(); } return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << "), actual: " << PrintToString(s1) << " vs " << PrintToString(s2); } // Compares two C strings, ignoring case. Returns true iff they have // the same content. // // Unlike strcasecmp(), this function can handle NULL argument(s). A // NULL C string is considered different to any non-NULL C string, // including the empty string. bool String::CaseInsensitiveCStringEquals(const char * lhs, const char * rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; return posix::StrCaseCmp(lhs, rhs) == 0; } // Compares two wide C strings, ignoring case. Returns true iff they // have the same content. // // Unlike wcscasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL wide C string, // including the empty string. // NB: The implementations on different platforms slightly differ. // On windows, this method uses _wcsicmp which compares according to LC_CTYPE // environment variable. On GNU platform this method uses wcscasecmp // which compares according to LC_CTYPE category of the current locale. // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the // current locale. bool String::CaseInsensitiveWideCStringEquals(const wchar_t* lhs, const wchar_t* rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; #if GTEST_OS_WINDOWS return _wcsicmp(lhs, rhs) == 0; #elif GTEST_OS_LINUX && !GTEST_OS_LINUX_ANDROID return wcscasecmp(lhs, rhs) == 0; #else // Android, Mac OS X and Cygwin don't define wcscasecmp. // Other unknown OSes may not define it either. wint_t left, right; do { left = towlower(*lhs++); right = towlower(*rhs++); } while (left && left == right); return left == right; #endif // OS selector } // Returns true iff str ends with the given suffix, ignoring case. // Any string is considered to end with an empty suffix. bool String::EndsWithCaseInsensitive( const std::string& str, const std::string& suffix) { const size_t str_len = str.length(); const size_t suffix_len = suffix.length(); return (str_len >= suffix_len) && CaseInsensitiveCStringEquals(str.c_str() + str_len - suffix_len, suffix.c_str()); } // Formats an int value as "%02d". std::string String::FormatIntWidth2(int value) { std::stringstream ss; ss << std::setfill('0') << std::setw(2) << value; return ss.str(); } // Formats an int value as "%X". 
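// Expected outputs, added for illustration (not in the upstream sources),
// based on the formatters defined just above and below:
//
//   String::FormatIntWidth2(7)  -> "07"
//   String::FormatHexInt(255)   -> "FF"
//   String::FormatByte(0x0a)    -> "0A"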
std::string String::FormatHexInt(int value) { std::stringstream ss; ss << std::hex << std::uppercase << value; return ss.str(); } // Formats a byte as "%02X". std::string String::FormatByte(unsigned char value) { std::stringstream ss; ss << std::setfill('0') << std::setw(2) << std::hex << std::uppercase << static_cast(value); return ss.str(); } // Converts the buffer in a stringstream to an std::string, converting NUL // bytes to "\\0" along the way. std::string StringStreamToString(::std::stringstream* ss) { const ::std::string& str = ss->str(); const char* const start = str.c_str(); const char* const end = start + str.length(); std::string result; result.reserve(2 * (end - start)); for (const char* ch = start; ch != end; ++ch) { if (*ch == '\0') { result += "\\0"; // Replaces NUL with "\\0"; } else { result += *ch; } } return result; } // Appends the user-supplied message to the Google-Test-generated message. std::string AppendUserMessage(const std::string& gtest_msg, const Message& user_msg) { // Appends the user message if it's non-empty. const std::string user_msg_string = user_msg.GetString(); if (user_msg_string.empty()) { return gtest_msg; } return gtest_msg + "\n" + user_msg_string; } } // namespace internal // class TestResult // Creates an empty TestResult. TestResult::TestResult() : death_test_count_(0), elapsed_time_(0) { } // D'tor. TestResult::~TestResult() { } // Returns the i-th test part result among all the results. i can // range from 0 to total_part_count() - 1. If i is not in that range, // aborts the program. const TestPartResult& TestResult::GetTestPartResult(int i) const { if (i < 0 || i >= total_part_count()) internal::posix::Abort(); return test_part_results_.at(i); } // Returns the i-th test property. i can range from 0 to // test_property_count() - 1. If i is not in that range, aborts the // program. const TestProperty& TestResult::GetTestProperty(int i) const { if (i < 0 || i >= test_property_count()) internal::posix::Abort(); return test_properties_.at(i); } // Clears the test part results. void TestResult::ClearTestPartResults() { test_part_results_.clear(); } // Adds a test part result to the list. void TestResult::AddTestPartResult(const TestPartResult& test_part_result) { test_part_results_.push_back(test_part_result); } // Adds a test property to the list. If a property with the same key as the // supplied property is already represented, the value of this test_property // replaces the old value for that key. void TestResult::RecordProperty(const std::string& xml_element, const TestProperty& test_property) { if (!ValidateTestProperty(xml_element, test_property)) { return; } internal::MutexLock lock(&test_properites_mutex_); const std::vector::iterator property_with_matching_key = std::find_if(test_properties_.begin(), test_properties_.end(), internal::TestPropertyKeyIs(test_property.key())); if (property_with_matching_key == test_properties_.end()) { test_properties_.push_back(test_property); return; } property_with_matching_key->SetValue(test_property.value()); } // The list of reserved attributes used in the element of XML // output. static const char* const kReservedTestSuitesAttributes[] = { "disabled", "errors", "failures", "name", "random_seed", "tests", "time", "timestamp" }; // The list of reserved attributes used in the element of XML // output. static const char* const kReservedTestSuiteAttributes[] = { "disabled", "errors", "failures", "name", "tests", "time" }; // The list of reserved attributes used in the element of XML output. 
static const char* const kReservedTestCaseAttributes[] = { "classname", "name", "status", "time", "type_param", "value_param" }; template std::vector ArrayAsVector(const char* const (&array)[kSize]) { return std::vector(array, array + kSize); } static std::vector GetReservedAttributesForElement( const std::string& xml_element) { if (xml_element == "testsuites") { return ArrayAsVector(kReservedTestSuitesAttributes); } else if (xml_element == "testsuite") { return ArrayAsVector(kReservedTestSuiteAttributes); } else if (xml_element == "testcase") { return ArrayAsVector(kReservedTestCaseAttributes); } else { GTEST_CHECK_(false) << "Unrecognized xml_element provided: " << xml_element; } // This code is unreachable but some compilers may not realizes that. return std::vector(); } static std::string FormatWordList(const std::vector& words) { Message word_list; for (size_t i = 0; i < words.size(); ++i) { if (i > 0 && words.size() > 2) { word_list << ", "; } if (i == words.size() - 1) { word_list << "and "; } word_list << "'" << words[i] << "'"; } return word_list.GetString(); } bool ValidateTestPropertyName(const std::string& property_name, const std::vector& reserved_names) { if (std::find(reserved_names.begin(), reserved_names.end(), property_name) != reserved_names.end()) { ADD_FAILURE() << "Reserved key used in RecordProperty(): " << property_name << " (" << FormatWordList(reserved_names) << " are reserved by " << GTEST_NAME_ << ")"; return false; } return true; } // Adds a failure if the key is a reserved attribute of the element named // xml_element. Returns true if the property is valid. bool TestResult::ValidateTestProperty(const std::string& xml_element, const TestProperty& test_property) { return ValidateTestPropertyName(test_property.key(), GetReservedAttributesForElement(xml_element)); } // Clears the object. void TestResult::Clear() { test_part_results_.clear(); test_properties_.clear(); death_test_count_ = 0; elapsed_time_ = 0; } // Returns true iff the test failed. bool TestResult::Failed() const { for (int i = 0; i < total_part_count(); ++i) { if (GetTestPartResult(i).failed()) return true; } return false; } // Returns true iff the test part fatally failed. static bool TestPartFatallyFailed(const TestPartResult& result) { return result.fatally_failed(); } // Returns true iff the test fatally failed. bool TestResult::HasFatalFailure() const { return CountIf(test_part_results_, TestPartFatallyFailed) > 0; } // Returns true iff the test part non-fatally failed. static bool TestPartNonfatallyFailed(const TestPartResult& result) { return result.nonfatally_failed(); } // Returns true iff the test has a non-fatal failure. bool TestResult::HasNonfatalFailure() const { return CountIf(test_part_results_, TestPartNonfatallyFailed) > 0; } // Gets the number of all test parts. This is the sum of the number // of successful test parts and the number of failed test parts. int TestResult::total_part_count() const { return static_cast(test_part_results_.size()); } // Returns the number of the test properties. int TestResult::test_property_count() const { return static_cast(test_properties_.size()); } // class Test // Creates a Test object. // The c'tor saves the values of all Google Test flags. Test::Test() : gtest_flag_saver_(new internal::GTestFlagSaver) { } // The d'tor restores the values of all Google Test flags. Test::~Test() { delete gtest_flag_saver_; } // Sets up the test fixture. // // A sub-class may override this. void Test::SetUp() { } // Tears down the test fixture. 
// // A sub-class may override this. void Test::TearDown() { } // Allows user supplied key value pairs to be recorded for later output. void Test::RecordProperty(const std::string& key, const std::string& value) { UnitTest::GetInstance()->RecordProperty(key, value); } // Allows user supplied key value pairs to be recorded for later output. void Test::RecordProperty(const std::string& key, int value) { Message value_message; value_message << value; RecordProperty(key, value_message.GetString().c_str()); } namespace internal { void ReportFailureInUnknownLocation(TestPartResult::Type result_type, const std::string& message) { // This function is a friend of UnitTest and as such has access to // AddTestPartResult. UnitTest::GetInstance()->AddTestPartResult( result_type, NULL, // No info about the source file where the exception occurred. -1, // We have no info on which line caused the exception. message, ""); // No stack trace, either. } } // namespace internal // Google Test requires all tests in the same test case to use the same test // fixture class. This function checks if the current test has the // same fixture class as the first test in the current test case. If // yes, it returns true; otherwise it generates a Google Test failure and // returns false. bool Test::HasSameFixtureClass() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); const TestCase* const test_case = impl->current_test_case(); // Info about the first test in the current test case. const TestInfo* const first_test_info = test_case->test_info_list()[0]; const internal::TypeId first_fixture_id = first_test_info->fixture_class_id_; const char* const first_test_name = first_test_info->name(); // Info about the current test. const TestInfo* const this_test_info = impl->current_test_info(); const internal::TypeId this_fixture_id = this_test_info->fixture_class_id_; const char* const this_test_name = this_test_info->name(); if (this_fixture_id != first_fixture_id) { // Is the first test defined using TEST? const bool first_is_TEST = first_fixture_id == internal::GetTestTypeId(); // Is this test defined using TEST? const bool this_is_TEST = this_fixture_id == internal::GetTestTypeId(); if (first_is_TEST || this_is_TEST) { // The user mixed TEST and TEST_F in this test case - we'll tell // him/her how to fix it. // Gets the name of the TEST and the name of the TEST_F. Note // that first_is_TEST and this_is_TEST cannot both be true, as // the fixture IDs are different for the two tests. const char* const TEST_name = first_is_TEST ? first_test_name : this_test_name; const char* const TEST_F_name = first_is_TEST ? this_test_name : first_test_name; ADD_FAILURE() << "All tests in the same test case must use the same test fixture\n" << "class, so mixing TEST_F and TEST in the same test case is\n" << "illegal. In test case " << this_test_info->test_case_name() << ",\n" << "test " << TEST_F_name << " is defined using TEST_F but\n" << "test " << TEST_name << " is defined using TEST. You probably\n" << "want to change the TEST to TEST_F or move it to another test\n" << "case."; } else { // The user defined two fixture classes with the same name in // two namespaces - we'll tell him/her how to fix it. ADD_FAILURE() << "All tests in the same test case must use the same test fixture\n" << "class. However, in test case " << this_test_info->test_case_name() << ",\n" << "you defined test " << first_test_name << " and test " << this_test_name << "\n" << "using two different test fixture classes. 
This can happen if\n" << "the two classes are from different namespaces or translation\n" << "units and have the same name. You should probably rename one\n" << "of the classes to put the tests into different test cases."; } return false; } return true; } #if GTEST_HAS_SEH // Adds an "exception thrown" fatal failure to the current test. This // function returns its result via an output parameter pointer because VC++ // prohibits creation of objects with destructors on stack in functions // using __try (see error C2712). static std::string* FormatSehExceptionMessage(DWORD exception_code, const char* location) { Message message; message << "SEH exception with code 0x" << std::setbase(16) << exception_code << std::setbase(10) << " thrown in " << location << "."; return new std::string(message.GetString()); } #endif // GTEST_HAS_SEH namespace internal { #if GTEST_HAS_EXCEPTIONS // Adds an "exception thrown" fatal failure to the current test. static std::string FormatCxxExceptionMessage(const char* description, const char* location) { Message message; if (description != NULL) { message << "C++ exception with description \"" << description << "\""; } else { message << "Unknown C++ exception"; } message << " thrown in " << location << "."; return message.GetString(); } static std::string PrintTestPartResultToString( const TestPartResult& test_part_result); GoogleTestFailureException::GoogleTestFailureException( const TestPartResult& failure) : ::std::runtime_error(PrintTestPartResultToString(failure).c_str()) {} #endif // GTEST_HAS_EXCEPTIONS // We put these helper functions in the internal namespace as IBM's xlC // compiler rejects the code if they were declared static. // Runs the given method and handles SEH exceptions it throws, when // SEH is supported; returns the 0-value for type Result in case of an // SEH exception. (Microsoft compilers cannot handle SEH and C++ // exceptions in the same function. Therefore, we provide a separate // wrapper function for handling SEH exceptions.) template Result HandleSehExceptionsInMethodIfSupported( T* object, Result (T::*method)(), const char* location) { #if GTEST_HAS_SEH __try { return (object->*method)(); } __except (internal::UnitTestOptions::GTestShouldProcessSEH( // NOLINT GetExceptionCode())) { // We create the exception message on the heap because VC++ prohibits // creation of objects with destructors on stack in functions using __try // (see error C2712). std::string* exception_message = FormatSehExceptionMessage( GetExceptionCode(), location); internal::ReportFailureInUnknownLocation(TestPartResult::kFatalFailure, *exception_message); delete exception_message; return static_cast(0); } #else (void)location; return (object->*method)(); #endif // GTEST_HAS_SEH } // Runs the given method and catches and reports C++ and/or SEH-style // exceptions, if they are supported; returns the 0-value for type // Result in case of an SEH exception. template Result HandleExceptionsInMethodIfSupported( T* object, Result (T::*method)(), const char* location) { // NOTE: The user code can affect the way in which Google Test handles // exceptions by setting GTEST_FLAG(catch_exceptions), but only before // RUN_ALL_TESTS() starts. It is technically possible to check the flag // after the exception is caught and either report or re-throw the // exception based on the flag's value: // // try { // // Perform the test method. // } catch (...) { // if (GTEST_FLAG(catch_exceptions)) // // Report the exception as failure. 
// else // throw; // Re-throws the original exception. // } // // However, the purpose of this flag is to allow the program to drop into // the debugger when the exception is thrown. On most platforms, once the // control enters the catch block, the exception origin information is // lost and the debugger will stop the program at the point of the // re-throw in this function -- instead of at the point of the original // throw statement in the code under test. For this reason, we perform // the check early, sacrificing the ability to affect Google Test's // exception handling in the method where the exception is thrown. if (internal::GetUnitTestImpl()->catch_exceptions()) { #if GTEST_HAS_EXCEPTIONS try { return HandleSehExceptionsInMethodIfSupported(object, method, location); } catch (const internal::GoogleTestFailureException&) { // NOLINT // This exception type can only be thrown by a failed Google // Test assertion with the intention of letting another testing // framework catch it. Therefore we just re-throw it. throw; } catch (const std::exception& e) { // NOLINT internal::ReportFailureInUnknownLocation( TestPartResult::kFatalFailure, FormatCxxExceptionMessage(e.what(), location)); } catch (...) { // NOLINT internal::ReportFailureInUnknownLocation( TestPartResult::kFatalFailure, FormatCxxExceptionMessage(NULL, location)); } return static_cast(0); #else return HandleSehExceptionsInMethodIfSupported(object, method, location); #endif // GTEST_HAS_EXCEPTIONS } else { return (object->*method)(); } } } // namespace internal // Runs the test and updates the test result. void Test::Run() { if (!HasSameFixtureClass()) return; internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported(this, &Test::SetUp, "SetUp()"); // We will run the test only if SetUp() was successful. if (!HasFatalFailure()) { impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &Test::TestBody, "the test body"); } // However, we want to clean up as much as possible. Hence we will // always call TearDown(), even if SetUp() or the test body has // failed. impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &Test::TearDown, "TearDown()"); } // Returns true iff the current test has a fatal failure. bool Test::HasFatalFailure() { return internal::GetUnitTestImpl()->current_test_result()->HasFatalFailure(); } // Returns true iff the current test has a non-fatal failure. bool Test::HasNonfatalFailure() { return internal::GetUnitTestImpl()->current_test_result()-> HasNonfatalFailure(); } // class TestInfo // Constructs a TestInfo object. It assumes ownership of the test factory // object. TestInfo::TestInfo(const std::string& a_test_case_name, const std::string& a_name, const char* a_type_param, const char* a_value_param, internal::TypeId fixture_class_id, internal::TestFactoryBase* factory) : test_case_name_(a_test_case_name), name_(a_name), type_param_(a_type_param ? new std::string(a_type_param) : NULL), value_param_(a_value_param ? new std::string(a_value_param) : NULL), fixture_class_id_(fixture_class_id), should_run_(false), is_disabled_(false), matches_filter_(false), factory_(factory), result_() {} // Destructs a TestInfo object. TestInfo::~TestInfo() { delete factory_; } namespace internal { // Creates a new TestInfo object and registers it with Google Test; // returns the created object. 
// // Arguments: // // test_case_name: name of the test case // name: name of the test // type_param: the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // value_param: text representation of the test's value parameter, // or NULL if this is not a value-parameterized test. // fixture_class_id: ID of the test fixture class // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // factory: pointer to the factory that creates a test object. // The newly created TestInfo instance will assume // ownership of the factory object. TestInfo* MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, TypeId fixture_class_id, SetUpTestCaseFunc set_up_tc, TearDownTestCaseFunc tear_down_tc, TestFactoryBase* factory) { TestInfo* const test_info = new TestInfo(test_case_name, name, type_param, value_param, fixture_class_id, factory); GetUnitTestImpl()->AddTestInfo(set_up_tc, tear_down_tc, test_info); return test_info; } #if GTEST_HAS_PARAM_TEST void ReportInvalidTestCaseType(const char* test_case_name, const char* file, int line) { Message errors; errors << "Attempted redefinition of test case " << test_case_name << ".\n" << "All tests in the same test case must use the same test fixture\n" << "class. However, in test case " << test_case_name << ", you tried\n" << "to define a test using a fixture class different from the one\n" << "used earlier. This can happen if the two fixture classes are\n" << "from different namespaces and have the same name. You should\n" << "probably rename one of the classes to put the tests into different\n" << "test cases."; fprintf(stderr, "%s %s", FormatFileLocation(file, line).c_str(), errors.GetString().c_str()); } #endif // GTEST_HAS_PARAM_TEST } // namespace internal namespace { // A predicate that checks the test name of a TestInfo against a known // value. // // This is used for implementation of the TestCase class only. We put // it in the anonymous namespace to prevent polluting the outer // namespace. // // TestNameIs is copyable. class TestNameIs { public: // Constructor. // // TestNameIs has NO default constructor. explicit TestNameIs(const char* name) : name_(name) {} // Returns true iff the test name of test_info matches name_. bool operator()(const TestInfo * test_info) const { return test_info && test_info->name() == name_; } private: std::string name_; }; } // namespace namespace internal { // This method expands all parameterized tests registered with macros TEST_P // and INSTANTIATE_TEST_CASE_P into regular tests and registers those. // This will be done just once during the program runtime. void UnitTestImpl::RegisterParameterizedTests() { #if GTEST_HAS_PARAM_TEST if (!parameterized_tests_registered_) { parameterized_test_registry_.RegisterTests(); parameterized_tests_registered_ = true; } #endif } } // namespace internal // Creates the test object, runs it, records its result, and then // deletes it. void TestInfo::Run() { if (!should_run_) return; // Tells UnitTest where to store test result. internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->set_current_test_info(this); TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater(); // Notifies the unit test event listeners that a test is about to start. 
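// Descriptive note, added for clarity (not in the upstream sources): from
// this point TestInfo::Run() drives the per-test listener sequence:
// OnTestStart() fires first, any OnTestPartResult() callbacks are delivered
// while the fixture constructor and TestBody() execute, and OnTestEnd()
// fires once the elapsed time has been recorded.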
repeater->OnTestStart(*this); const TimeInMillis start = internal::GetTimeInMillis(); impl->os_stack_trace_getter()->UponLeavingGTest(); // Creates the test object. Test* const test = internal::HandleExceptionsInMethodIfSupported( factory_, &internal::TestFactoryBase::CreateTest, "the test fixture's constructor"); // Runs the test only if the test object was created and its // constructor didn't generate a fatal failure. if ((test != NULL) && !Test::HasFatalFailure()) { // This doesn't throw as all user code that can throw are wrapped into // exception handling code. test->Run(); } // Deletes the test object. impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( test, &Test::DeleteSelf_, "the test fixture's destructor"); result_.set_elapsed_time(internal::GetTimeInMillis() - start); // Notifies the unit test event listener that a test has just finished. repeater->OnTestEnd(*this); // Tells UnitTest to stop associating assertion results to this // test. impl->set_current_test_info(NULL); } // class TestCase // Gets the number of successful tests in this test case. int TestCase::successful_test_count() const { return CountIf(test_info_list_, TestPassed); } // Gets the number of failed tests in this test case. int TestCase::failed_test_count() const { return CountIf(test_info_list_, TestFailed); } // Gets the number of disabled tests that will be reported in the XML report. int TestCase::reportable_disabled_test_count() const { return CountIf(test_info_list_, TestReportableDisabled); } // Gets the number of disabled tests in this test case. int TestCase::disabled_test_count() const { return CountIf(test_info_list_, TestDisabled); } // Gets the number of tests to be printed in the XML report. int TestCase::reportable_test_count() const { return CountIf(test_info_list_, TestReportable); } // Get the number of tests in this test case that should run. int TestCase::test_to_run_count() const { return CountIf(test_info_list_, ShouldRunTest); } // Gets the number of all tests. int TestCase::total_test_count() const { return static_cast(test_info_list_.size()); } // Creates a TestCase with the given name. // // Arguments: // // name: name of the test case // a_type_param: the name of the test case's type parameter, or NULL if // this is not a typed or a type-parameterized test case. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase::TestCase(const char* a_name, const char* a_type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc) : name_(a_name), type_param_(a_type_param ? new std::string(a_type_param) : NULL), set_up_tc_(set_up_tc), tear_down_tc_(tear_down_tc), should_run_(false), elapsed_time_(0) { } // Destructor of TestCase. TestCase::~TestCase() { // Deletes every Test in the collection. ForEach(test_info_list_, internal::Delete); } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. const TestInfo* TestCase::GetTestInfo(int i) const { const int index = GetElementOr(test_indices_, i, -1); return index < 0 ? NULL : test_info_list_[index]; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. TestInfo* TestCase::GetMutableTestInfo(int i) { const int index = GetElementOr(test_indices_, i, -1); return index < 0 ? 
NULL : test_info_list_[index]; } // Adds a test to this test case. Will delete the test upon // destruction of the TestCase object. void TestCase::AddTestInfo(TestInfo * test_info) { test_info_list_.push_back(test_info); test_indices_.push_back(static_cast(test_indices_.size())); } // Runs every test in this TestCase. void TestCase::Run() { if (!should_run_) return; internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->set_current_test_case(this); TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater(); repeater->OnTestCaseStart(*this); impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &TestCase::RunSetUpTestCase, "SetUpTestCase()"); const internal::TimeInMillis start = internal::GetTimeInMillis(); for (int i = 0; i < total_test_count(); i++) { GetMutableTestInfo(i)->Run(); } elapsed_time_ = internal::GetTimeInMillis() - start; impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &TestCase::RunTearDownTestCase, "TearDownTestCase()"); repeater->OnTestCaseEnd(*this); impl->set_current_test_case(NULL); } // Clears the results of all tests in this test case. void TestCase::ClearResult() { ad_hoc_test_result_.Clear(); ForEach(test_info_list_, TestInfo::ClearTestResult); } // Shuffles the tests in this test case. void TestCase::ShuffleTests(internal::Random* random) { Shuffle(random, &test_indices_); } // Restores the test order to before the first shuffle. void TestCase::UnshuffleTests() { for (size_t i = 0; i < test_indices_.size(); i++) { test_indices_[i] = static_cast(i); } } // Formats a countable noun. Depending on its quantity, either the // singular form or the plural form is used. e.g. // // FormatCountableNoun(1, "formula", "formuli") returns "1 formula". // FormatCountableNoun(5, "book", "books") returns "5 books". static std::string FormatCountableNoun(int count, const char * singular_form, const char * plural_form) { return internal::StreamableToString(count) + " " + (count == 1 ? singular_form : plural_form); } // Formats the count of tests. static std::string FormatTestCount(int test_count) { return FormatCountableNoun(test_count, "test", "tests"); } // Formats the count of test cases. static std::string FormatTestCaseCount(int test_case_count) { return FormatCountableNoun(test_case_count, "test case", "test cases"); } // Converts a TestPartResult::Type enum to human-friendly string // representation. Both kNonFatalFailure and kFatalFailure are translated // to "Failure", as the user usually doesn't care about the difference // between the two when viewing the test result. static const char * TestPartResultTypeToString(TestPartResult::Type type) { switch (type) { case TestPartResult::kSuccess: return "Success"; case TestPartResult::kNonFatalFailure: case TestPartResult::kFatalFailure: #ifdef _MSC_VER return "error: "; #else return "Failure\n"; #endif default: return "Unknown result type"; } } namespace internal { // Prints a TestPartResult to an std::string. static std::string PrintTestPartResultToString( const TestPartResult& test_part_result) { return (Message() << internal::FormatFileLocation(test_part_result.file_name(), test_part_result.line_number()) << " " << TestPartResultTypeToString(test_part_result.type()) << test_part_result.message()).GetString(); } // Prints a TestPartResult. 
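// Illustrative sample, added for clarity (not in the upstream sources),
// assuming internal::FormatFileLocation() renders as "<file>:<line>:".
// On non-MSVC builds a failed EXPECT_NE would be printed by the helpers
// below roughly as:
//
//   my_test.cc:42: Failure
//   Expected: (a) != (b), actual: 3 vs 3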
static void PrintTestPartResult(const TestPartResult& test_part_result) { const std::string& result = PrintTestPartResultToString(test_part_result); printf("%s\n", result.c_str()); fflush(stdout); // If the test program runs in Visual Studio or a debugger, the // following statements add the test part result message to the Output // window such that the user can double-click on it to jump to the // corresponding source code location; otherwise they do nothing. #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // We don't call OutputDebugString*() on Windows Mobile, as printing // to stdout is done by OutputDebugString() there already - we don't // want the same message printed twice. ::OutputDebugStringA(result.c_str()); ::OutputDebugStringA("\n"); #endif } // class PrettyUnitTestResultPrinter enum GTestColor { COLOR_DEFAULT, COLOR_RED, COLOR_GREEN, COLOR_YELLOW }; #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // Returns the character attribute for the given color. WORD GetColorAttribute(GTestColor color) { switch (color) { case COLOR_RED: return FOREGROUND_RED; case COLOR_GREEN: return FOREGROUND_GREEN; case COLOR_YELLOW: return FOREGROUND_RED | FOREGROUND_GREEN; default: return 0; } } #else // Returns the ANSI color code for the given color. COLOR_DEFAULT is // an invalid input. const char* GetAnsiColorCode(GTestColor color) { switch (color) { case COLOR_RED: return "1"; case COLOR_GREEN: return "2"; case COLOR_YELLOW: return "3"; default: return NULL; }; } #endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // Returns true iff Google Test should use colors in the output. bool ShouldUseColor(bool stdout_is_tty) { const char* const gtest_color = GTEST_FLAG(color).c_str(); if (String::CaseInsensitiveCStringEquals(gtest_color, "auto")) { #if GTEST_OS_WINDOWS // On Windows the TERM variable is usually not set, but the // console there does support colors. return stdout_is_tty; #else // On non-Windows platforms, we rely on the TERM variable. const char* const term = posix::GetEnv("TERM"); const bool term_supports_color = String::CStringEquals(term, "xterm") || String::CStringEquals(term, "xterm-color") || String::CStringEquals(term, "xterm-256color") || String::CStringEquals(term, "screen") || String::CStringEquals(term, "screen-256color") || String::CStringEquals(term, "linux") || String::CStringEquals(term, "cygwin"); return stdout_is_tty && term_supports_color; #endif // GTEST_OS_WINDOWS } return String::CaseInsensitiveCStringEquals(gtest_color, "yes") || String::CaseInsensitiveCStringEquals(gtest_color, "true") || String::CaseInsensitiveCStringEquals(gtest_color, "t") || String::CStringEquals(gtest_color, "1"); // We take "yes", "true", "t", and "1" as meaning "yes". If the // value is neither one of these nor "auto", we treat it as "no" to // be conservative. } // Helpers for printing colored strings to stdout. Note that on Windows, we // cannot simply emit special characters and have the terminal change colors. // This routine must actually emit the characters rather than return a string // that would be colored when printed, as can be done on Linux. void ColoredPrintf(GTestColor color, const char* fmt, ...) 
{ va_list args; va_start(args, fmt); #if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS || GTEST_OS_IOS const bool use_color = false; #else static const bool in_color_mode = ShouldUseColor(posix::IsATTY(posix::FileNo(stdout)) != 0); const bool use_color = in_color_mode && (color != COLOR_DEFAULT); #endif // GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS // The '!= 0' comparison is necessary to satisfy MSVC 7.1. if (!use_color) { vprintf(fmt, args); va_end(args); return; } #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE); // Gets the current text color. CONSOLE_SCREEN_BUFFER_INFO buffer_info; GetConsoleScreenBufferInfo(stdout_handle, &buffer_info); const WORD old_color_attrs = buffer_info.wAttributes; // We need to flush the stream buffers into the console before each // SetConsoleTextAttribute call lest it affect the text that is already // printed but has not yet reached the console. fflush(stdout); SetConsoleTextAttribute(stdout_handle, GetColorAttribute(color) | FOREGROUND_INTENSITY); vprintf(fmt, args); fflush(stdout); // Restores the text color. SetConsoleTextAttribute(stdout_handle, old_color_attrs); #else printf("\033[0;3%sm", GetAnsiColorCode(color)); vprintf(fmt, args); printf("\033[m"); // Resets the terminal to default. #endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE va_end(args); } // Text printed in Google Test's text output and --gunit_list_tests // output to label the type parameter and value parameter for a test. static const char kTypeParamLabel[] = "TypeParam"; static const char kValueParamLabel[] = "GetParam()"; void PrintFullTestCommentIfPresent(const TestInfo& test_info) { const char* const type_param = test_info.type_param(); const char* const value_param = test_info.value_param(); if (type_param != NULL || value_param != NULL) { printf(", where "); if (type_param != NULL) { printf("%s = %s", kTypeParamLabel, type_param); if (value_param != NULL) printf(" and "); } if (value_param != NULL) { printf("%s = %s", kValueParamLabel, value_param); } } } // This class implements the TestEventListener interface. // // Class PrettyUnitTestResultPrinter is copyable. class PrettyUnitTestResultPrinter : public TestEventListener { public: PrettyUnitTestResultPrinter() {} static void PrintTestName(const char * test_case, const char * test) { printf("%s.%s", test_case, test); } // The following methods override what's in the TestEventListener class. virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration); virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test); virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestCaseStart(const TestCase& test_case); virtual void OnTestStart(const TestInfo& test_info); virtual void OnTestPartResult(const TestPartResult& result); virtual void OnTestEnd(const TestInfo& test_info); virtual void OnTestCaseEnd(const TestCase& test_case); virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test); virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {} private: static void PrintFailedTests(const UnitTest& unit_test); }; // Fired before each iteration of tests starts. 
void PrettyUnitTestResultPrinter::OnTestIterationStart( const UnitTest& unit_test, int iteration) { if (GTEST_FLAG(repeat) != 1) printf("\nRepeating all tests (iteration %d) . . .\n\n", iteration + 1); const char* const filter = GTEST_FLAG(filter).c_str(); // Prints the filter if it's not *. This reminds the user that some // tests may be skipped. if (!String::CStringEquals(filter, kUniversalFilter)) { ColoredPrintf(COLOR_YELLOW, "Note: %s filter = %s\n", GTEST_NAME_, filter); } if (internal::ShouldShard(kTestTotalShards, kTestShardIndex, false)) { const Int32 shard_index = Int32FromEnvOrDie(kTestShardIndex, -1); ColoredPrintf(COLOR_YELLOW, "Note: This is test shard %d of %s.\n", static_cast(shard_index) + 1, internal::posix::GetEnv(kTestTotalShards)); } if (GTEST_FLAG(shuffle)) { ColoredPrintf(COLOR_YELLOW, "Note: Randomizing tests' orders with a seed of %d .\n", unit_test.random_seed()); } ColoredPrintf(COLOR_GREEN, "[==========] "); printf("Running %s from %s.\n", FormatTestCount(unit_test.test_to_run_count()).c_str(), FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str()); fflush(stdout); } void PrettyUnitTestResultPrinter::OnEnvironmentsSetUpStart( const UnitTest& /*unit_test*/) { ColoredPrintf(COLOR_GREEN, "[----------] "); printf("Global test environment set-up.\n"); fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestCaseStart(const TestCase& test_case) { const std::string counts = FormatCountableNoun(test_case.test_to_run_count(), "test", "tests"); ColoredPrintf(COLOR_GREEN, "[----------] "); printf("%s from %s", counts.c_str(), test_case.name()); if (test_case.type_param() == NULL) { printf("\n"); } else { printf(", where %s = %s\n", kTypeParamLabel, test_case.type_param()); } fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestStart(const TestInfo& test_info) { ColoredPrintf(COLOR_GREEN, "[ RUN ] "); PrintTestName(test_info.test_case_name(), test_info.name()); printf("\n"); fflush(stdout); } // Called after an assertion failure. void PrettyUnitTestResultPrinter::OnTestPartResult( const TestPartResult& result) { // If the test part succeeded, we don't need to do anything. if (result.type() == TestPartResult::kSuccess) return; // Print failure message from the assertion (e.g. expected this and got that). PrintTestPartResult(result); fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestEnd(const TestInfo& test_info) { if (test_info.result()->Passed()) { ColoredPrintf(COLOR_GREEN, "[ OK ] "); } else { ColoredPrintf(COLOR_RED, "[ FAILED ] "); } PrintTestName(test_info.test_case_name(), test_info.name()); if (test_info.result()->Failed()) PrintFullTestCommentIfPresent(test_info); if (GTEST_FLAG(print_time)) { printf(" (%s ms)\n", internal::StreamableToString( test_info.result()->elapsed_time()).c_str()); } else { printf("\n"); } fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestCaseEnd(const TestCase& test_case) { if (!GTEST_FLAG(print_time)) return; const std::string counts = FormatCountableNoun(test_case.test_to_run_count(), "test", "tests"); ColoredPrintf(COLOR_GREEN, "[----------] "); printf("%s from %s (%s ms total)\n\n", counts.c_str(), test_case.name(), internal::StreamableToString(test_case.elapsed_time()).c_str()); fflush(stdout); } void PrettyUnitTestResultPrinter::OnEnvironmentsTearDownStart( const UnitTest& /*unit_test*/) { ColoredPrintf(COLOR_GREEN, "[----------] "); printf("Global test environment tear-down\n"); fflush(stdout); } // Internal helper for printing the list of failed tests. 
void PrettyUnitTestResultPrinter::PrintFailedTests(const UnitTest& unit_test) { const int failed_test_count = unit_test.failed_test_count(); if (failed_test_count == 0) { return; } for (int i = 0; i < unit_test.total_test_case_count(); ++i) { const TestCase& test_case = *unit_test.GetTestCase(i); if (!test_case.should_run() || (test_case.failed_test_count() == 0)) { continue; } for (int j = 0; j < test_case.total_test_count(); ++j) { const TestInfo& test_info = *test_case.GetTestInfo(j); if (!test_info.should_run() || test_info.result()->Passed()) { continue; } ColoredPrintf(COLOR_RED, "[ FAILED ] "); printf("%s.%s", test_case.name(), test_info.name()); PrintFullTestCommentIfPresent(test_info); printf("\n"); } } } void PrettyUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test, int /*iteration*/) { ColoredPrintf(COLOR_GREEN, "[==========] "); printf("%s from %s ran.", FormatTestCount(unit_test.test_to_run_count()).c_str(), FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str()); if (GTEST_FLAG(print_time)) { printf(" (%s ms total)", internal::StreamableToString(unit_test.elapsed_time()).c_str()); } printf("\n"); ColoredPrintf(COLOR_GREEN, "[ PASSED ] "); printf("%s.\n", FormatTestCount(unit_test.successful_test_count()).c_str()); int num_failures = unit_test.failed_test_count(); if (!unit_test.Passed()) { const int failed_test_count = unit_test.failed_test_count(); ColoredPrintf(COLOR_RED, "[ FAILED ] "); printf("%s, listed below:\n", FormatTestCount(failed_test_count).c_str()); PrintFailedTests(unit_test); printf("\n%2d FAILED %s\n", num_failures, num_failures == 1 ? "TEST" : "TESTS"); } int num_disabled = unit_test.reportable_disabled_test_count(); if (num_disabled && !GTEST_FLAG(also_run_disabled_tests)) { if (!num_failures) { printf("\n"); // Add a spacer if no FAILURE banner is displayed. } ColoredPrintf(COLOR_YELLOW, " YOU HAVE %d DISABLED %s\n\n", num_disabled, num_disabled == 1 ? "TEST" : "TESTS"); } // Ensure that Google Test output is printed before, e.g., heapchecker output. fflush(stdout); } // End PrettyUnitTestResultPrinter // class TestEventRepeater // // This class forwards events to other event listeners. class TestEventRepeater : public TestEventListener { public: TestEventRepeater() : forwarding_enabled_(true) {} virtual ~TestEventRepeater(); void Append(TestEventListener *listener); TestEventListener* Release(TestEventListener* listener); // Controls whether events will be forwarded to listeners_. Set to false // in death test child processes. bool forwarding_enabled() const { return forwarding_enabled_; } void set_forwarding_enabled(bool enable) { forwarding_enabled_ = enable; } virtual void OnTestProgramStart(const UnitTest& unit_test); virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration); virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test); virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test); virtual void OnTestCaseStart(const TestCase& test_case); virtual void OnTestStart(const TestInfo& test_info); virtual void OnTestPartResult(const TestPartResult& result); virtual void OnTestEnd(const TestInfo& test_info); virtual void OnTestCaseEnd(const TestCase& test_case); virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test); virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test); virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); virtual void OnTestProgramEnd(const UnitTest& unit_test); private: // Controls whether events will be forwarded to listeners_. 
Set to false // in death test child processes. bool forwarding_enabled_; // The list of listeners that receive events. std::vector listeners_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventRepeater); }; TestEventRepeater::~TestEventRepeater() { ForEach(listeners_, Delete); } void TestEventRepeater::Append(TestEventListener *listener) { listeners_.push_back(listener); } // TODO(vladl@google.com): Factor the search functionality into Vector::Find. TestEventListener* TestEventRepeater::Release(TestEventListener *listener) { for (size_t i = 0; i < listeners_.size(); ++i) { if (listeners_[i] == listener) { listeners_.erase(listeners_.begin() + i); return listener; } } return NULL; } // Since most methods are very similar, use macros to reduce boilerplate. // This defines a member that forwards the call to all listeners. #define GTEST_REPEATER_METHOD_(Name, Type) \ void TestEventRepeater::Name(const Type& parameter) { \ if (forwarding_enabled_) { \ for (size_t i = 0; i < listeners_.size(); i++) { \ listeners_[i]->Name(parameter); \ } \ } \ } // This defines a member that forwards the call to all listeners in reverse // order. #define GTEST_REVERSE_REPEATER_METHOD_(Name, Type) \ void TestEventRepeater::Name(const Type& parameter) { \ if (forwarding_enabled_) { \ for (int i = static_cast(listeners_.size()) - 1; i >= 0; i--) { \ listeners_[i]->Name(parameter); \ } \ } \ } GTEST_REPEATER_METHOD_(OnTestProgramStart, UnitTest) GTEST_REPEATER_METHOD_(OnEnvironmentsSetUpStart, UnitTest) GTEST_REPEATER_METHOD_(OnTestCaseStart, TestCase) GTEST_REPEATER_METHOD_(OnTestStart, TestInfo) GTEST_REPEATER_METHOD_(OnTestPartResult, TestPartResult) GTEST_REPEATER_METHOD_(OnEnvironmentsTearDownStart, UnitTest) GTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsSetUpEnd, UnitTest) GTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsTearDownEnd, UnitTest) GTEST_REVERSE_REPEATER_METHOD_(OnTestEnd, TestInfo) GTEST_REVERSE_REPEATER_METHOD_(OnTestCaseEnd, TestCase) GTEST_REVERSE_REPEATER_METHOD_(OnTestProgramEnd, UnitTest) #undef GTEST_REPEATER_METHOD_ #undef GTEST_REVERSE_REPEATER_METHOD_ void TestEventRepeater::OnTestIterationStart(const UnitTest& unit_test, int iteration) { if (forwarding_enabled_) { for (size_t i = 0; i < listeners_.size(); i++) { listeners_[i]->OnTestIterationStart(unit_test, iteration); } } } void TestEventRepeater::OnTestIterationEnd(const UnitTest& unit_test, int iteration) { if (forwarding_enabled_) { for (int i = static_cast(listeners_.size()) - 1; i >= 0; i--) { listeners_[i]->OnTestIterationEnd(unit_test, iteration); } } } // End TestEventRepeater // This class generates an XML output file. class XmlUnitTestResultPrinter : public EmptyTestEventListener { public: explicit XmlUnitTestResultPrinter(const char* output_file); virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); private: // Is c a whitespace character that is normalized to a space character // when it appears in an XML attribute value? static bool IsNormalizableWhitespace(char c) { return c == 0x9 || c == 0xA || c == 0xD; } // May c appear in a well-formed XML document? static bool IsValidXmlCharacter(char c) { return IsNormalizableWhitespace(c) || c >= 0x20; } // Returns an XML-escaped copy of the input string str. If // is_attribute is true, the text is meant to appear as an attribute // value, and normalizable whitespace is preserved by replacing it // with character references. static std::string EscapeXml(const std::string& str, bool is_attribute); // Returns the given string with all characters invalid in XML removed. 
static std::string RemoveInvalidXmlCharacters(const std::string& str); // Convenience wrapper around EscapeXml when str is an attribute value. static std::string EscapeXmlAttribute(const std::string& str) { return EscapeXml(str, true); } // Convenience wrapper around EscapeXml when str is not an attribute value. static std::string EscapeXmlText(const char* str) { return EscapeXml(str, false); } // Verifies that the given attribute belongs to the given element and // streams the attribute as XML. static void OutputXmlAttribute(std::ostream* stream, const std::string& element_name, const std::string& name, const std::string& value); // Streams an XML CDATA section, escaping invalid CDATA sequences as needed. static void OutputXmlCDataSection(::std::ostream* stream, const char* data); // Streams an XML representation of a TestInfo object. static void OutputXmlTestInfo(::std::ostream* stream, const char* test_case_name, const TestInfo& test_info); // Prints an XML representation of a TestCase object static void PrintXmlTestCase(::std::ostream* stream, const TestCase& test_case); // Prints an XML summary of unit_test to output stream out. static void PrintXmlUnitTest(::std::ostream* stream, const UnitTest& unit_test); // Produces a string representing the test properties in a result as space // delimited XML attributes based on the property key="value" pairs. // When the std::string is not empty, it includes a space at the beginning, // to delimit this attribute from prior attributes. static std::string TestPropertiesAsXmlAttributes(const TestResult& result); // The output file. const std::string output_file_; GTEST_DISALLOW_COPY_AND_ASSIGN_(XmlUnitTestResultPrinter); }; // Creates a new XmlUnitTestResultPrinter. XmlUnitTestResultPrinter::XmlUnitTestResultPrinter(const char* output_file) : output_file_(output_file) { if (output_file_.c_str() == NULL || output_file_.empty()) { fprintf(stderr, "XML output file may not be null\n"); fflush(stderr); exit(EXIT_FAILURE); } } // Called after the unit test ends. void XmlUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test, int /*iteration*/) { FILE* xmlout = NULL; FilePath output_file(output_file_); FilePath output_dir(output_file.RemoveFileName()); if (output_dir.CreateDirectoriesRecursively()) { xmlout = posix::FOpen(output_file_.c_str(), "w"); } if (xmlout == NULL) { // TODO(wan): report the reason of the failure. // // We don't do it for now as: // // 1. There is no urgent need for it. // 2. It's a bit involved to make the errno variable thread-safe on // all three operating systems (Linux, Windows, and Mac OS). // 3. To interpret the meaning of errno in a thread-safe way, // we need the strerror_r() function, which is not available on // Windows. fprintf(stderr, "Unable to open file \"%s\"\n", output_file_.c_str()); fflush(stderr); exit(EXIT_FAILURE); } std::stringstream stream; PrintXmlUnitTest(&stream, unit_test); fprintf(xmlout, "%s", StringStreamToString(&stream).c_str()); fclose(xmlout); } // Returns an XML-escaped copy of the input string str. If is_attribute // is true, the text is meant to appear as an attribute value, and // normalizable whitespace is preserved by replacing it with character // references. // // Invalid XML characters in str, if any, are stripped from the output. // It is expected that most, if not all, of the text processed by this // module will consist of ordinary English text. 
// If this module is ever modified to produce version 1.1 XML output,
// most invalid characters can be retained using character references.
// TODO(wan): It might be nice to have a minimally invasive, human-readable
// escaping scheme for invalid characters, rather than dropping them.
std::string XmlUnitTestResultPrinter::EscapeXml(
    const std::string& str, bool is_attribute) {
  Message m;

  for (size_t i = 0; i < str.size(); ++i) {
    const char ch = str[i];
    switch (ch) {
      case '<':
        m << "&lt;";
        break;
      case '>':
        m << "&gt;";
        break;
      case '&':
        m << "&amp;";
        break;
      case '\'':
        if (is_attribute)
          m << "&apos;";
        else
          m << '\'';
        break;
      case '"':
        if (is_attribute)
          m << "&quot;";
        else
          m << '"';
        break;
      default:
        if (IsValidXmlCharacter(ch)) {
          if (is_attribute && IsNormalizableWhitespace(ch))
            m << "&#x" << String::FormatByte(static_cast<unsigned char>(ch))
              << ";";
          else
            m << ch;
        }
        break;
    }
  }

  return m.GetString();
}

// Returns the given string with all characters invalid in XML removed.
// Currently invalid characters are dropped from the string. An
// alternative is to replace them with certain characters such as . or ?.
std::string XmlUnitTestResultPrinter::RemoveInvalidXmlCharacters(
    const std::string& str) {
  std::string output;
  output.reserve(str.size());
  for (std::string::const_iterator it = str.begin(); it != str.end(); ++it)
    if (IsValidXmlCharacter(*it))
      output.push_back(*it);

  return output;
}

// The following routines generate an XML representation of a UnitTest
// object.
//
// This is how Google Test concepts map to the DTD:
//
// <testsuites name="AllTests">        <-- corresponds to a UnitTest object
//   <testsuite name="testcase-name">  <-- corresponds to a TestCase object
//     <testcase name="test-name">     <-- corresponds to a TestInfo object
//       <failure message="...">...</failure>
//       <failure message="...">...</failure>
//       <failure message="...">...</failure>
//                                     <-- individual assertion failures
//     </testcase>
//   </testsuite>
// </testsuites>

// Formats the given time in milliseconds as seconds.
std::string FormatTimeInMillisAsSeconds(TimeInMillis ms) {
  ::std::stringstream ss;
  ss << ms/1000.0;
  return ss.str();
}

// Converts the given epoch time in milliseconds to a date string in the ISO
// 8601 format, without the timezone information.
std::string FormatEpochTimeInMillisAsIso8601(TimeInMillis ms) {
  // Using non-reentrant version as localtime_r is not portable.
  time_t seconds = static_cast<time_t>(ms / 1000);
#ifdef _MSC_VER
# pragma warning(push)          // Saves the current warning state.
# pragma warning(disable:4996)  // Temporarily disables warning 4996
                                // (function or variable may be unsafe).
  const struct tm* const time_struct = localtime(&seconds);  // NOLINT
# pragma warning(pop)           // Restores the warning state again.
#else
  const struct tm* const time_struct = localtime(&seconds);  // NOLINT
#endif
  if (time_struct == NULL)
    return "";  // Invalid ms value

  // YYYY-MM-DDThh:mm:ss
  return StreamableToString(time_struct->tm_year + 1900) + "-" +
      String::FormatIntWidth2(time_struct->tm_mon + 1) + "-" +
      String::FormatIntWidth2(time_struct->tm_mday) + "T" +
      String::FormatIntWidth2(time_struct->tm_hour) + ":" +
      String::FormatIntWidth2(time_struct->tm_min) + ":" +
      String::FormatIntWidth2(time_struct->tm_sec);
}
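// A small worked example of the helpers above (illustrative inputs, not
// taken from this file):
//
//   EscapeXml("<Foo & \"Bar\">", true)  yields  "&lt;Foo &amp; &quot;Bar&quot;&gt;"
//   EscapeXml("it's", false)            yields  "it's"  (quotes are escaped
//                                               only in attribute values)
//   FormatTimeInMillisAsSeconds(2500)   yields  "2.5"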
// Streams an XML CDATA section, escaping invalid CDATA sequences as needed.
void XmlUnitTestResultPrinter::OutputXmlCDataSection(::std::ostream* stream,
                                                     const char* data) {
  const char* segment = data;
  *stream << "<![CDATA[";
  for (;;) {
    const char* const next_segment = strstr(segment, "]]>");
    if (next_segment != NULL) {
      stream->write(
          segment, static_cast<std::streamsize>(next_segment - segment));
      *stream << "]]>]]&gt;<![CDATA[";
      segment = next_segment + strlen("]]>");
    } else {
      *stream << segment;
      break;
    }
  }
  *stream << "]]>";
}

void XmlUnitTestResultPrinter::OutputXmlAttribute(
    std::ostream* stream,
    const std::string& element_name,
    const std::string& name,
    const std::string& value) {
  const std::vector<std::string>& allowed_names =
      GetReservedAttributesForElement(element_name);

  GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
                   allowed_names.end())
      << "Attribute " << name << " is not allowed for element <"
      << element_name << ">.";

  *stream << " " << name << "=\"" << EscapeXmlAttribute(value) << "\"";
}

// Prints an XML representation of a TestInfo object.
// TODO(wan): There is also value in printing properties with the plain printer.
void XmlUnitTestResultPrinter::OutputXmlTestInfo(::std::ostream* stream,
                                                 const char* test_case_name,
                                                 const TestInfo& test_info) {
  const TestResult& result = *test_info.result();
  const std::string kTestcase = "testcase";

  *stream << "    <testcase";
  OutputXmlAttribute(stream, kTestcase, "name", test_info.name());

  if (test_info.value_param() != NULL) {
    OutputXmlAttribute(stream, kTestcase, "value_param",
                       test_info.value_param());
  }
  if (test_info.type_param() != NULL) {
    OutputXmlAttribute(stream, kTestcase, "type_param",
                       test_info.type_param());
  }

  OutputXmlAttribute(stream, kTestcase, "status",
                     test_info.should_run() ? "run" : "notrun");
  OutputXmlAttribute(stream, kTestcase, "time",
                     FormatTimeInMillisAsSeconds(result.elapsed_time()));
  OutputXmlAttribute(stream, kTestcase, "classname", test_case_name);
  *stream << TestPropertiesAsXmlAttributes(result);

  int failures = 0;
  for (int i = 0; i < result.total_part_count(); ++i) {
    const TestPartResult& part = result.GetTestPartResult(i);
    if (part.failed()) {
      if (++failures == 1) {
        *stream << ">\n";
      }
      const string location = internal::FormatCompilerIndependentFileLocation(
          part.file_name(), part.line_number());
      const string summary = location + "\n" + part.summary();
      *stream << "      <failure message=\""
              << EscapeXmlAttribute(summary.c_str())
              << "\" type=\"\">";
      const string detail = location + "\n" + part.message();
      OutputXmlCDataSection(stream, RemoveInvalidXmlCharacters(detail).c_str());
      *stream << "</failure>\n";
    }
  }

  if (failures == 0)
    *stream << " />\n";
  else
    *stream << "    </testcase>\n";
}

// Prints an XML representation of a TestCase object
void XmlUnitTestResultPrinter::PrintXmlTestCase(std::ostream* stream,
                                                const TestCase& test_case) {
  const std::string kTestsuite = "testsuite";
  *stream << "  <" << kTestsuite;
  OutputXmlAttribute(stream, kTestsuite, "name", test_case.name());
  OutputXmlAttribute(stream, kTestsuite, "tests",
                     StreamableToString(test_case.reportable_test_count()));
  OutputXmlAttribute(stream, kTestsuite, "failures",
                     StreamableToString(test_case.failed_test_count()));
  OutputXmlAttribute(
      stream, kTestsuite, "disabled",
      StreamableToString(test_case.reportable_disabled_test_count()));
  OutputXmlAttribute(stream, kTestsuite, "errors", "0");
  OutputXmlAttribute(stream, kTestsuite, "time",
                     FormatTimeInMillisAsSeconds(test_case.elapsed_time()));
  *stream << TestPropertiesAsXmlAttributes(test_case.ad_hoc_test_result())
          << ">\n";

  for (int i = 0; i < test_case.total_test_count(); ++i) {
    if (test_case.GetTestInfo(i)->is_reportable())
      OutputXmlTestInfo(stream, test_case.name(), *test_case.GetTestInfo(i));
  }
  *stream << "  </" << kTestsuite << ">\n";
}
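// A sketch of what the printer above emits.  The file name and test names
// are made up; the report is normally requested on the command line:
//
//   ./my_tests --gtest_output=xml:report.xml
//
// report.xml then follows the testsuites/testsuite/testcase nesting
// described earlier, roughly:
//
//   <?xml version="1.0" encoding="UTF-8"?>
//   <testsuites tests="1" failures="1" ... name="AllTests">
//     <testsuite name="WidgetTest" tests="1" failures="1" ...>
//       <testcase name="IsPositive" status="run" classname="WidgetTest">
//         <failure message="..." type=""><![CDATA[...]]></failure>
//       </testcase>
//     </testsuite>
//   </testsuites>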
// Prints an XML summary of unit_test to output stream out.
void XmlUnitTestResultPrinter::PrintXmlUnitTest(std::ostream* stream,
                                                const UnitTest& unit_test) {
  const std::string kTestsuites = "testsuites";

  *stream << "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
  *stream << "<" << kTestsuites;

  OutputXmlAttribute(stream, kTestsuites, "tests",
                     StreamableToString(unit_test.reportable_test_count()));
  OutputXmlAttribute(stream, kTestsuites, "failures",
                     StreamableToString(unit_test.failed_test_count()));
  OutputXmlAttribute(
      stream, kTestsuites, "disabled",
      StreamableToString(unit_test.reportable_disabled_test_count()));
  OutputXmlAttribute(stream, kTestsuites, "errors", "0");
  OutputXmlAttribute(
      stream, kTestsuites, "timestamp",
      FormatEpochTimeInMillisAsIso8601(unit_test.start_timestamp()));
  OutputXmlAttribute(stream, kTestsuites, "time",
                     FormatTimeInMillisAsSeconds(unit_test.elapsed_time()));

  if (GTEST_FLAG(shuffle)) {
    OutputXmlAttribute(stream, kTestsuites, "random_seed",
                       StreamableToString(unit_test.random_seed()));
  }

  *stream << TestPropertiesAsXmlAttributes(unit_test.ad_hoc_test_result());

  OutputXmlAttribute(stream, kTestsuites, "name", "AllTests");
  *stream << ">\n";

  for (int i = 0; i < unit_test.total_test_case_count(); ++i) {
    if (unit_test.GetTestCase(i)->reportable_test_count() > 0)
      PrintXmlTestCase(stream, *unit_test.GetTestCase(i));
  }
  *stream << "</" << kTestsuites << ">\n";
}

// Produces a string representing the test properties in a result as space
// delimited XML attributes based on the property key="value" pairs.
std::string XmlUnitTestResultPrinter::TestPropertiesAsXmlAttributes(
    const TestResult& result) {
  Message attributes;
  for (int i = 0; i < result.test_property_count(); ++i) {
    const TestProperty& property = result.GetTestProperty(i);
    attributes << " " << property.key() << "="
        << "\"" << EscapeXmlAttribute(property.value()) << "\"";
  }
  return attributes.GetString();
}

// End XmlUnitTestResultPrinter

#if GTEST_CAN_STREAM_RESULTS_

// Checks if str contains '=', '&', '%' or '\n' characters. If yes,
// replaces them by "%xx" where xx is their hexadecimal value. For
// example, replaces "=" with "%3D". This algorithm is O(strlen(str))
// in both time and space -- important as the input str may contain an
// arbitrarily long test failure message and stack trace.
string StreamingListener::UrlEncode(const char* str) {
  string result;
  result.reserve(strlen(str) + 1);
  for (char ch = *str; ch != '\0'; ch = *++str) {
    switch (ch) {
      case '%':
      case '=':
      case '&':
      case '\n':
        result.append("%" + String::FormatByte(static_cast<unsigned char>(ch)));
        break;
      default:
        result.push_back(ch);
        break;
    }
  }
  return result;
}

void StreamingListener::SocketWriter::MakeConnection() {
  GTEST_CHECK_(sockfd_ == -1)
      << "MakeConnection() can't be called when there is already a connection.";

  addrinfo hints;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;    // To allow both IPv4 and IPv6 addresses.
  hints.ai_socktype = SOCK_STREAM;
  addrinfo* servinfo = NULL;

  // Use the getaddrinfo() to get a linked list of IP addresses for
  // the given host name.
  const int error_num = getaddrinfo(
      host_name_.c_str(), port_num_.c_str(), &hints, &servinfo);
  if (error_num != 0) {
    GTEST_LOG_(WARNING) << "stream_result_to: getaddrinfo() failed: "
                        << gai_strerror(error_num);
  }

  // Loop through all the results and connect to the first we can.
  for (addrinfo* cur_addr = servinfo; sockfd_ == -1 && cur_addr != NULL;
       cur_addr = cur_addr->ai_next) {
    sockfd_ = socket(
        cur_addr->ai_family, cur_addr->ai_socktype, cur_addr->ai_protocol);
    if (sockfd_ != -1) {
      // Connect the client socket to the server socket.
if (connect(sockfd_, cur_addr->ai_addr, cur_addr->ai_addrlen) == -1) { close(sockfd_); sockfd_ = -1; } } } freeaddrinfo(servinfo); // all done with this structure if (sockfd_ == -1) { GTEST_LOG_(WARNING) << "stream_result_to: failed to connect to " << host_name_ << ":" << port_num_; } } // End of class Streaming Listener #endif // GTEST_CAN_STREAM_RESULTS__ // Class ScopedTrace // Pushes the given source file location and message onto a per-thread // trace stack maintained by Google Test. ScopedTrace::ScopedTrace(const char* file, int line, const Message& message) GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) { TraceInfo trace; trace.file = file; trace.line = line; trace.message = message.GetString(); UnitTest::GetInstance()->PushGTestTrace(trace); } // Pops the info pushed by the c'tor. ScopedTrace::~ScopedTrace() GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) { UnitTest::GetInstance()->PopGTestTrace(); } // class OsStackTraceGetter // Returns the current OS stack trace as an std::string. Parameters: // // max_depth - the maximum number of stack frames to be included // in the trace. // skip_count - the number of top frames to be skipped; doesn't count // against max_depth. // string OsStackTraceGetter::CurrentStackTrace(int /* max_depth */, int /* skip_count */) GTEST_LOCK_EXCLUDED_(mutex_) { return ""; } void OsStackTraceGetter::UponLeavingGTest() GTEST_LOCK_EXCLUDED_(mutex_) { } const char* const OsStackTraceGetter::kElidedFramesMarker = "... " GTEST_NAME_ " internal frames ..."; // A helper class that creates the premature-exit file in its // constructor and deletes the file in its destructor. class ScopedPrematureExitFile { public: explicit ScopedPrematureExitFile(const char* premature_exit_filepath) : premature_exit_filepath_(premature_exit_filepath) { // If a path to the premature-exit file is specified... if (premature_exit_filepath != NULL && *premature_exit_filepath != '\0') { // create the file with a single "0" character in it. I/O // errors are ignored as there's nothing better we can do and we // don't want to fail the test because of this. FILE* pfile = posix::FOpen(premature_exit_filepath, "w"); fwrite("0", 1, 1, pfile); fclose(pfile); } } ~ScopedPrematureExitFile() { if (premature_exit_filepath_ != NULL && *premature_exit_filepath_ != '\0') { remove(premature_exit_filepath_); } } private: const char* const premature_exit_filepath_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedPrematureExitFile); }; } // namespace internal // class TestEventListeners TestEventListeners::TestEventListeners() : repeater_(new internal::TestEventRepeater()), default_result_printer_(NULL), default_xml_generator_(NULL) { } TestEventListeners::~TestEventListeners() { delete repeater_; } // Returns the standard listener responsible for the default console // output. Can be removed from the listeners list to shut down default // console output. Note that removing this object from the listener list // with Release transfers its ownership to the user. void TestEventListeners::Append(TestEventListener* listener) { repeater_->Append(listener); } // Removes the given event listener from the list and returns it. It then // becomes the caller's responsibility to delete the listener. Returns // NULL if the listener is not found in the list. 
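// A minimal sketch of how user code plugs into the listener machinery above.
// The listener name is hypothetical and the snippet is kept out of the build:
#if 0
class BriefLogger : public ::testing::EmptyTestEventListener {
  virtual void OnTestStart(const ::testing::TestInfo& test_info) {
    printf("*** %s.%s starting\n",
           test_info.test_case_name(), test_info.name());
  }
};

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  listeners.Append(new BriefLogger);  // the listener list takes ownership
  return RUN_ALL_TESTS();
}
#endif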
TestEventListener* TestEventListeners::Release(TestEventListener* listener) { if (listener == default_result_printer_) default_result_printer_ = NULL; else if (listener == default_xml_generator_) default_xml_generator_ = NULL; return repeater_->Release(listener); } // Returns repeater that broadcasts the TestEventListener events to all // subscribers. TestEventListener* TestEventListeners::repeater() { return repeater_; } // Sets the default_result_printer attribute to the provided listener. // The listener is also added to the listener list and previous // default_result_printer is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void TestEventListeners::SetDefaultResultPrinter(TestEventListener* listener) { if (default_result_printer_ != listener) { // It is an error to pass this method a listener that is already in the // list. delete Release(default_result_printer_); default_result_printer_ = listener; if (listener != NULL) Append(listener); } } // Sets the default_xml_generator attribute to the provided listener. The // listener is also added to the listener list and previous // default_xml_generator is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void TestEventListeners::SetDefaultXmlGenerator(TestEventListener* listener) { if (default_xml_generator_ != listener) { // It is an error to pass this method a listener that is already in the // list. delete Release(default_xml_generator_); default_xml_generator_ = listener; if (listener != NULL) Append(listener); } } // Controls whether events will be forwarded by the repeater to the // listeners in the list. bool TestEventListeners::EventForwardingEnabled() const { return repeater_->forwarding_enabled(); } void TestEventListeners::SuppressEventForwarding() { repeater_->set_forwarding_enabled(false); } // class UnitTest // Gets the singleton UnitTest object. The first time this method is // called, a UnitTest object is constructed and returned. Consecutive // calls will return the same object. // // We don't protect this under mutex_ as a user is not supposed to // call this before main() starts, from which point on the return // value will never change. UnitTest* UnitTest::GetInstance() { // When compiled with MSVC 7.1 in optimized mode, destroying the // UnitTest object upon exiting the program messes up the exit code, // causing successful tests to appear failed. We have to use a // different implementation in this case to bypass the compiler bug. // This implementation makes the compiler happy, at the cost of // leaking the UnitTest object. // CodeGear C++Builder insists on a public destructor for the // default implementation. Use this implementation to keep good OO // design with private destructor. #if (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__) static UnitTest* const instance = new UnitTest; return instance; #else static UnitTest instance; return &instance; #endif // (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__) } // Gets the number of successful test cases. int UnitTest::successful_test_case_count() const { return impl()->successful_test_case_count(); } // Gets the number of failed test cases. 
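// A minimal sketch (user code, not part of this file) of reading the
// accessors defined below once the run has finished:
#if 0
int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  const int status = RUN_ALL_TESTS();
  const ::testing::UnitTest& unit_test = *::testing::UnitTest::GetInstance();
  printf("%d of %d selected tests passed (%d failed)\n",
         unit_test.successful_test_count(),
         unit_test.test_to_run_count(),
         unit_test.failed_test_count());
  return status;
}
#endif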
int UnitTest::failed_test_case_count() const { return impl()->failed_test_case_count(); } // Gets the number of all test cases. int UnitTest::total_test_case_count() const { return impl()->total_test_case_count(); } // Gets the number of all test cases that contain at least one test // that should run. int UnitTest::test_case_to_run_count() const { return impl()->test_case_to_run_count(); } // Gets the number of successful tests. int UnitTest::successful_test_count() const { return impl()->successful_test_count(); } // Gets the number of failed tests. int UnitTest::failed_test_count() const { return impl()->failed_test_count(); } // Gets the number of disabled tests that will be reported in the XML report. int UnitTest::reportable_disabled_test_count() const { return impl()->reportable_disabled_test_count(); } // Gets the number of disabled tests. int UnitTest::disabled_test_count() const { return impl()->disabled_test_count(); } // Gets the number of tests to be printed in the XML report. int UnitTest::reportable_test_count() const { return impl()->reportable_test_count(); } // Gets the number of all tests. int UnitTest::total_test_count() const { return impl()->total_test_count(); } // Gets the number of tests that should run. int UnitTest::test_to_run_count() const { return impl()->test_to_run_count(); } // Gets the time of the test program start, in ms from the start of the // UNIX epoch. internal::TimeInMillis UnitTest::start_timestamp() const { return impl()->start_timestamp(); } // Gets the elapsed time, in milliseconds. internal::TimeInMillis UnitTest::elapsed_time() const { return impl()->elapsed_time(); } // Returns true iff the unit test passed (i.e. all test cases passed). bool UnitTest::Passed() const { return impl()->Passed(); } // Returns true iff the unit test failed (i.e. some test case failed // or something outside of all tests failed). bool UnitTest::Failed() const { return impl()->Failed(); } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. const TestCase* UnitTest::GetTestCase(int i) const { return impl()->GetTestCase(i); } // Returns the TestResult containing information on test failures and // properties logged outside of individual test cases. const TestResult& UnitTest::ad_hoc_test_result() const { return *impl()->ad_hoc_test_result(); } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. TestCase* UnitTest::GetMutableTestCase(int i) { return impl()->GetMutableTestCase(i); } // Returns the list of event listeners that can be used to track events // inside Google Test. TestEventListeners& UnitTest::listeners() { return *impl()->listeners(); } // Registers and returns a global test environment. When a test // program is run, all global test environments will be set-up in the // order they were registered. After all tests in the program have // finished, all global test environments will be torn-down in the // *reverse* order they were registered. // // The UnitTest object takes ownership of the given environment. // // We don't protect this under mutex_, as we only support calling it // from the main thread. Environment* UnitTest::AddEnvironment(Environment* env) { if (env == NULL) { return NULL; } impl_->environments().push_back(env); return env; } // Adds a TestPartResult to the current TestResult object. All Google Test // assertion macros (e.g. 
ASSERT_TRUE, EXPECT_EQ, etc) eventually call // this to report their results. The user code should use the // assertion macros instead of calling this directly. void UnitTest::AddTestPartResult( TestPartResult::Type result_type, const char* file_name, int line_number, const std::string& message, const std::string& os_stack_trace) GTEST_LOCK_EXCLUDED_(mutex_) { Message msg; msg << message; internal::MutexLock lock(&mutex_); if (impl_->gtest_trace_stack().size() > 0) { msg << "\n" << GTEST_NAME_ << " trace:"; for (int i = static_cast(impl_->gtest_trace_stack().size()); i > 0; --i) { const internal::TraceInfo& trace = impl_->gtest_trace_stack()[i - 1]; msg << "\n" << internal::FormatFileLocation(trace.file, trace.line) << " " << trace.message; } } if (os_stack_trace.c_str() != NULL && !os_stack_trace.empty()) { msg << internal::kStackTraceMarker << os_stack_trace; } const TestPartResult result = TestPartResult(result_type, file_name, line_number, msg.GetString().c_str()); impl_->GetTestPartResultReporterForCurrentThread()-> ReportTestPartResult(result); if (result_type != TestPartResult::kSuccess) { // gtest_break_on_failure takes precedence over // gtest_throw_on_failure. This allows a user to set the latter // in the code (perhaps in order to use Google Test assertions // with another testing framework) and specify the former on the // command line for debugging. if (GTEST_FLAG(break_on_failure)) { #if GTEST_OS_WINDOWS // Using DebugBreak on Windows allows gtest to still break into a debugger // when a failure happens and both the --gtest_break_on_failure and // the --gtest_catch_exceptions flags are specified. DebugBreak(); #else // Dereference NULL through a volatile pointer to prevent the compiler // from removing. We use this rather than abort() or __builtin_trap() for // portability: Symbian doesn't implement abort() well, and some debuggers // don't correctly trap abort(). *static_cast(NULL) = 1; #endif // GTEST_OS_WINDOWS } else if (GTEST_FLAG(throw_on_failure)) { #if GTEST_HAS_EXCEPTIONS throw internal::GoogleTestFailureException(result); #else // We cannot call abort() as it generates a pop-up in debug mode // that cannot be suppressed in VC 7.1 or below. exit(1); #endif } } } // Adds a TestProperty to the current TestResult object when invoked from // inside a test, to current TestCase's ad_hoc_test_result_ when invoked // from SetUpTestCase or TearDownTestCase, or to the global property set // when invoked elsewhere. If the result already contains a property with // the same key, the value will be updated. void UnitTest::RecordProperty(const std::string& key, const std::string& value) { impl_->RecordProperty(TestProperty(key, value)); } // Runs all tests in this UnitTest object and prints the result. // Returns 0 if successful, or 1 otherwise. // // We don't protect this under mutex_, as we only support calling it // from the main thread. int UnitTest::Run() { const bool in_death_test_child_process = internal::GTEST_FLAG(internal_run_death_test).length() > 0; // Google Test implements this protocol for catching that a test // program exits before returning control to Google Test: // // 1. Upon start, Google Test creates a file whose absolute path // is specified by the environment variable // TEST_PREMATURE_EXIT_FILE. // 2. When Google Test has finished its work, it deletes the file. 
// // This allows a test runner to set TEST_PREMATURE_EXIT_FILE before // running a Google-Test-based test program and check the existence // of the file at the end of the test execution to see if it has // exited prematurely. // If we are in the child process of a death test, don't // create/delete the premature exit file, as doing so is unnecessary // and will confuse the parent process. Otherwise, create/delete // the file upon entering/leaving this function. If the program // somehow exits before this function has a chance to return, the // premature-exit file will be left undeleted, causing a test runner // that understands the premature-exit-file protocol to report the // test as having failed. const internal::ScopedPrematureExitFile premature_exit_file( in_death_test_child_process ? NULL : internal::posix::GetEnv("TEST_PREMATURE_EXIT_FILE")); // Captures the value of GTEST_FLAG(catch_exceptions). This value will be // used for the duration of the program. impl()->set_catch_exceptions(GTEST_FLAG(catch_exceptions)); #if GTEST_HAS_SEH // Either the user wants Google Test to catch exceptions thrown by the // tests or this is executing in the context of death test child // process. In either case the user does not want to see pop-up dialogs // about crashes - they are expected. if (impl()->catch_exceptions() || in_death_test_child_process) { # if !GTEST_OS_WINDOWS_MOBILE // SetErrorMode doesn't exist on CE. SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOALIGNMENTFAULTEXCEPT | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX); # endif // !GTEST_OS_WINDOWS_MOBILE # if (defined(_MSC_VER) || GTEST_OS_WINDOWS_MINGW) && !GTEST_OS_WINDOWS_MOBILE // Death test children can be terminated with _abort(). On Windows, // _abort() can show a dialog with a warning message. This forces the // abort message to go to stderr instead. _set_error_mode(_OUT_TO_STDERR); # endif # if _MSC_VER >= 1400 && !GTEST_OS_WINDOWS_MOBILE // In the debug version, Visual Studio pops up a separate dialog // offering a choice to debug the aborted program. We need to suppress // this dialog or it will pop up for every EXPECT/ASSERT_DEATH statement // executed. Google Test will notify the user of any unexpected // failure via stderr. // // VC++ doesn't define _set_abort_behavior() prior to the version 8.0. // Users of prior VC versions shall suffer the agony and pain of // clicking through the countless debug dialogs. // TODO(vladl@google.com): find a way to suppress the abort dialog() in the // debug mode when compiled with VC 7.1 or lower. if (!GTEST_FLAG(break_on_failure)) _set_abort_behavior( 0x0, // Clear the following flags: _WRITE_ABORT_MSG | _CALL_REPORTFAULT); // pop-up window, core dump. # endif } #endif // GTEST_HAS_SEH return internal::HandleExceptionsInMethodIfSupported( impl(), &internal::UnitTestImpl::RunAllTests, "auxiliary test code (environments or event listeners)") ? 0 : 1; } // Returns the working directory when the first TEST() or TEST_F() was // executed. const char* UnitTest::original_working_dir() const { return impl_->original_working_dir_.c_str(); } // Returns the TestCase object for the test that's currently running, // or NULL if no test is running. const TestCase* UnitTest::current_test_case() const GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); return impl_->current_test_case(); } // Returns the TestInfo object for the test that's currently running, // or NULL if no test is running. 
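// A minimal sketch of the usual client of current_test_info(): a fixture
// that logs which test is about to run.  The fixture name is hypothetical
// and the snippet is kept out of the build:
#if 0
class LoggingFixture : public ::testing::Test {
 protected:
  virtual void SetUp() {
    const ::testing::TestInfo* const info =
        ::testing::UnitTest::GetInstance()->current_test_info();
    printf("Setting up for %s.%s\n", info->test_case_name(), info->name());
  }
};
#endif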
const TestInfo* UnitTest::current_test_info() const GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); return impl_->current_test_info(); } // Returns the random seed used at the start of the current test run. int UnitTest::random_seed() const { return impl_->random_seed(); } #if GTEST_HAS_PARAM_TEST // Returns ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. internal::ParameterizedTestCaseRegistry& UnitTest::parameterized_test_registry() GTEST_LOCK_EXCLUDED_(mutex_) { return impl_->parameterized_test_registry(); } #endif // GTEST_HAS_PARAM_TEST // Creates an empty UnitTest. UnitTest::UnitTest() { impl_ = new internal::UnitTestImpl(this); } // Destructor of UnitTest. UnitTest::~UnitTest() { delete impl_; } // Pushes a trace defined by SCOPED_TRACE() on to the per-thread // Google Test trace stack. void UnitTest::PushGTestTrace(const internal::TraceInfo& trace) GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); impl_->gtest_trace_stack().push_back(trace); } // Pops a trace from the per-thread Google Test trace stack. void UnitTest::PopGTestTrace() GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); impl_->gtest_trace_stack().pop_back(); } namespace internal { UnitTestImpl::UnitTestImpl(UnitTest* parent) : parent_(parent), #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4355) // Temporarily disables warning 4355 // (using this in initializer). default_global_test_part_result_reporter_(this), default_per_thread_test_part_result_reporter_(this), # pragma warning(pop) // Restores the warning state again. #else default_global_test_part_result_reporter_(this), default_per_thread_test_part_result_reporter_(this), #endif // _MSC_VER global_test_part_result_repoter_( &default_global_test_part_result_reporter_), per_thread_test_part_result_reporter_( &default_per_thread_test_part_result_reporter_), #if GTEST_HAS_PARAM_TEST parameterized_test_registry_(), parameterized_tests_registered_(false), #endif // GTEST_HAS_PARAM_TEST last_death_test_case_(-1), current_test_case_(NULL), current_test_info_(NULL), ad_hoc_test_result_(), os_stack_trace_getter_(NULL), post_flag_parse_init_performed_(false), random_seed_(0), // Will be overridden by the flag before first use. random_(0), // Will be reseeded before first use. start_timestamp_(0), elapsed_time_(0), #if GTEST_HAS_DEATH_TEST death_test_factory_(new DefaultDeathTestFactory), #endif // Will be overridden by the flag before first use. catch_exceptions_(false) { listeners()->SetDefaultResultPrinter(new PrettyUnitTestResultPrinter); } UnitTestImpl::~UnitTestImpl() { // Deletes every TestCase. ForEach(test_cases_, internal::Delete); // Deletes every Environment. ForEach(environments_, internal::Delete); delete os_stack_trace_getter_; } // Adds a TestProperty to the current TestResult object when invoked in a // context of a test, to current test case's ad_hoc_test_result when invoke // from SetUpTestCase/TearDownTestCase, or to the global property set // otherwise. If the result already contains a property with the same key, // the value will be updated. void UnitTestImpl::RecordProperty(const TestProperty& test_property) { std::string xml_element; TestResult* test_result; // TestResult appropriate for property recording. 
if (current_test_info_ != NULL) { xml_element = "testcase"; test_result = &(current_test_info_->result_); } else if (current_test_case_ != NULL) { xml_element = "testsuite"; test_result = &(current_test_case_->ad_hoc_test_result_); } else { xml_element = "testsuites"; test_result = &ad_hoc_test_result_; } test_result->RecordProperty(xml_element, test_property); } #if GTEST_HAS_DEATH_TEST // Disables event forwarding if the control is currently in a death test // subprocess. Must not be called before InitGoogleTest. void UnitTestImpl::SuppressTestEventsIfInSubprocess() { if (internal_run_death_test_flag_.get() != NULL) listeners()->SuppressEventForwarding(); } #endif // GTEST_HAS_DEATH_TEST // Initializes event listeners performing XML output as specified by // UnitTestOptions. Must not be called before InitGoogleTest. void UnitTestImpl::ConfigureXmlOutput() { const std::string& output_format = UnitTestOptions::GetOutputFormat(); if (output_format == "xml") { listeners()->SetDefaultXmlGenerator(new XmlUnitTestResultPrinter( UnitTestOptions::GetAbsolutePathToOutputFile().c_str())); } else if (output_format != "") { printf("WARNING: unrecognized output format \"%s\" ignored.\n", output_format.c_str()); fflush(stdout); } } #if GTEST_CAN_STREAM_RESULTS_ // Initializes event listeners for streaming test results in string form. // Must not be called before InitGoogleTest. void UnitTestImpl::ConfigureStreamingOutput() { const std::string& target = GTEST_FLAG(stream_result_to); if (!target.empty()) { const size_t pos = target.find(':'); if (pos != std::string::npos) { listeners()->Append(new StreamingListener(target.substr(0, pos), target.substr(pos+1))); } else { printf("WARNING: unrecognized streaming target \"%s\" ignored.\n", target.c_str()); fflush(stdout); } } } #endif // GTEST_CAN_STREAM_RESULTS_ // Performs initialization dependent upon flag values obtained in // ParseGoogleTestFlagsOnly. Is called from InitGoogleTest after the call to // ParseGoogleTestFlagsOnly. In case a user neglects to call InitGoogleTest // this function is also called from RunAllTests. Since this function can be // called more than once, it has to be idempotent. void UnitTestImpl::PostFlagParsingInit() { // Ensures that this function does not execute more than once. if (!post_flag_parse_init_performed_) { post_flag_parse_init_performed_ = true; #if GTEST_HAS_DEATH_TEST InitDeathTestSubprocessControlInfo(); SuppressTestEventsIfInSubprocess(); #endif // GTEST_HAS_DEATH_TEST // Registers parameterized tests. This makes parameterized tests // available to the UnitTest reflection API without running // RUN_ALL_TESTS. RegisterParameterizedTests(); // Configures listeners for XML output. This makes it possible for users // to shut down the default XML output before invoking RUN_ALL_TESTS. ConfigureXmlOutput(); #if GTEST_CAN_STREAM_RESULTS_ // Configures listeners for streaming test results to the specified server. ConfigureStreamingOutput(); #endif // GTEST_CAN_STREAM_RESULTS_ } } // A predicate that checks the name of a TestCase against a known // value. // // This is used for implementation of the UnitTest class only. We put // it in the anonymous namespace to prevent polluting the outer // namespace. // // TestCaseNameIs is copyable. class TestCaseNameIs { public: // Constructor. explicit TestCaseNameIs(const std::string& name) : name_(name) {} // Returns true iff the name of test_case matches name_. 
bool operator()(const TestCase* test_case) const { return test_case != NULL && strcmp(test_case->name(), name_.c_str()) == 0; } private: std::string name_; }; // Finds and returns a TestCase with the given name. If one doesn't // exist, creates one and returns it. It's the CALLER'S // RESPONSIBILITY to ensure that this function is only called WHEN THE // TESTS ARE NOT SHUFFLED. // // Arguments: // // test_case_name: name of the test case // type_param: the name of the test case's type parameter, or NULL if // this is not a typed or a type-parameterized test case. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase* UnitTestImpl::GetTestCase(const char* test_case_name, const char* type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc) { // Can we find a TestCase with the given name? const std::vector::const_iterator test_case = std::find_if(test_cases_.begin(), test_cases_.end(), TestCaseNameIs(test_case_name)); if (test_case != test_cases_.end()) return *test_case; // No. Let's create one. TestCase* const new_test_case = new TestCase(test_case_name, type_param, set_up_tc, tear_down_tc); // Is this a death test case? if (internal::UnitTestOptions::MatchesFilter(test_case_name, kDeathTestCaseFilter)) { // Yes. Inserts the test case after the last death test case // defined so far. This only works when the test cases haven't // been shuffled. Otherwise we may end up running a death test // after a non-death test. ++last_death_test_case_; test_cases_.insert(test_cases_.begin() + last_death_test_case_, new_test_case); } else { // No. Appends to the end of the list. test_cases_.push_back(new_test_case); } test_case_indices_.push_back(static_cast(test_case_indices_.size())); return new_test_case; } // Helpers for setting up / tearing down the given environment. They // are for use in the ForEach() function. static void SetUpEnvironment(Environment* env) { env->SetUp(); } static void TearDownEnvironment(Environment* env) { env->TearDown(); } // Runs all tests in this UnitTest object, prints the result, and // returns true if all tests are successful. If any exception is // thrown during a test, the test is considered to be failed, but the // rest of the tests will still be run. // // When parameterized tests are enabled, it expands and registers // parameterized tests first in RegisterParameterizedTests(). // All other functions called from RunAllTests() may safely assume that // parameterized tests are ready to be counted and run. bool UnitTestImpl::RunAllTests() { // Makes sure InitGoogleTest() was called. if (!GTestIsInitialized()) { printf("%s", "\nThis test program did NOT call ::testing::InitGoogleTest " "before calling RUN_ALL_TESTS(). Please fix it.\n"); return false; } // Do not run any test if the --help flag was specified. if (g_help_flag) return true; // Repeats the call to the post-flag parsing initialization in case the // user didn't call InitGoogleTest. PostFlagParsingInit(); // Even if sharding is not on, test runners may want to use the // GTEST_SHARD_STATUS_FILE to query whether the test supports the sharding // protocol. internal::WriteToShardStatusFileIfNeeded(); // True iff we are in a subprocess for running a thread-safe-style // death test. 
bool in_subprocess_for_death_test = false; #if GTEST_HAS_DEATH_TEST in_subprocess_for_death_test = (internal_run_death_test_flag_.get() != NULL); #endif // GTEST_HAS_DEATH_TEST const bool should_shard = ShouldShard(kTestTotalShards, kTestShardIndex, in_subprocess_for_death_test); // Compares the full test names with the filter to decide which // tests to run. const bool has_tests_to_run = FilterTests(should_shard ? HONOR_SHARDING_PROTOCOL : IGNORE_SHARDING_PROTOCOL) > 0; // Lists the tests and exits if the --gtest_list_tests flag was specified. if (GTEST_FLAG(list_tests)) { // This must be called *after* FilterTests() has been called. ListTestsMatchingFilter(); return true; } random_seed_ = GTEST_FLAG(shuffle) ? GetRandomSeedFromFlag(GTEST_FLAG(random_seed)) : 0; // True iff at least one test has failed. bool failed = false; TestEventListener* repeater = listeners()->repeater(); start_timestamp_ = GetTimeInMillis(); repeater->OnTestProgramStart(*parent_); // How many times to repeat the tests? We don't want to repeat them // when we are inside the subprocess of a death test. const int repeat = in_subprocess_for_death_test ? 1 : GTEST_FLAG(repeat); // Repeats forever if the repeat count is negative. const bool forever = repeat < 0; for (int i = 0; forever || i != repeat; i++) { // We want to preserve failures generated by ad-hoc test // assertions executed before RUN_ALL_TESTS(). ClearNonAdHocTestResult(); const TimeInMillis start = GetTimeInMillis(); // Shuffles test cases and tests if requested. if (has_tests_to_run && GTEST_FLAG(shuffle)) { random()->Reseed(random_seed_); // This should be done before calling OnTestIterationStart(), // such that a test event listener can see the actual test order // in the event. ShuffleTests(); } // Tells the unit test event listeners that the tests are about to start. repeater->OnTestIterationStart(*parent_, i); // Runs each test case if there is at least one test to run. if (has_tests_to_run) { // Sets up all environments beforehand. repeater->OnEnvironmentsSetUpStart(*parent_); ForEach(environments_, SetUpEnvironment); repeater->OnEnvironmentsSetUpEnd(*parent_); // Runs the tests only if there was no fatal failure during global // set-up. if (!Test::HasFatalFailure()) { for (int test_index = 0; test_index < total_test_case_count(); test_index++) { GetMutableTestCase(test_index)->Run(); } } // Tears down all environments in reverse order afterwards. repeater->OnEnvironmentsTearDownStart(*parent_); std::for_each(environments_.rbegin(), environments_.rend(), TearDownEnvironment); repeater->OnEnvironmentsTearDownEnd(*parent_); } elapsed_time_ = GetTimeInMillis() - start; // Tells the unit test event listener that the tests have just finished. repeater->OnTestIterationEnd(*parent_, i); // Gets the result and clears it. if (!Passed()) { failed = true; } // Restores the original test order after the iteration. This // allows the user to quickly repro a failure that happens in the // N-th iteration without repeating the first (N - 1) iterations. // This is not enclosed in "if (GTEST_FLAG(shuffle)) { ... }", in // case the user somehow changes the value of the flag somewhere // (it's always safe to unshuffle the tests). UnshuffleTests(); if (GTEST_FLAG(shuffle)) { // Picks a new random seed for each iteration. random_seed_ = GetNextRandomSeed(random_seed_); } } repeater->OnTestProgramEnd(*parent_); return !failed; } // Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file // if the variable is present. 
If a file already exists at this location, this // function will write over it. If the variable is present, but the file cannot // be created, prints an error and exits. void WriteToShardStatusFileIfNeeded() { const char* const test_shard_file = posix::GetEnv(kTestShardStatusFile); if (test_shard_file != NULL) { FILE* const file = posix::FOpen(test_shard_file, "w"); if (file == NULL) { ColoredPrintf(COLOR_RED, "Could not write to the test shard status file \"%s\" " "specified by the %s environment variable.\n", test_shard_file, kTestShardStatusFile); fflush(stdout); exit(EXIT_FAILURE); } fclose(file); } } // Checks whether sharding is enabled by examining the relevant // environment variable values. If the variables are present, // but inconsistent (i.e., shard_index >= total_shards), prints // an error and exits. If in_subprocess_for_death_test, sharding is // disabled because it must only be applied to the original test // process. Otherwise, we could filter out death tests we intended to execute. bool ShouldShard(const char* total_shards_env, const char* shard_index_env, bool in_subprocess_for_death_test) { if (in_subprocess_for_death_test) { return false; } const Int32 total_shards = Int32FromEnvOrDie(total_shards_env, -1); const Int32 shard_index = Int32FromEnvOrDie(shard_index_env, -1); if (total_shards == -1 && shard_index == -1) { return false; } else if (total_shards == -1 && shard_index != -1) { const Message msg = Message() << "Invalid environment variables: you have " << kTestShardIndex << " = " << shard_index << ", but have left " << kTestTotalShards << " unset.\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } else if (total_shards != -1 && shard_index == -1) { const Message msg = Message() << "Invalid environment variables: you have " << kTestTotalShards << " = " << total_shards << ", but have left " << kTestShardIndex << " unset.\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } else if (shard_index < 0 || shard_index >= total_shards) { const Message msg = Message() << "Invalid environment variables: we require 0 <= " << kTestShardIndex << " < " << kTestTotalShards << ", but you have " << kTestShardIndex << "=" << shard_index << ", " << kTestTotalShards << "=" << total_shards << ".\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } return total_shards > 1; } // Parses the environment variable var as an Int32. If it is unset, // returns default_val. If it is not an Int32, prints an error // and aborts. Int32 Int32FromEnvOrDie(const char* var, Int32 default_val) { const char* str_val = posix::GetEnv(var); if (str_val == NULL) { return default_val; } Int32 result; if (!ParseInt32(Message() << "The value of environment variable " << var, str_val, &result)) { exit(EXIT_FAILURE); } return result; } // Given the total number of shards, the shard index, and the test id, // returns true iff the test should be run on this shard. The test id is // some arbitrary but unique non-negative integer assigned to each test // method. Assumes that 0 <= shard_index < total_shards. bool ShouldRunTestOnShard(int total_shards, int shard_index, int test_id) { return (test_id % total_shards) == shard_index; } // Compares the name of each test with the user-specified filter to // decide whether the test should be run, then records the result in // each TestCase and TestInfo object. 
// If shard_tests == true, further filters tests based on sharding // variables in the environment - see // http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide. // Returns the number of tests that should run. int UnitTestImpl::FilterTests(ReactionToSharding shard_tests) { const Int32 total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ? Int32FromEnvOrDie(kTestTotalShards, -1) : -1; const Int32 shard_index = shard_tests == HONOR_SHARDING_PROTOCOL ? Int32FromEnvOrDie(kTestShardIndex, -1) : -1; // num_runnable_tests are the number of tests that will // run across all shards (i.e., match filter and are not disabled). // num_selected_tests are the number of tests to be run on // this shard. int num_runnable_tests = 0; int num_selected_tests = 0; for (size_t i = 0; i < test_cases_.size(); i++) { TestCase* const test_case = test_cases_[i]; const std::string &test_case_name = test_case->name(); test_case->set_should_run(false); for (size_t j = 0; j < test_case->test_info_list().size(); j++) { TestInfo* const test_info = test_case->test_info_list()[j]; const std::string test_name(test_info->name()); // A test is disabled if test case name or test name matches // kDisableTestFilter. const bool is_disabled = internal::UnitTestOptions::MatchesFilter(test_case_name, kDisableTestFilter) || internal::UnitTestOptions::MatchesFilter(test_name, kDisableTestFilter); test_info->is_disabled_ = is_disabled; const bool matches_filter = internal::UnitTestOptions::FilterMatchesTest(test_case_name, test_name); test_info->matches_filter_ = matches_filter; const bool is_runnable = (GTEST_FLAG(also_run_disabled_tests) || !is_disabled) && matches_filter; const bool is_selected = is_runnable && (shard_tests == IGNORE_SHARDING_PROTOCOL || ShouldRunTestOnShard(total_shards, shard_index, num_runnable_tests)); num_runnable_tests += is_runnable; num_selected_tests += is_selected; test_info->should_run_ = is_selected; test_case->set_should_run(test_case->should_run() || is_selected); } } return num_selected_tests; } // Prints the given C-string on a single line by replacing all '\n' // characters with string "\\n". If the output takes more than // max_length characters, only prints the first max_length characters // and "...". static void PrintOnOneLine(const char* str, int max_length) { if (str != NULL) { for (int i = 0; *str != '\0'; ++str) { if (i >= max_length) { printf("..."); break; } if (*str == '\n') { printf("\\n"); i += 2; } else { printf("%c", *str); ++i; } } } } // Prints the names of the tests matching the user-specified filter flag. void UnitTestImpl::ListTestsMatchingFilter() { // Print at most this many characters for each type/value parameter. const int kMaxParamLength = 250; for (size_t i = 0; i < test_cases_.size(); i++) { const TestCase* const test_case = test_cases_[i]; bool printed_test_case_name = false; for (size_t j = 0; j < test_case->test_info_list().size(); j++) { const TestInfo* const test_info = test_case->test_info_list()[j]; if (test_info->matches_filter_) { if (!printed_test_case_name) { printed_test_case_name = true; printf("%s.", test_case->name()); if (test_case->type_param() != NULL) { printf(" # %s = ", kTypeParamLabel); // We print the type parameter on a single line to make // the output easy to parse by a program. 
PrintOnOneLine(test_case->type_param(), kMaxParamLength); } printf("\n"); } printf(" %s", test_info->name()); if (test_info->value_param() != NULL) { printf(" # %s = ", kValueParamLabel); // We print the value parameter on a single line to make the // output easy to parse by a program. PrintOnOneLine(test_info->value_param(), kMaxParamLength); } printf("\n"); } } } fflush(stdout); } // Sets the OS stack trace getter. // // Does nothing if the input and the current OS stack trace getter are // the same; otherwise, deletes the old getter and makes the input the // current getter. void UnitTestImpl::set_os_stack_trace_getter( OsStackTraceGetterInterface* getter) { if (os_stack_trace_getter_ != getter) { delete os_stack_trace_getter_; os_stack_trace_getter_ = getter; } } // Returns the current OS stack trace getter if it is not NULL; // otherwise, creates an OsStackTraceGetter, makes it the current // getter, and returns it. OsStackTraceGetterInterface* UnitTestImpl::os_stack_trace_getter() { if (os_stack_trace_getter_ == NULL) { os_stack_trace_getter_ = new OsStackTraceGetter; } return os_stack_trace_getter_; } // Returns the TestResult for the test that's currently running, or // the TestResult for the ad hoc test if no test is running. TestResult* UnitTestImpl::current_test_result() { return current_test_info_ ? &(current_test_info_->result_) : &ad_hoc_test_result_; } // Shuffles all test cases, and the tests within each test case, // making sure that death tests are still run first. void UnitTestImpl::ShuffleTests() { // Shuffles the death test cases. ShuffleRange(random(), 0, last_death_test_case_ + 1, &test_case_indices_); // Shuffles the non-death test cases. ShuffleRange(random(), last_death_test_case_ + 1, static_cast(test_cases_.size()), &test_case_indices_); // Shuffles the tests inside each test case. for (size_t i = 0; i < test_cases_.size(); i++) { test_cases_[i]->ShuffleTests(random()); } } // Restores the test cases and tests to their order before the first shuffle. void UnitTestImpl::UnshuffleTests() { for (size_t i = 0; i < test_cases_.size(); i++) { // Unshuffles the tests in each test case. test_cases_[i]->UnshuffleTests(); // Resets the index of each test case. test_case_indices_[i] = static_cast(i); } } // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in // the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't. std::string GetCurrentOsStackTraceExceptTop(UnitTest* /*unit_test*/, int skip_count) { // We pass skip_count + 1 to skip this wrapper function in addition // to what the user really wants to skip. return GetUnitTestImpl()->CurrentOsStackTraceExceptTop(skip_count + 1); } // Used by the GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_ macro to // suppress unreachable code warnings. namespace { class ClassUniqueToAlwaysTrue {}; } bool IsTrue(bool condition) { return condition; } bool AlwaysTrue() { #if GTEST_HAS_EXCEPTIONS // This condition is always false so AlwaysTrue() never actually throws, // but it makes the compiler think that it may throw. 
if (IsTrue(false)) throw ClassUniqueToAlwaysTrue(); #endif // GTEST_HAS_EXCEPTIONS return true; } // If *pstr starts with the given prefix, modifies *pstr to be right // past the prefix and returns true; otherwise leaves *pstr unchanged // and returns false. None of pstr, *pstr, and prefix can be NULL. bool SkipPrefix(const char* prefix, const char** pstr) { const size_t prefix_len = strlen(prefix); if (strncmp(*pstr, prefix, prefix_len) == 0) { *pstr += prefix_len; return true; } return false; } // Parses a string as a command line flag. The string should have // the format "--flag=value". When def_optional is true, the "=value" // part can be omitted. // // Returns the value of the flag, or NULL if the parsing failed. const char* ParseFlagValue(const char* str, const char* flag, bool def_optional) { // str and flag must not be NULL. if (str == NULL || flag == NULL) return NULL; // The flag must start with "--" followed by GTEST_FLAG_PREFIX_. const std::string flag_str = std::string("--") + GTEST_FLAG_PREFIX_ + flag; const size_t flag_len = flag_str.length(); if (strncmp(str, flag_str.c_str(), flag_len) != 0) return NULL; // Skips the flag name. const char* flag_end = str + flag_len; // When def_optional is true, it's OK to not have a "=value" part. if (def_optional && (flag_end[0] == '\0')) { return flag_end; } // If def_optional is true and there are more characters after the // flag name, or if def_optional is false, there must be a '=' after // the flag name. if (flag_end[0] != '=') return NULL; // Returns the string after "=". return flag_end + 1; } // Parses a string for a bool flag, in the form of either // "--flag=value" or "--flag". // // In the former case, the value is taken as true as long as it does // not start with '0', 'f', or 'F'. // // In the latter case, the value is taken as true. // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseBoolFlag(const char* str, const char* flag, bool* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, true); // Aborts if the parsing failed. if (value_str == NULL) return false; // Converts the string value to a bool. *value = !(*value_str == '0' || *value_str == 'f' || *value_str == 'F'); return true; } // Parses a string for an Int32 flag, in the form of // "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseInt32Flag(const char* str, const char* flag, Int32* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, false); // Aborts if the parsing failed. if (value_str == NULL) return false; // Sets *value to the value of the flag. return ParseInt32(Message() << "The value of flag --" << flag, value_str, value); } // Parses a string for a string flag, in the form of // "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseStringFlag(const char* str, const char* flag, std::string* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, false); // Aborts if the parsing failed. if (value_str == NULL) return false; // Sets *value to the value of the flag. 
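// [Editorial note, not part of Google Test] Worked example of the parsing
// conventions above, assuming the usual "gtest_" flag prefix:
//
//   ParseFlagValue("--gtest_shuffle",   "shuffle", true)  -> ""   (bare flag)
//   ParseFlagValue("--gtest_shuffle=0", "shuffle", true)  -> "0"
//   ParseFlagValue("--gtest_repeat=10", "repeat",  false) -> "10"
//   ParseFlagValue("--other_flag=1",    "shuffle", true)  -> NULL (no match)
//
// ParseBoolFlag() then treats any value not starting with '0', 'f' or 'F'
// (including the empty value of a bare flag) as true.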
*value = value_str; return true; } // Determines whether a string has a prefix that Google Test uses for its // flags, i.e., starts with GTEST_FLAG_PREFIX_ or GTEST_FLAG_PREFIX_DASH_. // If Google Test detects that a command line flag has its prefix but is not // recognized, it will print its help message. Flags starting with // GTEST_INTERNAL_PREFIX_ followed by "internal_" are considered Google Test // internal flags and do not trigger the help message. static bool HasGoogleTestFlagPrefix(const char* str) { return (SkipPrefix("--", &str) || SkipPrefix("-", &str) || SkipPrefix("/", &str)) && !SkipPrefix(GTEST_FLAG_PREFIX_ "internal_", &str) && (SkipPrefix(GTEST_FLAG_PREFIX_, &str) || SkipPrefix(GTEST_FLAG_PREFIX_DASH_, &str)); } // Prints a string containing code-encoded text. The following escape // sequences can be used in the string to control the text color: // // @@ prints a single '@' character. // @R changes the color to red. // @G changes the color to green. // @Y changes the color to yellow. // @D changes to the default terminal text color. // // TODO(wan@google.com): Write tests for this once we add stdout // capturing to Google Test. static void PrintColorEncoded(const char* str) { GTestColor color = COLOR_DEFAULT; // The current color. // Conceptually, we split the string into segments divided by escape // sequences. Then we print one segment at a time. At the end of // each iteration, the str pointer advances to the beginning of the // next segment. for (;;) { const char* p = strchr(str, '@'); if (p == NULL) { ColoredPrintf(color, "%s", str); return; } ColoredPrintf(color, "%s", std::string(str, p).c_str()); const char ch = p[1]; str = p + 2; if (ch == '@') { ColoredPrintf(color, "@"); } else if (ch == 'D') { color = COLOR_DEFAULT; } else if (ch == 'R') { color = COLOR_RED; } else if (ch == 'G') { color = COLOR_GREEN; } else if (ch == 'Y') { color = COLOR_YELLOW; } else { --str; } } } static const char kColorEncodedHelpMessage[] = "This program contains tests written using " GTEST_NAME_ ". You can use the\n" "following command line flags to control its behavior:\n" "\n" "Test Selection:\n" " @G--" GTEST_FLAG_PREFIX_ "list_tests@D\n" " List the names of all tests instead of running them. The name of\n" " TEST(Foo, Bar) is \"Foo.Bar\".\n" " @G--" GTEST_FLAG_PREFIX_ "filter=@YPOSTIVE_PATTERNS" "[@G-@YNEGATIVE_PATTERNS]@D\n" " Run only the tests whose name matches one of the positive patterns but\n" " none of the negative patterns. '?' matches any single character; '*'\n" " matches any substring; ':' separates two patterns.\n" " @G--" GTEST_FLAG_PREFIX_ "also_run_disabled_tests@D\n" " Run all disabled tests too.\n" "\n" "Test Execution:\n" " @G--" GTEST_FLAG_PREFIX_ "repeat=@Y[COUNT]@D\n" " Run the tests repeatedly; use a negative count to repeat forever.\n" " @G--" GTEST_FLAG_PREFIX_ "shuffle@D\n" " Randomize tests' orders on every iteration.\n" " @G--" GTEST_FLAG_PREFIX_ "random_seed=@Y[NUMBER]@D\n" " Random number seed to use for shuffling test orders (between 1 and\n" " 99999, or 0 to use a seed based on the current time).\n" "\n" "Test Output:\n" " @G--" GTEST_FLAG_PREFIX_ "color=@Y(@Gyes@Y|@Gno@Y|@Gauto@Y)@D\n" " Enable/disable colored output. The default is @Gauto@D.\n" " -@G-" GTEST_FLAG_PREFIX_ "print_time=0@D\n" " Don't print the elapsed time of each test.\n" " @G--" GTEST_FLAG_PREFIX_ "output=xml@Y[@G:@YDIRECTORY_PATH@G" GTEST_PATH_SEP_ "@Y|@G:@YFILE_PATH]@D\n" " Generate an XML report in the given directory or with the given file\n" " name. 
@YFILE_PATH@D defaults to @Gtest_details.xml@D.\n" #if GTEST_CAN_STREAM_RESULTS_ " @G--" GTEST_FLAG_PREFIX_ "stream_result_to=@YHOST@G:@YPORT@D\n" " Stream test results to the given server.\n" #endif // GTEST_CAN_STREAM_RESULTS_ "\n" "Assertion Behavior:\n" #if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS " @G--" GTEST_FLAG_PREFIX_ "death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\n" " Set the default death test style.\n" #endif // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS " @G--" GTEST_FLAG_PREFIX_ "break_on_failure@D\n" " Turn assertion failures into debugger break-points.\n" " @G--" GTEST_FLAG_PREFIX_ "throw_on_failure@D\n" " Turn assertion failures into C++ exceptions.\n" " @G--" GTEST_FLAG_PREFIX_ "catch_exceptions=0@D\n" " Do not report exceptions as test failures. Instead, allow them\n" " to crash the program or throw a pop-up (on Windows).\n" "\n" "Except for @G--" GTEST_FLAG_PREFIX_ "list_tests@D, you can alternatively set " "the corresponding\n" "environment variable of a flag (all letters in upper-case). For example, to\n" "disable colored text output, you can either specify @G--" GTEST_FLAG_PREFIX_ "color=no@D or set\n" "the @G" GTEST_FLAG_PREFIX_UPPER_ "COLOR@D environment variable to @Gno@D.\n" "\n" "For more information, please read the " GTEST_NAME_ " documentation at\n" "@G" GTEST_PROJECT_URL_ "@D. If you find a bug in " GTEST_NAME_ "\n" "(not one in your own code or tests), please report it to\n" "@G<" GTEST_DEV_EMAIL_ ">@D.\n"; // Parses the command line for Google Test flags, without initializing // other parts of Google Test. The type parameter CharType can be // instantiated to either char or wchar_t. template <typename CharType> void ParseGoogleTestFlagsOnlyImpl(int* argc, CharType** argv) { for (int i = 1; i < *argc; i++) { const std::string arg_string = StreamableToString(argv[i]); const char* const arg = arg_string.c_str(); using internal::ParseBoolFlag; using internal::ParseInt32Flag; using internal::ParseStringFlag; // Do we see a Google Test flag? if (ParseBoolFlag(arg, kAlsoRunDisabledTestsFlag, &GTEST_FLAG(also_run_disabled_tests)) || ParseBoolFlag(arg, kBreakOnFailureFlag, &GTEST_FLAG(break_on_failure)) || ParseBoolFlag(arg, kCatchExceptionsFlag, &GTEST_FLAG(catch_exceptions)) || ParseStringFlag(arg, kColorFlag, &GTEST_FLAG(color)) || ParseStringFlag(arg, kDeathTestStyleFlag, &GTEST_FLAG(death_test_style)) || ParseBoolFlag(arg, kDeathTestUseFork, &GTEST_FLAG(death_test_use_fork)) || ParseStringFlag(arg, kFilterFlag, &GTEST_FLAG(filter)) || ParseStringFlag(arg, kInternalRunDeathTestFlag, &GTEST_FLAG(internal_run_death_test)) || ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) || ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) || ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) || ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) || ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) || ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) || ParseInt32Flag(arg, kStackTraceDepthFlag, &GTEST_FLAG(stack_trace_depth)) || ParseStringFlag(arg, kStreamResultToFlag, &GTEST_FLAG(stream_result_to)) || ParseBoolFlag(arg, kThrowOnFailureFlag, &GTEST_FLAG(throw_on_failure)) ) { // Yes. Shift the remainder of the argv list left by one. Note // that argv has (*argc + 1) elements, the last one always being // NULL. The following loop moves the trailing NULL element as // well. for (int j = i; j != *argc; j++) { argv[j] = argv[j + 1]; } // Decrements the argument count. (*argc)--; // We also need to decrement the iterator as we just removed // an element.
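// [Editorial sketch, not part of Google Test source] Typical use of the flag
// parsing implemented above, from a test program's main():
//
//   #include "gtest/gtest.h"
//   int main(int argc, char** argv) {
//     ::testing::InitGoogleTest(&argc, argv);  // consumes --gtest_* flags
//     return RUN_ALL_TESTS();
//   }
//
// After the call, recognized Google Test flags have been removed from argv
// and argc has been decremented accordingly, so the program's own
// command-line handling only sees the remaining arguments.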
i--; } else if (arg_string == "--help" || arg_string == "-h" || arg_string == "-?" || arg_string == "/?" || HasGoogleTestFlagPrefix(arg)) { // Both help flag and unrecognized Google Test flags (excluding // internal ones) trigger help display. g_help_flag = true; } } if (g_help_flag) { // We print the help here instead of in RUN_ALL_TESTS(), as the // latter may not be called at all if the user is using Google // Test with another testing framework. PrintColorEncoded(kColorEncodedHelpMessage); } } // Parses the command line for Google Test flags, without initializing // other parts of Google Test. void ParseGoogleTestFlagsOnly(int* argc, char** argv) { ParseGoogleTestFlagsOnlyImpl(argc, argv); } void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv) { ParseGoogleTestFlagsOnlyImpl(argc, argv); } // The internal implementation of InitGoogleTest(). // // The type parameter CharType can be instantiated to either char or // wchar_t. template void InitGoogleTestImpl(int* argc, CharType** argv) { g_init_gtest_count++; // We don't want to run the initialization code twice. if (g_init_gtest_count != 1) return; if (*argc <= 0) return; internal::g_executable_path = internal::StreamableToString(argv[0]); #if GTEST_HAS_DEATH_TEST g_argvs.clear(); for (int i = 0; i != *argc; i++) { g_argvs.push_back(StreamableToString(argv[i])); } #endif // GTEST_HAS_DEATH_TEST ParseGoogleTestFlagsOnly(argc, argv); GetUnitTestImpl()->PostFlagParsingInit(); } } // namespace internal // Initializes Google Test. This must be called before calling // RUN_ALL_TESTS(). In particular, it parses a command line for the // flags that Google Test recognizes. Whenever a Google Test flag is // seen, it is removed from argv, and *argc is decremented. // // No value is returned. Instead, the Google Test flag variables are // updated. // // Calling the function for the second time has no user-visible effect. void InitGoogleTest(int* argc, char** argv) { internal::InitGoogleTestImpl(argc, argv); } // This overloaded version can be used in Windows programs compiled in // UNICODE mode. void InitGoogleTest(int* argc, wchar_t** argv) { internal::InitGoogleTestImpl(argc, argv); } } // namespace testing // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan), vladl@google.com (Vlad Losev) // // This file implements death tests. #if GTEST_HAS_DEATH_TEST # if GTEST_OS_MAC # include # endif // GTEST_OS_MAC # include # include # include # if GTEST_OS_LINUX # include # endif // GTEST_OS_LINUX # include # if GTEST_OS_WINDOWS # include # else # include # include # endif // GTEST_OS_WINDOWS # if GTEST_OS_QNX # include # endif // GTEST_OS_QNX #endif // GTEST_HAS_DEATH_TEST // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #undef GTEST_IMPLEMENTATION_ namespace testing { // Constants. // The default death test style. static const char kDefaultDeathTestStyle[] = "fast"; GTEST_DEFINE_string_( death_test_style, internal::StringFromGTestEnv("death_test_style", kDefaultDeathTestStyle), "Indicates how to run a death test in a forked child process: " "\"threadsafe\" (child process re-executes the test binary " "from the beginning, running only the specific death test) or " "\"fast\" (child process runs the death test immediately " "after forking)."); GTEST_DEFINE_bool_( death_test_use_fork, internal::BoolFromGTestEnv("death_test_use_fork", false), "Instructs to use fork()/_exit() instead of clone() in death tests. " "Ignored and always uses fork() on POSIX systems where clone() is not " "implemented. Useful when running under valgrind or similar tools if " "those do not support clone(). Valgrind 3.3.1 will just fail if " "it sees an unsupported combination of clone() flags. " "It is not recommended to use this flag w/o valgrind though it will " "work in 99% of the cases. Once valgrind is fixed, this flag will " "most likely be removed."); namespace internal { GTEST_DEFINE_string_( internal_run_death_test, "", "Indicates the file, line number, temporal index of " "the single death test to run, and a file descriptor to " "which a success code may be sent, all separated by " "the '|' characters. This flag is specified if and only if the current " "process is a sub-process launched for running a thread-safe " "death test. FOR INTERNAL USE ONLY."); } // namespace internal #if GTEST_HAS_DEATH_TEST namespace internal { // Valid only for fast death tests. Indicates the code is running in the // child process of a fast style death test. static bool g_in_fast_death_test_child = false; // Returns a Boolean value indicating whether the caller is currently // executing in the context of the death test child process. Tools such as // Valgrind heap checkers may need this to modify their behavior in death // tests. IMPORTANT: This is an internal utility. Using it may break the // implementation of death tests. User code MUST NOT use it. 
bool InDeathTestChild() { # if GTEST_OS_WINDOWS // On Windows, death tests are thread-safe regardless of the value of the // death_test_style flag. return !GTEST_FLAG(internal_run_death_test).empty(); # else if (GTEST_FLAG(death_test_style) == "threadsafe") return !GTEST_FLAG(internal_run_death_test).empty(); else return g_in_fast_death_test_child; #endif } } // namespace internal // ExitedWithCode constructor. ExitedWithCode::ExitedWithCode(int exit_code) : exit_code_(exit_code) { } // ExitedWithCode function-call operator. bool ExitedWithCode::operator()(int exit_status) const { # if GTEST_OS_WINDOWS return exit_status == exit_code_; # else return WIFEXITED(exit_status) && WEXITSTATUS(exit_status) == exit_code_; # endif // GTEST_OS_WINDOWS } # if !GTEST_OS_WINDOWS // KilledBySignal constructor. KilledBySignal::KilledBySignal(int signum) : signum_(signum) { } // KilledBySignal function-call operator. bool KilledBySignal::operator()(int exit_status) const { return WIFSIGNALED(exit_status) && WTERMSIG(exit_status) == signum_; } # endif // !GTEST_OS_WINDOWS namespace internal { // Utilities needed for death tests. // Generates a textual description of a given exit code, in the format // specified by wait(2). static std::string ExitSummary(int exit_code) { Message m; # if GTEST_OS_WINDOWS m << "Exited with exit status " << exit_code; # else if (WIFEXITED(exit_code)) { m << "Exited with exit status " << WEXITSTATUS(exit_code); } else if (WIFSIGNALED(exit_code)) { m << "Terminated by signal " << WTERMSIG(exit_code); } # ifdef WCOREDUMP if (WCOREDUMP(exit_code)) { m << " (core dumped)"; } # endif # endif // GTEST_OS_WINDOWS return m.GetString(); } // Returns true if exit_status describes a process that was terminated // by a signal, or exited normally with a nonzero exit code. bool ExitedUnsuccessfully(int exit_status) { return !ExitedWithCode(0)(exit_status); } # if !GTEST_OS_WINDOWS // Generates a textual failure message when a death test finds more than // one thread running, or cannot determine the number of threads, prior // to executing the given statement. It is the responsibility of the // caller not to pass a thread_count of 1. static std::string DeathTestThreadWarning(size_t thread_count) { Message msg; msg << "Death tests use fork(), which is unsafe particularly" << " in a threaded context. For this test, " << GTEST_NAME_ << " "; if (thread_count == 0) msg << "couldn't detect the number of threads."; else msg << "detected " << thread_count << " threads."; return msg.GetString(); } # endif // !GTEST_OS_WINDOWS // Flag characters for reporting a death test that did not die. static const char kDeathTestLived = 'L'; static const char kDeathTestReturned = 'R'; static const char kDeathTestThrew = 'T'; static const char kDeathTestInternalError = 'I'; // An enumeration describing all of the possible ways that a death test can // conclude. DIED means that the process died while executing the test // code; LIVED means that process lived beyond the end of the test code; // RETURNED means that the test statement attempted to execute a return // statement, which is not allowed; THREW means that the test statement // returned control by throwing an exception. IN_PROGRESS means the test // has not yet concluded. // TODO(vladl@google.com): Unify names and possibly values for // AbortReason, DeathTestOutcome, and flag characters above. 
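// [Editorial sketch, not part of Google Test source] The predicates above are
// normally consumed through the public death test assertions, e.g.:
//
//   TEST(MyDeathTest, Examples) {
//     EXPECT_DEATH(DoFatalThing(), "fatal error");             // dies, any way
//     EXPECT_EXIT(exit(1), ::testing::ExitedWithCode(1), "");  // specific code
//   #if !GTEST_OS_WINDOWS
//     EXPECT_EXIT(raise(SIGABRT), ::testing::KilledBySignal(SIGABRT), "");
//   #endif
//   }
//
// DoFatalThing() is a hypothetical function, and <cstdlib>/<csignal> are
// assumed for exit()/raise(); the regular expression argument is matched
// against the child's captured stderr, as implemented further below.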
enum DeathTestOutcome { IN_PROGRESS, DIED, LIVED, RETURNED, THREW }; // Routine for aborting the program which is safe to call from an // exec-style death test child process, in which case the error // message is propagated back to the parent process. Otherwise, the // message is simply printed to stderr. In either case, the program // then exits with status 1. void DeathTestAbort(const std::string& message) { // On a POSIX system, this function may be called from a threadsafe-style // death test child process, which operates on a very small stack. Use // the heap for any additional non-minuscule memory requirements. const InternalRunDeathTestFlag* const flag = GetUnitTestImpl()->internal_run_death_test_flag(); if (flag != NULL) { FILE* parent = posix::FDOpen(flag->write_fd(), "w"); fputc(kDeathTestInternalError, parent); fprintf(parent, "%s", message.c_str()); fflush(parent); _exit(1); } else { fprintf(stderr, "%s", message.c_str()); fflush(stderr); posix::Abort(); } } // A replacement for CHECK that calls DeathTestAbort if the assertion // fails. # define GTEST_DEATH_TEST_CHECK_(expression) \ do { \ if (!::testing::internal::IsTrue(expression)) { \ DeathTestAbort( \ ::std::string("CHECK failed: File ") + __FILE__ + ", line " \ + ::testing::internal::StreamableToString(__LINE__) + ": " \ + #expression); \ } \ } while (::testing::internal::AlwaysFalse()) // This macro is similar to GTEST_DEATH_TEST_CHECK_, but it is meant for // evaluating any system call that fulfills two conditions: it must return // -1 on failure, and set errno to EINTR when it is interrupted and // should be tried again. The macro expands to a loop that repeatedly // evaluates the expression as long as it evaluates to -1 and sets // errno to EINTR. If the expression evaluates to -1 but errno is // something other than EINTR, DeathTestAbort is called. # define GTEST_DEATH_TEST_CHECK_SYSCALL_(expression) \ do { \ int gtest_retval; \ do { \ gtest_retval = (expression); \ } while (gtest_retval == -1 && errno == EINTR); \ if (gtest_retval == -1) { \ DeathTestAbort( \ ::std::string("CHECK failed: File ") + __FILE__ + ", line " \ + ::testing::internal::StreamableToString(__LINE__) + ": " \ + #expression + " != -1"); \ } \ } while (::testing::internal::AlwaysFalse()) // Returns the message describing the last system error in errno. std::string GetLastErrnoDescription() { return errno == 0 ? "" : posix::StrError(errno); } // This is called from a death test parent process to read a failure // message from the death test child process and log it with the FATAL // severity. On Windows, the message is read from a pipe handle. On other // platforms, it is read from a file descriptor. static void FailFromInternalError(int fd) { Message error; char buffer[256]; int num_read; do { while ((num_read = posix::Read(fd, buffer, 255)) > 0) { buffer[num_read] = '\0'; error << buffer; } } while (num_read == -1 && errno == EINTR); if (num_read == 0) { GTEST_LOG_(FATAL) << error.GetString(); } else { const int last_error = errno; GTEST_LOG_(FATAL) << "Error while reading death test internal: " << GetLastErrnoDescription() << " [" << last_error << "]"; } } // Death test constructor. Increments the running death test count // for the current test. DeathTest::DeathTest() { TestInfo* const info = GetUnitTestImpl()->current_test_info(); if (info == NULL) { DeathTestAbort("Cannot run a death test outside of a TEST or " "TEST_F construct"); } } // Creates and returns a death test by dispatching to the current // death test factory. 
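// [Editorial sketch, not part of Google Test source] The
// GTEST_DEATH_TEST_CHECK_SYSCALL_ macro above is the classic retry-on-EINTR
// idiom; outside a macro it would look roughly like this (hypothetical
// helper, POSIX read(2) as the example call):
//
//   #include <cerrno>
//   #include <unistd.h>
//   ssize_t ReadRetryingOnEintr(int fd, void* buf, size_t count) {
//     ssize_t n;
//     do {
//       n = read(fd, buf, count);      // retry only when interrupted
//     } while (n == -1 && errno == EINTR);
//     return n;                        // -1 here is a genuine error
//   }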
bool DeathTest::Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) { return GetUnitTestImpl()->death_test_factory()->Create( statement, regex, file, line, test); } const char* DeathTest::LastMessage() { return last_death_test_message_.c_str(); } void DeathTest::set_last_death_test_message(const std::string& message) { last_death_test_message_ = message; } std::string DeathTest::last_death_test_message_; // Provides cross platform implementation for some death functionality. class DeathTestImpl : public DeathTest { protected: DeathTestImpl(const char* a_statement, const RE* a_regex) : statement_(a_statement), regex_(a_regex), spawned_(false), status_(-1), outcome_(IN_PROGRESS), read_fd_(-1), write_fd_(-1) {} // read_fd_ is expected to be closed and cleared by a derived class. ~DeathTestImpl() { GTEST_DEATH_TEST_CHECK_(read_fd_ == -1); } void Abort(AbortReason reason); virtual bool Passed(bool status_ok); const char* statement() const { return statement_; } const RE* regex() const { return regex_; } bool spawned() const { return spawned_; } void set_spawned(bool is_spawned) { spawned_ = is_spawned; } int status() const { return status_; } void set_status(int a_status) { status_ = a_status; } DeathTestOutcome outcome() const { return outcome_; } void set_outcome(DeathTestOutcome an_outcome) { outcome_ = an_outcome; } int read_fd() const { return read_fd_; } void set_read_fd(int fd) { read_fd_ = fd; } int write_fd() const { return write_fd_; } void set_write_fd(int fd) { write_fd_ = fd; } // Called in the parent process only. Reads the result code of the death // test child process via a pipe, interprets it to set the outcome_ // member, and closes read_fd_. Outputs diagnostics and terminates in // case of unexpected codes. void ReadAndInterpretStatusByte(); private: // The textual content of the code this object is testing. This class // doesn't own this string and should not attempt to delete it. const char* const statement_; // The regular expression which test output must match. DeathTestImpl // doesn't own this object and should not attempt to delete it. const RE* const regex_; // True if the death test child process has been successfully spawned. bool spawned_; // The exit status of the child process. int status_; // How the death test concluded. DeathTestOutcome outcome_; // Descriptor to the read end of the pipe to the child process. It is // always -1 in the child process. The child keeps its write end of the // pipe in write_fd_. int read_fd_; // Descriptor to the child's write end of the pipe to the parent process. // It is always -1 in the parent process. The parent keeps its end of the // pipe in read_fd_. int write_fd_; }; // Called in the parent process only. Reads the result code of the death // test child process via a pipe, interprets it to set the outcome_ // member, and closes read_fd_. Outputs diagnostics and terminates in // case of unexpected codes. void DeathTestImpl::ReadAndInterpretStatusByte() { char flag; int bytes_read; // The read() here blocks until data is available (signifying the // failure of the death test) or until the pipe is closed (signifying // its success), so it's okay to call this in the parent before // the child process has exited. 
do { bytes_read = posix::Read(read_fd(), &flag, 1); } while (bytes_read == -1 && errno == EINTR); if (bytes_read == 0) { set_outcome(DIED); } else if (bytes_read == 1) { switch (flag) { case kDeathTestReturned: set_outcome(RETURNED); break; case kDeathTestThrew: set_outcome(THREW); break; case kDeathTestLived: set_outcome(LIVED); break; case kDeathTestInternalError: FailFromInternalError(read_fd()); // Does not return. break; default: GTEST_LOG_(FATAL) << "Death test child process reported " << "unexpected status byte (" << static_cast(flag) << ")"; } } else { GTEST_LOG_(FATAL) << "Read from death test child process failed: " << GetLastErrnoDescription(); } GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Close(read_fd())); set_read_fd(-1); } // Signals that the death test code which should have exited, didn't. // Should be called only in a death test child process. // Writes a status byte to the child's status file descriptor, then // calls _exit(1). void DeathTestImpl::Abort(AbortReason reason) { // The parent process considers the death test to be a failure if // it finds any data in our pipe. So, here we write a single flag byte // to the pipe, then exit. const char status_ch = reason == TEST_DID_NOT_DIE ? kDeathTestLived : reason == TEST_THREW_EXCEPTION ? kDeathTestThrew : kDeathTestReturned; GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Write(write_fd(), &status_ch, 1)); // We are leaking the descriptor here because on some platforms (i.e., // when built as Windows DLL), destructors of global objects will still // run after calling _exit(). On such systems, write_fd_ will be // indirectly closed from the destructor of UnitTestImpl, causing double // close if it is also closed here. On debug configurations, double close // may assert. As there are no in-process buffers to flush here, we are // relying on the OS to close the descriptor after the process terminates // when the destructors are not run. _exit(1); // Exits w/o any normal exit hooks (we were supposed to crash) } // Returns an indented copy of stderr output for a death test. // This makes distinguishing death test output lines from regular log lines // much easier. static ::std::string FormatDeathTestOutput(const ::std::string& output) { ::std::string ret; for (size_t at = 0; ; ) { const size_t line_end = output.find('\n', at); ret += "[ DEATH ] "; if (line_end == ::std::string::npos) { ret += output.substr(at); break; } ret += output.substr(at, line_end + 1 - at); at = line_end + 1; } return ret; } // Assesses the success or failure of a death test, using both private // members which have previously been set, and one argument: // // Private data members: // outcome: An enumeration describing how the death test // concluded: DIED, LIVED, THREW, or RETURNED. The death test // fails in the latter three cases. // status: The exit status of the child process. On *nix, it is in the // in the format specified by wait(2). On Windows, this is the // value supplied to the ExitProcess() API or a numeric code // of the exception that terminated the program. // regex: A regular expression object to be applied to // the test's captured standard error output; the death test // fails if it does not match. // // Argument: // status_ok: true if exit_status is acceptable in the context of // this particular death test, which fails if it is false // // Returns true iff all of the above conditions are met. Otherwise, the // first failing condition, in the order given above, is the one that is // reported. Also sets the last death test message string. 
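// [Editorial summary, not part of Google Test source] The parent/child
// protocol implemented by ReadAndInterpretStatusByte() and Abort() above is a
// single byte on a pipe: the child writes 'L' (lived), 'R' (returned) or 'T'
// (threw) only when the statement failed to die; 'I' prefixes an
// internal-error message; and if the child really dies the pipe is simply
// closed, so the parent's read() returning 0 bytes is what signals DIED.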
bool DeathTestImpl::Passed(bool status_ok) { if (!spawned()) return false; const std::string error_message = GetCapturedStderr(); bool success = false; Message buffer; buffer << "Death test: " << statement() << "\n"; switch (outcome()) { case LIVED: buffer << " Result: failed to die.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case THREW: buffer << " Result: threw an exception.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case RETURNED: buffer << " Result: illegal return in test statement.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case DIED: if (status_ok) { const bool matched = RE::PartialMatch(error_message.c_str(), *regex()); if (matched) { success = true; } else { buffer << " Result: died but not with expected error.\n" << " Expected: " << regex()->pattern() << "\n" << "Actual msg:\n" << FormatDeathTestOutput(error_message); } } else { buffer << " Result: died but not with expected exit code:\n" << " " << ExitSummary(status()) << "\n" << "Actual msg:\n" << FormatDeathTestOutput(error_message); } break; case IN_PROGRESS: default: GTEST_LOG_(FATAL) << "DeathTest::Passed somehow called before conclusion of test"; } DeathTest::set_last_death_test_message(buffer.GetString()); return success; } # if GTEST_OS_WINDOWS // WindowsDeathTest implements death tests on Windows. Due to the // specifics of starting new processes on Windows, death tests there are // always threadsafe, and Google Test considers the // --gtest_death_test_style=fast setting to be equivalent to // --gtest_death_test_style=threadsafe there. // // A few implementation notes: Like the Linux version, the Windows // implementation uses pipes for child-to-parent communication. But due to // the specifics of pipes on Windows, some extra steps are required: // // 1. The parent creates a communication pipe and stores handles to both // ends of it. // 2. The parent starts the child and provides it with the information // necessary to acquire the handle to the write end of the pipe. // 3. The child acquires the write end of the pipe and signals the parent // using a Windows event. // 4. Now the parent can release the write end of the pipe on its side. If // this is done before step 3, the object's reference count goes down to // 0 and it is destroyed, preventing the child from acquiring it. The // parent now has to release it, or read operations on the read end of // the pipe will not return when the child terminates. // 5. The parent reads child's output through the pipe (outcome code and // any possible error messages) from the pipe, and its stderr and then // determines whether to fail the test. // // Note: to distinguish Win32 API calls from the local method and function // calls, the former are explicitly resolved in the global namespace. // class WindowsDeathTest : public DeathTestImpl { public: WindowsDeathTest(const char* a_statement, const RE* a_regex, const char* file, int line) : DeathTestImpl(a_statement, a_regex), file_(file), line_(line) {} // All of these virtual functions are inherited from DeathTest. virtual int Wait(); virtual TestRole AssumeRole(); private: // The name of the file in which the death test is located. const char* const file_; // The line number on which the death test is located. const int line_; // Handle to the write end of the pipe to the child process. AutoHandle write_handle_; // Child process handle. 
AutoHandle child_handle_; // Event the child process uses to signal the parent that it has // acquired the handle to the write end of the pipe. After seeing this // event the parent can release its own handles to make sure its // ReadFile() calls return when the child terminates. AutoHandle event_handle_; }; // Waits for the child in a death test to exit, returning its exit // status, or 0 if no child process exists. As a side effect, sets the // outcome data member. int WindowsDeathTest::Wait() { if (!spawned()) return 0; // Wait until the child either signals that it has acquired the write end // of the pipe or it dies. const HANDLE wait_handles[2] = { child_handle_.Get(), event_handle_.Get() }; switch (::WaitForMultipleObjects(2, wait_handles, FALSE, // Waits for any of the handles. INFINITE)) { case WAIT_OBJECT_0: case WAIT_OBJECT_0 + 1: break; default: GTEST_DEATH_TEST_CHECK_(false); // Should not get here. } // The child has acquired the write end of the pipe or exited. // We release the handle on our side and continue. write_handle_.Reset(); event_handle_.Reset(); ReadAndInterpretStatusByte(); // Waits for the child process to exit if it haven't already. This // returns immediately if the child has already exited, regardless of // whether previous calls to WaitForMultipleObjects synchronized on this // handle or not. GTEST_DEATH_TEST_CHECK_( WAIT_OBJECT_0 == ::WaitForSingleObject(child_handle_.Get(), INFINITE)); DWORD status_code; GTEST_DEATH_TEST_CHECK_( ::GetExitCodeProcess(child_handle_.Get(), &status_code) != FALSE); child_handle_.Reset(); set_status(static_cast(status_code)); return status(); } // The AssumeRole process for a Windows death test. It creates a child // process with the same executable as the current process to run the // death test. The child process is given the --gtest_filter and // --gtest_internal_run_death_test flags such that it knows to run the // current death test only. DeathTest::TestRole WindowsDeathTest::AssumeRole() { const UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const TestInfo* const info = impl->current_test_info(); const int death_test_index = info->result()->death_test_count(); if (flag != NULL) { // ParseInternalRunDeathTestFlag() has performed all the necessary // processing. set_write_fd(flag->write_fd()); return EXECUTE_TEST; } // WindowsDeathTest uses an anonymous pipe to communicate results of // a death test. SECURITY_ATTRIBUTES handles_are_inheritable = { sizeof(SECURITY_ATTRIBUTES), NULL, TRUE }; HANDLE read_handle, write_handle; GTEST_DEATH_TEST_CHECK_( ::CreatePipe(&read_handle, &write_handle, &handles_are_inheritable, 0) // Default buffer size. != FALSE); set_read_fd(::_open_osfhandle(reinterpret_cast(read_handle), O_RDONLY)); write_handle_.Reset(write_handle); event_handle_.Reset(::CreateEvent( &handles_are_inheritable, TRUE, // The event will automatically reset to non-signaled state. FALSE, // The initial state is non-signalled. NULL)); // The even is unnamed. GTEST_DEATH_TEST_CHECK_(event_handle_.Get() != NULL); const std::string filter_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kFilterFlag + "=" + info->test_case_name() + "." 
+ info->name(); const std::string internal_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kInternalRunDeathTestFlag + "=" + file_ + "|" + StreamableToString(line_) + "|" + StreamableToString(death_test_index) + "|" + StreamableToString(static_cast(::GetCurrentProcessId())) + // size_t has the same width as pointers on both 32-bit and 64-bit // Windows platforms. // See http://msdn.microsoft.com/en-us/library/tcxf1dw6.aspx. "|" + StreamableToString(reinterpret_cast(write_handle)) + "|" + StreamableToString(reinterpret_cast(event_handle_.Get())); char executable_path[_MAX_PATH + 1]; // NOLINT GTEST_DEATH_TEST_CHECK_( _MAX_PATH + 1 != ::GetModuleFileNameA(NULL, executable_path, _MAX_PATH)); std::string command_line = std::string(::GetCommandLineA()) + " " + filter_flag + " \"" + internal_flag + "\""; DeathTest::set_last_death_test_message(""); CaptureStderr(); // Flush the log buffers since the log streams are shared with the child. FlushInfoLog(); // The child process will share the standard handles with the parent. STARTUPINFOA startup_info; memset(&startup_info, 0, sizeof(STARTUPINFO)); startup_info.dwFlags = STARTF_USESTDHANDLES; startup_info.hStdInput = ::GetStdHandle(STD_INPUT_HANDLE); startup_info.hStdOutput = ::GetStdHandle(STD_OUTPUT_HANDLE); startup_info.hStdError = ::GetStdHandle(STD_ERROR_HANDLE); PROCESS_INFORMATION process_info; GTEST_DEATH_TEST_CHECK_(::CreateProcessA( executable_path, const_cast(command_line.c_str()), NULL, // Retuned process handle is not inheritable. NULL, // Retuned thread handle is not inheritable. TRUE, // Child inherits all inheritable handles (for write_handle_). 0x0, // Default creation flags. NULL, // Inherit the parent's environment. UnitTest::GetInstance()->original_working_dir(), &startup_info, &process_info) != FALSE); child_handle_.Reset(process_info.hProcess); ::CloseHandle(process_info.hThread); set_spawned(true); return OVERSEE_TEST; } # else // We are not on Windows. // ForkingDeathTest provides implementations for most of the abstract // methods of the DeathTest interface. Only the AssumeRole method is // left undefined. class ForkingDeathTest : public DeathTestImpl { public: ForkingDeathTest(const char* statement, const RE* regex); // All of these virtual functions are inherited from DeathTest. virtual int Wait(); protected: void set_child_pid(pid_t child_pid) { child_pid_ = child_pid; } private: // PID of child process during death test; 0 in the child process itself. pid_t child_pid_; }; // Constructs a ForkingDeathTest. ForkingDeathTest::ForkingDeathTest(const char* a_statement, const RE* a_regex) : DeathTestImpl(a_statement, a_regex), child_pid_(-1) {} // Waits for the child in a death test to exit, returning its exit // status, or 0 if no child process exists. As a side effect, sets the // outcome data member. int ForkingDeathTest::Wait() { if (!spawned()) return 0; ReadAndInterpretStatusByte(); int status_value; GTEST_DEATH_TEST_CHECK_SYSCALL_(waitpid(child_pid_, &status_value, 0)); set_status(status_value); return status_value; } // A concrete death test class that forks, then immediately runs the test // in the child process. class NoExecDeathTest : public ForkingDeathTest { public: NoExecDeathTest(const char* a_statement, const RE* a_regex) : ForkingDeathTest(a_statement, a_regex) { } virtual TestRole AssumeRole(); }; // The AssumeRole process for a fork-and-run death test. It implements a // straightforward fork, with a simple pipe to transmit the status byte. 
DeathTest::TestRole NoExecDeathTest::AssumeRole() { const size_t thread_count = GetThreadCount(); if (thread_count != 1) { GTEST_LOG_(WARNING) << DeathTestThreadWarning(thread_count); } int pipe_fd[2]; GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1); DeathTest::set_last_death_test_message(""); CaptureStderr(); // When we fork the process below, the log file buffers are copied, but the // file descriptors are shared. We flush all log files here so that closing // the file descriptors in the child process doesn't throw off the // synchronization between descriptors and buffers in the parent process. // This is as close to the fork as possible to avoid a race condition in case // there are multiple threads running before the death test, and another // thread writes to the log file. FlushInfoLog(); const pid_t child_pid = fork(); GTEST_DEATH_TEST_CHECK_(child_pid != -1); set_child_pid(child_pid); if (child_pid == 0) { GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[0])); set_write_fd(pipe_fd[1]); // Redirects all logging to stderr in the child process to prevent // concurrent writes to the log files. We capture stderr in the parent // process and append the child process' output to a log. LogToStderr(); // Event forwarding to the listeners of event listener API mush be shut // down in death test subprocesses. GetUnitTestImpl()->listeners()->SuppressEventForwarding(); g_in_fast_death_test_child = true; return EXECUTE_TEST; } else { GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1])); set_read_fd(pipe_fd[0]); set_spawned(true); return OVERSEE_TEST; } } // A concrete death test class that forks and re-executes the main // program from the beginning, with command-line flags set that cause // only this specific death test to be run. class ExecDeathTest : public ForkingDeathTest { public: ExecDeathTest(const char* a_statement, const RE* a_regex, const char* file, int line) : ForkingDeathTest(a_statement, a_regex), file_(file), line_(line) { } virtual TestRole AssumeRole(); private: static ::std::vector GetArgvsForDeathTestChildProcess() { ::std::vector args = GetInjectableArgvs(); return args; } // The name of the file in which the death test is located. const char* const file_; // The line number on which the death test is located. const int line_; }; // Utility class for accumulating command-line arguments. class Arguments { public: Arguments() { args_.push_back(NULL); } ~Arguments() { for (std::vector::iterator i = args_.begin(); i != args_.end(); ++i) { free(*i); } } void AddArgument(const char* argument) { args_.insert(args_.end() - 1, posix::StrDup(argument)); } template void AddArguments(const ::std::vector& arguments) { for (typename ::std::vector::const_iterator i = arguments.begin(); i != arguments.end(); ++i) { args_.insert(args_.end() - 1, posix::StrDup(i->c_str())); } } char* const* Argv() { return &args_[0]; } private: std::vector args_; }; // A struct that encompasses the arguments to the child process of a // threadsafe-style death test process. struct ExecDeathTestArgs { char* const* argv; // Command-line arguments for the child's call to exec int close_fd; // File descriptor to close; the read end of a pipe }; # if GTEST_OS_MAC inline char** GetEnviron() { // When Google Test is built as a framework on MacOS X, the environ variable // is unavailable. Apple's documentation (man environ) recommends using // _NSGetEnviron() instead. return *_NSGetEnviron(); } # else // Some POSIX platforms expect you to declare environ. extern "C" makes // it reside in the global namespace. 
extern "C" char** environ; inline char** GetEnviron() { return environ; } # endif // GTEST_OS_MAC # if !GTEST_OS_QNX // The main function for a threadsafe-style death test child process. // This function is called in a clone()-ed process and thus must avoid // any potentially unsafe operations like malloc or libc functions. static int ExecDeathTestChildMain(void* child_arg) { ExecDeathTestArgs* const args = static_cast(child_arg); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(args->close_fd)); // We need to execute the test program in the same environment where // it was originally invoked. Therefore we change to the original // working directory first. const char* const original_dir = UnitTest::GetInstance()->original_working_dir(); // We can safely call chdir() as it's a direct system call. if (chdir(original_dir) != 0) { DeathTestAbort(std::string("chdir(\"") + original_dir + "\") failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } // We can safely call execve() as it's a direct system call. We // cannot use execvp() as it's a libc function and thus potentially // unsafe. Since execve() doesn't search the PATH, the user must // invoke the test program via a valid path that contains at least // one path separator. execve(args->argv[0], args->argv, GetEnviron()); DeathTestAbort(std::string("execve(") + args->argv[0] + ", ...) in " + original_dir + " failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } # endif // !GTEST_OS_QNX // Two utility routines that together determine the direction the stack // grows. // This could be accomplished more elegantly by a single recursive // function, but we want to guard against the unlikely possibility of // a smart compiler optimizing the recursion away. // // GTEST_NO_INLINE_ is required to prevent GCC 4.6 from inlining // StackLowerThanAddress into StackGrowsDown, which then doesn't give // correct answer. void StackLowerThanAddress(const void* ptr, bool* result) GTEST_NO_INLINE_; void StackLowerThanAddress(const void* ptr, bool* result) { int dummy; *result = (&dummy < ptr); } bool StackGrowsDown() { int dummy; bool result; StackLowerThanAddress(&dummy, &result); return result; } // Spawns a child process with the same executable as the current process in // a thread-safe manner and instructs it to run the death test. The // implementation uses fork(2) + exec. On systems where clone(2) is // available, it is used instead, being slightly more thread-safe. On QNX, // fork supports only single-threaded environments, so this function uses // spawn(2) there instead. The function dies with an error message if // anything goes wrong. static pid_t ExecDeathTestSpawnChild(char* const* argv, int close_fd) { ExecDeathTestArgs args = { argv, close_fd }; pid_t child_pid = -1; # if GTEST_OS_QNX // Obtains the current directory and sets it to be closed in the child // process. const int cwd_fd = open(".", O_RDONLY); GTEST_DEATH_TEST_CHECK_(cwd_fd != -1); GTEST_DEATH_TEST_CHECK_SYSCALL_(fcntl(cwd_fd, F_SETFD, FD_CLOEXEC)); // We need to execute the test program in the same environment where // it was originally invoked. Therefore we change to the original // working directory first. const char* const original_dir = UnitTest::GetInstance()->original_working_dir(); // We can safely call chdir() as it's a direct system call. if (chdir(original_dir) != 0) { DeathTestAbort(std::string("chdir(\"") + original_dir + "\") failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } int fd_flags; // Set close_fd to be closed after spawn. 
GTEST_DEATH_TEST_CHECK_SYSCALL_(fd_flags = fcntl(close_fd, F_GETFD)); GTEST_DEATH_TEST_CHECK_SYSCALL_(fcntl(close_fd, F_SETFD, fd_flags | FD_CLOEXEC)); struct inheritance inherit = {0}; // spawn is a system call. child_pid = spawn(args.argv[0], 0, NULL, &inherit, args.argv, GetEnviron()); // Restores the current working directory. GTEST_DEATH_TEST_CHECK_(fchdir(cwd_fd) != -1); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(cwd_fd)); # else // GTEST_OS_QNX # if GTEST_OS_LINUX // When a SIGPROF signal is received while fork() or clone() are executing, // the process may hang. To avoid this, we ignore SIGPROF here and re-enable // it after the call to fork()/clone() is complete. struct sigaction saved_sigprof_action; struct sigaction ignore_sigprof_action; memset(&ignore_sigprof_action, 0, sizeof(ignore_sigprof_action)); sigemptyset(&ignore_sigprof_action.sa_mask); ignore_sigprof_action.sa_handler = SIG_IGN; GTEST_DEATH_TEST_CHECK_SYSCALL_(sigaction( SIGPROF, &ignore_sigprof_action, &saved_sigprof_action)); # endif // GTEST_OS_LINUX # if GTEST_HAS_CLONE const bool use_fork = GTEST_FLAG(death_test_use_fork); if (!use_fork) { static const bool stack_grows_down = StackGrowsDown(); const size_t stack_size = getpagesize(); // MMAP_ANONYMOUS is not defined on Mac, so we use MAP_ANON instead. void* const stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0); GTEST_DEATH_TEST_CHECK_(stack != MAP_FAILED); // Maximum stack alignment in bytes: For a downward-growing stack, this // amount is subtracted from size of the stack space to get an address // that is within the stack space and is aligned on all systems we care // about. As far as I know there is no ABI with stack alignment greater // than 64. We assume stack and stack_size already have alignment of // kMaxStackAlignment. const size_t kMaxStackAlignment = 64; void* const stack_top = static_cast(stack) + (stack_grows_down ? stack_size - kMaxStackAlignment : 0); GTEST_DEATH_TEST_CHECK_(stack_size > kMaxStackAlignment && reinterpret_cast(stack_top) % kMaxStackAlignment == 0); child_pid = clone(&ExecDeathTestChildMain, stack_top, SIGCHLD, &args); GTEST_DEATH_TEST_CHECK_(munmap(stack, stack_size) != -1); } # else const bool use_fork = true; # endif // GTEST_HAS_CLONE if (use_fork && (child_pid = fork()) == 0) { ExecDeathTestChildMain(&args); _exit(0); } # endif // GTEST_OS_QNX # if GTEST_OS_LINUX GTEST_DEATH_TEST_CHECK_SYSCALL_( sigaction(SIGPROF, &saved_sigprof_action, NULL)); # endif // GTEST_OS_LINUX GTEST_DEATH_TEST_CHECK_(child_pid != -1); return child_pid; } // The AssumeRole process for a fork-and-exec death test. It re-executes the // main program from the beginning, setting the --gtest_filter // and --gtest_internal_run_death_test flags to cause only the current // death test to be re-run. 
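// [Editorial sketch, not part of Google Test source] Concretely, the child
// spawned below is the same test binary re-run with two extra flags, e.g.
// (binary name and numbers are made up):
//
//   ./my_test --gtest_filter=MyDeathTest.Examples \
//             --gtest_internal_run_death_test=my_test.cc|42|1|7
//
// i.e. file|line|death-test-index|write-end-fd on POSIX (the Windows variant
// above passes the parent process id and the pipe/event handles instead of
// the fd). ParseInternalRunDeathTestFlag() further down splits these
// '|'-separated fields back apart in the child.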
DeathTest::TestRole ExecDeathTest::AssumeRole() { const UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const TestInfo* const info = impl->current_test_info(); const int death_test_index = info->result()->death_test_count(); if (flag != NULL) { set_write_fd(flag->write_fd()); return EXECUTE_TEST; } int pipe_fd[2]; GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1); // Clear the close-on-exec flag on the write end of the pipe, lest // it be closed when the child process does an exec: GTEST_DEATH_TEST_CHECK_(fcntl(pipe_fd[1], F_SETFD, 0) != -1); const std::string filter_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kFilterFlag + "=" + info->test_case_name() + "." + info->name(); const std::string internal_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kInternalRunDeathTestFlag + "=" + file_ + "|" + StreamableToString(line_) + "|" + StreamableToString(death_test_index) + "|" + StreamableToString(pipe_fd[1]); Arguments args; args.AddArguments(GetArgvsForDeathTestChildProcess()); args.AddArgument(filter_flag.c_str()); args.AddArgument(internal_flag.c_str()); DeathTest::set_last_death_test_message(""); CaptureStderr(); // See the comment in NoExecDeathTest::AssumeRole for why the next line // is necessary. FlushInfoLog(); const pid_t child_pid = ExecDeathTestSpawnChild(args.Argv(), pipe_fd[0]); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1])); set_child_pid(child_pid); set_read_fd(pipe_fd[0]); set_spawned(true); return OVERSEE_TEST; } # endif // !GTEST_OS_WINDOWS // Creates a concrete DeathTest-derived class that depends on the // --gtest_death_test_style flag, and sets the pointer pointed to // by the "test" argument to its address. If the test should be // skipped, sets that pointer to NULL. Returns true, unless the // flag is set to an invalid value. bool DefaultDeathTestFactory::Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) { UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const int death_test_index = impl->current_test_info() ->increment_death_test_count(); if (flag != NULL) { if (death_test_index > flag->index()) { DeathTest::set_last_death_test_message( "Death test count (" + StreamableToString(death_test_index) + ") somehow exceeded expected maximum (" + StreamableToString(flag->index()) + ")"); return false; } if (!(flag->file() == file && flag->line() == line && flag->index() == death_test_index)) { *test = NULL; return true; } } # if GTEST_OS_WINDOWS if (GTEST_FLAG(death_test_style) == "threadsafe" || GTEST_FLAG(death_test_style) == "fast") { *test = new WindowsDeathTest(statement, regex, file, line); } # else if (GTEST_FLAG(death_test_style) == "threadsafe") { *test = new ExecDeathTest(statement, regex, file, line); } else if (GTEST_FLAG(death_test_style) == "fast") { *test = new NoExecDeathTest(statement, regex); } # endif // GTEST_OS_WINDOWS else { // NOLINT - this is more readable than unbalanced brackets inside #if. DeathTest::set_last_death_test_message( "Unknown death test style \"" + GTEST_FLAG(death_test_style) + "\" encountered"); return false; } return true; } // Splits a given string on a given delimiter, populating a given // vector with the fields. GTEST_HAS_DEATH_TEST implies that we have // ::std::string, so we can use it here. 
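// [Editorial note, not part of Google Test source] The factory above is where
// --gtest_death_test_style takes effect: "fast" selects NoExecDeathTest (fork
// only, the child runs the statement immediately) and "threadsafe" selects
// ExecDeathTest (fork + exec, the whole binary is re-run as sketched above);
// on Windows both styles map to WindowsDeathTest. A test program can also
// choose the style programmatically, e.g.:
//
//   ::testing::FLAGS_gtest_death_test_style = "threadsafe";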
static void SplitString(const ::std::string& str, char delimiter, ::std::vector< ::std::string>* dest) { ::std::vector< ::std::string> parsed; ::std::string::size_type pos = 0; while (::testing::internal::AlwaysTrue()) { const ::std::string::size_type colon = str.find(delimiter, pos); if (colon == ::std::string::npos) { parsed.push_back(str.substr(pos)); break; } else { parsed.push_back(str.substr(pos, colon - pos)); pos = colon + 1; } } dest->swap(parsed); } # if GTEST_OS_WINDOWS // Recreates the pipe and event handles from the provided parameters, // signals the event, and returns a file descriptor wrapped around the pipe // handle. This function is called in the child process only. int GetStatusFileDescriptor(unsigned int parent_process_id, size_t write_handle_as_size_t, size_t event_handle_as_size_t) { AutoHandle parent_process_handle(::OpenProcess(PROCESS_DUP_HANDLE, FALSE, // Non-inheritable. parent_process_id)); if (parent_process_handle.Get() == INVALID_HANDLE_VALUE) { DeathTestAbort("Unable to open parent process " + StreamableToString(parent_process_id)); } // TODO(vladl@google.com): Replace the following check with a // compile-time assertion when available. GTEST_CHECK_(sizeof(HANDLE) <= sizeof(size_t)); const HANDLE write_handle = reinterpret_cast(write_handle_as_size_t); HANDLE dup_write_handle; // The newly initialized handle is accessible only in in the parent // process. To obtain one accessible within the child, we need to use // DuplicateHandle. if (!::DuplicateHandle(parent_process_handle.Get(), write_handle, ::GetCurrentProcess(), &dup_write_handle, 0x0, // Requested privileges ignored since // DUPLICATE_SAME_ACCESS is used. FALSE, // Request non-inheritable handler. DUPLICATE_SAME_ACCESS)) { DeathTestAbort("Unable to duplicate the pipe handle " + StreamableToString(write_handle_as_size_t) + " from the parent process " + StreamableToString(parent_process_id)); } const HANDLE event_handle = reinterpret_cast(event_handle_as_size_t); HANDLE dup_event_handle; if (!::DuplicateHandle(parent_process_handle.Get(), event_handle, ::GetCurrentProcess(), &dup_event_handle, 0x0, FALSE, DUPLICATE_SAME_ACCESS)) { DeathTestAbort("Unable to duplicate the event handle " + StreamableToString(event_handle_as_size_t) + " from the parent process " + StreamableToString(parent_process_id)); } const int write_fd = ::_open_osfhandle(reinterpret_cast(dup_write_handle), O_APPEND); if (write_fd == -1) { DeathTestAbort("Unable to convert pipe handle " + StreamableToString(write_handle_as_size_t) + " to a file descriptor"); } // Signals the parent that the write end of the pipe has been acquired // so the parent can release its own write end. ::SetEvent(dup_event_handle); return write_fd; } # endif // GTEST_OS_WINDOWS // Returns a newly created InternalRunDeathTestFlag object with fields // initialized from the GTEST_FLAG(internal_run_death_test) flag if // the flag is specified; otherwise returns NULL. InternalRunDeathTestFlag* ParseInternalRunDeathTestFlag() { if (GTEST_FLAG(internal_run_death_test) == "") return NULL; // GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we // can use it here. 
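  // For illustration (hypothetical values), the flag parsed here is the
  // pipe-separated string assembled by the parent process, e.g.
  //   "my_test.cc|123|0|5"              file|line|index|write_fd (POSIX)
  //   "my_test.cc|123|0|4088|772|776"   file|line|index|parent_pid|
  //                                     write_handle|event_handle (Windows)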
int line = -1; int index = -1; ::std::vector< ::std::string> fields; SplitString(GTEST_FLAG(internal_run_death_test).c_str(), '|', &fields); int write_fd = -1; # if GTEST_OS_WINDOWS unsigned int parent_process_id = 0; size_t write_handle_as_size_t = 0; size_t event_handle_as_size_t = 0; if (fields.size() != 6 || !ParseNaturalNumber(fields[1], &line) || !ParseNaturalNumber(fields[2], &index) || !ParseNaturalNumber(fields[3], &parent_process_id) || !ParseNaturalNumber(fields[4], &write_handle_as_size_t) || !ParseNaturalNumber(fields[5], &event_handle_as_size_t)) { DeathTestAbort("Bad --gtest_internal_run_death_test flag: " + GTEST_FLAG(internal_run_death_test)); } write_fd = GetStatusFileDescriptor(parent_process_id, write_handle_as_size_t, event_handle_as_size_t); # else if (fields.size() != 4 || !ParseNaturalNumber(fields[1], &line) || !ParseNaturalNumber(fields[2], &index) || !ParseNaturalNumber(fields[3], &write_fd)) { DeathTestAbort("Bad --gtest_internal_run_death_test flag: " + GTEST_FLAG(internal_run_death_test)); } # endif // GTEST_OS_WINDOWS return new InternalRunDeathTestFlag(fields[0], line, index, write_fd); } } // namespace internal #endif // GTEST_HAS_DEATH_TEST } // namespace testing // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: keith.ray@gmail.com (Keith Ray) #include #if GTEST_OS_WINDOWS_MOBILE # include #elif GTEST_OS_WINDOWS # include # include #elif GTEST_OS_SYMBIAN // Symbian OpenC has PATH_MAX in sys/syslimits.h # include #else # include # include // Some Linux distributions define PATH_MAX here. #endif // GTEST_OS_WINDOWS_MOBILE #if GTEST_OS_WINDOWS # define GTEST_PATH_MAX_ _MAX_PATH #elif defined(PATH_MAX) # define GTEST_PATH_MAX_ PATH_MAX #elif defined(_XOPEN_PATH_MAX) # define GTEST_PATH_MAX_ _XOPEN_PATH_MAX #else # define GTEST_PATH_MAX_ _POSIX_PATH_MAX #endif // GTEST_OS_WINDOWS namespace testing { namespace internal { #if GTEST_OS_WINDOWS // On Windows, '\\' is the standard path separator, but many tools and the // Windows API also accept '/' as an alternate path separator. 
Unless otherwise // noted, a file path can contain either kind of path separators, or a mixture // of them. const char kPathSeparator = '\\'; const char kAlternatePathSeparator = '/'; const char kPathSeparatorString[] = "\\"; const char kAlternatePathSeparatorString[] = "/"; # if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't have a current directory. You should not use // the current directory in tests on Windows CE, but this at least // provides a reasonable fallback. const char kCurrentDirectoryString[] = "\\"; // Windows CE doesn't define INVALID_FILE_ATTRIBUTES const DWORD kInvalidFileAttributes = 0xffffffff; # else const char kCurrentDirectoryString[] = ".\\"; # endif // GTEST_OS_WINDOWS_MOBILE #else const char kPathSeparator = '/'; const char kPathSeparatorString[] = "/"; const char kCurrentDirectoryString[] = "./"; #endif // GTEST_OS_WINDOWS // Returns whether the given character is a valid path separator. static bool IsPathSeparator(char c) { #if GTEST_HAS_ALT_PATH_SEP_ return (c == kPathSeparator) || (c == kAlternatePathSeparator); #else return c == kPathSeparator; #endif } // Returns the current working directory, or "" if unsuccessful. FilePath FilePath::GetCurrentDir() { #if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't have a current directory, so we just return // something reasonable. return FilePath(kCurrentDirectoryString); #elif GTEST_OS_WINDOWS char cwd[GTEST_PATH_MAX_ + 1] = { '\0' }; return FilePath(_getcwd(cwd, sizeof(cwd)) == NULL ? "" : cwd); #else char cwd[GTEST_PATH_MAX_ + 1] = { '\0' }; return FilePath(getcwd(cwd, sizeof(cwd)) == NULL ? "" : cwd); #endif // GTEST_OS_WINDOWS_MOBILE } // Returns a copy of the FilePath with the case-insensitive extension removed. // Example: FilePath("dir/file.exe").RemoveExtension("EXE") returns // FilePath("dir/file"). If a case-insensitive extension is not // found, returns a copy of the original FilePath. FilePath FilePath::RemoveExtension(const char* extension) const { const std::string dot_extension = std::string(".") + extension; if (String::EndsWithCaseInsensitive(pathname_, dot_extension)) { return FilePath(pathname_.substr( 0, pathname_.length() - dot_extension.length())); } return *this; } // Returns a pointer to the last occurence of a valid path separator in // the FilePath. On Windows, for example, both '/' and '\' are valid path // separators. Returns NULL if no path separator was found. const char* FilePath::FindLastPathSeparator() const { const char* const last_sep = strrchr(c_str(), kPathSeparator); #if GTEST_HAS_ALT_PATH_SEP_ const char* const last_alt_sep = strrchr(c_str(), kAlternatePathSeparator); // Comparing two pointers of which only one is NULL is undefined. if (last_alt_sep != NULL && (last_sep == NULL || last_alt_sep > last_sep)) { return last_alt_sep; } #endif return last_sep; } // Returns a copy of the FilePath with the directory part removed. // Example: FilePath("path/to/file").RemoveDirectoryName() returns // FilePath("file"). If there is no directory part ("just_a_file"), it returns // the FilePath unmodified. If there is no file part ("just_a_dir/") it // returns an empty FilePath (""). // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath FilePath::RemoveDirectoryName() const { const char* const last_sep = FindLastPathSeparator(); return last_sep ? FilePath(last_sep + 1) : *this; } // RemoveFileName returns the directory path with the filename removed. // Example: FilePath("path/to/file").RemoveFileName() returns "path/to/". 
// If the FilePath is "a_file" or "/a_file", RemoveFileName returns // FilePath("./") or, on Windows, FilePath(".\\"). If the filepath does // not have a file, like "just/a/dir/", it returns the FilePath unmodified. // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath FilePath::RemoveFileName() const { const char* const last_sep = FindLastPathSeparator(); std::string dir; if (last_sep) { dir = std::string(c_str(), last_sep + 1 - c_str()); } else { dir = kCurrentDirectoryString; } return FilePath(dir); } // Helper functions for naming files in a directory for xml output. // Given directory = "dir", base_name = "test", number = 0, // extension = "xml", returns "dir/test.xml". If number is greater // than zero (e.g., 12), returns "dir/test_12.xml". // On Windows platform, uses \ as the separator rather than /. FilePath FilePath::MakeFileName(const FilePath& directory, const FilePath& base_name, int number, const char* extension) { std::string file; if (number == 0) { file = base_name.string() + "." + extension; } else { file = base_name.string() + "_" + StreamableToString(number) + "." + extension; } return ConcatPaths(directory, FilePath(file)); } // Given directory = "dir", relative_path = "test.xml", returns "dir/test.xml". // On Windows, uses \ as the separator rather than /. FilePath FilePath::ConcatPaths(const FilePath& directory, const FilePath& relative_path) { if (directory.IsEmpty()) return relative_path; const FilePath dir(directory.RemoveTrailingPathSeparator()); return FilePath(dir.string() + kPathSeparator + relative_path.string()); } // Returns true if pathname describes something findable in the file-system, // either a file, directory, or whatever. bool FilePath::FileOrDirectoryExists() const { #if GTEST_OS_WINDOWS_MOBILE LPCWSTR unicode = String::AnsiToUtf16(pathname_.c_str()); const DWORD attributes = GetFileAttributes(unicode); delete [] unicode; return attributes != kInvalidFileAttributes; #else posix::StatStruct file_stat; return posix::Stat(pathname_.c_str(), &file_stat) == 0; #endif // GTEST_OS_WINDOWS_MOBILE } // Returns true if pathname describes a directory in the file-system // that exists. bool FilePath::DirectoryExists() const { bool result = false; #if GTEST_OS_WINDOWS // Don't strip off trailing separator if path is a root directory on // Windows (like "C:\\"). const FilePath& path(IsRootDirectory() ? *this : RemoveTrailingPathSeparator()); #else const FilePath& path(*this); #endif #if GTEST_OS_WINDOWS_MOBILE LPCWSTR unicode = String::AnsiToUtf16(path.c_str()); const DWORD attributes = GetFileAttributes(unicode); delete [] unicode; if ((attributes != kInvalidFileAttributes) && (attributes & FILE_ATTRIBUTE_DIRECTORY)) { result = true; } #else posix::StatStruct file_stat; result = posix::Stat(path.c_str(), &file_stat) == 0 && posix::IsDir(file_stat); #endif // GTEST_OS_WINDOWS_MOBILE return result; } // Returns true if pathname describes a root directory. (Windows has one // root directory per disk drive.) bool FilePath::IsRootDirectory() const { #if GTEST_OS_WINDOWS // TODO(wan@google.com): on Windows a network share like // \\server\share can be a root directory, although it cannot be the // current directory. Handle this properly. return pathname_.length() == 3 && IsAbsolutePath(); #else return pathname_.length() == 1 && IsPathSeparator(pathname_.c_str()[0]); #endif } // Returns true if pathname describes an absolute path. 
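// For illustration, with the implementation below:
//   FilePath("/usr/bin").IsAbsolutePath()   // true on POSIX systems
//   FilePath("C:\\temp").IsAbsolutePath()   // true on Windows
//   FilePath("foo/bar").IsAbsolutePath()    // false on both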
bool FilePath::IsAbsolutePath() const { const char* const name = pathname_.c_str(); #if GTEST_OS_WINDOWS return pathname_.length() >= 3 && ((name[0] >= 'a' && name[0] <= 'z') || (name[0] >= 'A' && name[0] <= 'Z')) && name[1] == ':' && IsPathSeparator(name[2]); #else return IsPathSeparator(name[0]); #endif } // Returns a pathname for a file that does not currently exist. The pathname // will be directory/base_name.extension or // directory/base_name_.extension if directory/base_name.extension // already exists. The number will be incremented until a pathname is found // that does not already exist. // Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'. // There could be a race condition if two or more processes are calling this // function at the same time -- they could both pick the same filename. FilePath FilePath::GenerateUniqueFileName(const FilePath& directory, const FilePath& base_name, const char* extension) { FilePath full_pathname; int number = 0; do { full_pathname.Set(MakeFileName(directory, base_name, number++, extension)); } while (full_pathname.FileOrDirectoryExists()); return full_pathname; } // Returns true if FilePath ends with a path separator, which indicates that // it is intended to represent a directory. Returns false otherwise. // This does NOT check that a directory (or file) actually exists. bool FilePath::IsDirectory() const { return !pathname_.empty() && IsPathSeparator(pathname_.c_str()[pathname_.length() - 1]); } // Create directories so that path exists. Returns true if successful or if // the directories already exist; returns false if unable to create directories // for any reason. bool FilePath::CreateDirectoriesRecursively() const { if (!this->IsDirectory()) { return false; } if (pathname_.length() == 0 || this->DirectoryExists()) { return true; } const FilePath parent(this->RemoveTrailingPathSeparator().RemoveFileName()); return parent.CreateDirectoriesRecursively() && this->CreateFolder(); } // Create the directory so that path exists. Returns true if successful or // if the directory already exists; returns false if unable to create the // directory for any reason, including if the parent directory does not // exist. Not named "CreateDirectory" because that's a macro on Windows. bool FilePath::CreateFolder() const { #if GTEST_OS_WINDOWS_MOBILE FilePath removed_sep(this->RemoveTrailingPathSeparator()); LPCWSTR unicode = String::AnsiToUtf16(removed_sep.c_str()); int result = CreateDirectory(unicode, NULL) ? 0 : -1; delete [] unicode; #elif GTEST_OS_WINDOWS int result = _mkdir(pathname_.c_str()); #else int result = mkdir(pathname_.c_str(), 0777); #endif // GTEST_OS_WINDOWS_MOBILE if (result == -1) { return this->DirectoryExists(); // An error is OK if the directory exists. } return true; // No error. } // If input name has a trailing separator character, remove it and return the // name, otherwise return the name string unmodified. // On Windows platform, uses \ as the separator, other platforms use /. FilePath FilePath::RemoveTrailingPathSeparator() const { return IsDirectory() ? FilePath(pathname_.substr(0, pathname_.length() - 1)) : *this; } // Removes any redundant separators that might be in the pathname. // For example, "bar///foo" becomes "bar/foo". Does not eliminate other // redundancies that might be in a pathname involving "." or "..". // TODO(wan@google.com): handle Windows network shares (e.g. \\server\share). 
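// Sketch of the additional effect when GTEST_HAS_ALT_PATH_SEP_ is defined
// (Windows builds, where '/' is accepted as an alternate separator): a run of
// mixed separators such as "bar/\foo" collapses to a single '\', so the
// stored pathname becomes "bar\foo".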
void FilePath::Normalize() { if (pathname_.c_str() == NULL) { pathname_ = ""; return; } const char* src = pathname_.c_str(); char* const dest = new char[pathname_.length() + 1]; char* dest_ptr = dest; memset(dest_ptr, 0, pathname_.length() + 1); while (*src != '\0') { *dest_ptr = *src; if (!IsPathSeparator(*src)) { src++; } else { #if GTEST_HAS_ALT_PATH_SEP_ if (*dest_ptr == kAlternatePathSeparator) { *dest_ptr = kPathSeparator; } #endif while (IsPathSeparator(*src)) src++; } dest_ptr++; } *dest_ptr = '\0'; pathname_ = dest; delete[] dest; } } // namespace internal } // namespace testing // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include #include #include #include #if GTEST_OS_WINDOWS_MOBILE # include // For TerminateProcess() #elif GTEST_OS_WINDOWS # include # include #else # include #endif // GTEST_OS_WINDOWS_MOBILE #if GTEST_OS_MAC # include # include # include #endif // GTEST_OS_MAC #if GTEST_OS_QNX # include # include #endif // GTEST_OS_QNX // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #undef GTEST_IMPLEMENTATION_ namespace testing { namespace internal { #if defined(_MSC_VER) || defined(__BORLANDC__) // MSVC and C++Builder do not provide a definition of STDERR_FILENO. const int kStdOutFileno = 1; const int kStdErrFileno = 2; #else const int kStdOutFileno = STDOUT_FILENO; const int kStdErrFileno = STDERR_FILENO; #endif // _MSC_VER #if GTEST_OS_MAC // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. 
size_t GetThreadCount() { const task_t task = mach_task_self(); mach_msg_type_number_t thread_count; thread_act_array_t thread_list; const kern_return_t status = task_threads(task, &thread_list, &thread_count); if (status == KERN_SUCCESS) { // task_threads allocates resources in thread_list and we need to free them // to avoid leaks. vm_deallocate(task, reinterpret_cast<vm_address_t>(thread_list), sizeof(thread_t) * thread_count); return static_cast<size_t>(thread_count); } else { return 0; } } #elif GTEST_OS_QNX // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. size_t GetThreadCount() { const int fd = open("/proc/self/as", O_RDONLY); if (fd < 0) { return 0; } procfs_info process_info; const int status = devctl(fd, DCMD_PROC_INFO, &process_info, sizeof(process_info), NULL); close(fd); if (status == EOK) { return static_cast<size_t>(process_info.num_threads); } else { return 0; } } #else size_t GetThreadCount() { // There's no portable way to detect the number of threads, so we just // return 0 to indicate that we cannot detect it. return 0; } #endif // GTEST_OS_MAC #if GTEST_USES_POSIX_RE // Implements RE. Currently only needed for death tests. RE::~RE() { if (is_valid_) { // regfree'ing an invalid regex might crash because the content // of the regex is undefined. Since the regexes are essentially // the same, one cannot be valid (or invalid) without the other // being so too. regfree(&partial_regex_); regfree(&full_regex_); } free(const_cast<char*>(pattern_)); } // Returns true iff regular expression re matches the entire str. bool RE::FullMatch(const char* str, const RE& re) { if (!re.is_valid_) return false; regmatch_t match; return regexec(&re.full_regex_, str, 1, &match, 0) == 0; } // Returns true iff regular expression re matches a substring of str // (including str itself). bool RE::PartialMatch(const char* str, const RE& re) { if (!re.is_valid_) return false; regmatch_t match; return regexec(&re.partial_regex_, str, 1, &match, 0) == 0; } // Initializes an RE from its string representation. void RE::Init(const char* regex) { pattern_ = posix::StrDup(regex); // Reserves enough bytes to hold the regular expression used for a // full match. const size_t full_regex_len = strlen(regex) + 10; char* const full_pattern = new char[full_regex_len]; snprintf(full_pattern, full_regex_len, "^(%s)$", regex); is_valid_ = regcomp(&full_regex_, full_pattern, REG_EXTENDED) == 0; // We want to call regcomp(&partial_regex_, ...) even if the // previous expression returns false. Otherwise partial_regex_ may // not be properly initialized and may cause trouble when it's // freed. // // Some implementations of POSIX regex (e.g. on at least some // versions of Cygwin) don't accept the empty string as a valid // regex. We change it to an equivalent form "()" to be safe. if (is_valid_) { const char* const partial_regex = (*regex == '\0') ? "()" : regex; is_valid_ = regcomp(&partial_regex_, partial_regex, REG_EXTENDED) == 0; } EXPECT_TRUE(is_valid_) << "Regular expression \"" << regex << "\" is not a valid POSIX Extended regular expression."; delete[] full_pattern; } #elif GTEST_USES_SIMPLE_RE // Returns true iff ch appears anywhere in str (excluding the // terminating '\0' character). bool IsInSet(char ch, const char* str) { return ch != '\0' && strchr(str, ch) != NULL; } // Returns true iff ch belongs to the given classification. Unlike // similar functions in <ctype.h>, these aren't affected by the // current locale.
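// The helpers below implement Google Test's own "simple" regular expression
// dialect, used when a POSIX regex library is not available. As a rough
// sketch of what it accepts: literal characters, '.', the escapes \d \D \w
// \W \s \S (plus \f \n \r \t \v and escaped punctuation such as \.), the
// repeats '?', '*' and '+', and the anchors '^' and '$'; grouping,
// alternation and character classes ("()[]{}|") are rejected by
// ValidateRegex() further down.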
bool IsAsciiDigit(char ch) { return '0' <= ch && ch <= '9'; } bool IsAsciiPunct(char ch) { return IsInSet(ch, "^-!\"#$%&'()*+,./:;<=>?@[\\]_`{|}~"); } bool IsRepeat(char ch) { return IsInSet(ch, "?*+"); } bool IsAsciiWhiteSpace(char ch) { return IsInSet(ch, " \f\n\r\t\v"); } bool IsAsciiWordChar(char ch) { return ('a' <= ch && ch <= 'z') || ('A' <= ch && ch <= 'Z') || ('0' <= ch && ch <= '9') || ch == '_'; } // Returns true iff "\\c" is a supported escape sequence. bool IsValidEscape(char c) { return (IsAsciiPunct(c) || IsInSet(c, "dDfnrsStvwW")); } // Returns true iff the given atom (specified by escaped and pattern) // matches ch. The result is undefined if the atom is invalid. bool AtomMatchesChar(bool escaped, char pattern_char, char ch) { if (escaped) { // "\\p" where p is pattern_char. switch (pattern_char) { case 'd': return IsAsciiDigit(ch); case 'D': return !IsAsciiDigit(ch); case 'f': return ch == '\f'; case 'n': return ch == '\n'; case 'r': return ch == '\r'; case 's': return IsAsciiWhiteSpace(ch); case 'S': return !IsAsciiWhiteSpace(ch); case 't': return ch == '\t'; case 'v': return ch == '\v'; case 'w': return IsAsciiWordChar(ch); case 'W': return !IsAsciiWordChar(ch); } return IsAsciiPunct(pattern_char) && pattern_char == ch; } return (pattern_char == '.' && ch != '\n') || pattern_char == ch; } // Helper function used by ValidateRegex() to format error messages. std::string FormatRegexSyntaxError(const char* regex, int index) { return (Message() << "Syntax error at index " << index << " in simple regular expression \"" << regex << "\": ").GetString(); } // Generates non-fatal failures and returns false if regex is invalid; // otherwise returns true. bool ValidateRegex(const char* regex) { if (regex == NULL) { // TODO(wan@google.com): fix the source file location in the // assertion failures to match where the regex is used in user // code. ADD_FAILURE() << "NULL is not a valid simple regular expression."; return false; } bool is_valid = true; // True iff ?, *, or + can follow the previous atom. bool prev_repeatable = false; for (int i = 0; regex[i]; i++) { if (regex[i] == '\\') { // An escape sequence i++; if (regex[i] == '\0') { ADD_FAILURE() << FormatRegexSyntaxError(regex, i - 1) << "'\\' cannot appear at the end."; return false; } if (!IsValidEscape(regex[i])) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i - 1) << "invalid escape sequence \"\\" << regex[i] << "\"."; is_valid = false; } prev_repeatable = true; } else { // Not an escape sequence. const char ch = regex[i]; if (ch == '^' && i > 0) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'^' can only appear at the beginning."; is_valid = false; } else if (ch == '$' && regex[i + 1] != '\0') { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'$' can only appear at the end."; is_valid = false; } else if (IsInSet(ch, "()[]{}|")) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'" << ch << "' is unsupported."; is_valid = false; } else if (IsRepeat(ch) && !prev_repeatable) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'" << ch << "' can only follow a repeatable token."; is_valid = false; } prev_repeatable = !IsInSet(ch, "^$?*+"); } } return is_valid; } // Matches a repeated regex atom followed by a valid simple regular // expression. The regex atom is defined as c if escaped is false, // or \c otherwise. repeat is the repetition meta character (?, *, // or +). 
The behavior is undefined if str contains too many // characters to be indexable by size_t, in which case the test will // probably time out anyway. We are fine with this limitation as // std::string has it too. bool MatchRepetitionAndRegexAtHead( bool escaped, char c, char repeat, const char* regex, const char* str) { const size_t min_count = (repeat == '+') ? 1 : 0; const size_t max_count = (repeat == '?') ? 1 : static_cast<size_t>(-1) - 1; // We cannot call numeric_limits<size_t>::max() as it conflicts with the // max() macro on Windows. for (size_t i = 0; i <= max_count; ++i) { // We know that the atom matches each of the first i characters in str. if (i >= min_count && MatchRegexAtHead(regex, str + i)) { // We have enough matches at the head, and the tail matches too. // Since we only care about *whether* the pattern matches str // (as opposed to *how* it matches), there is no need to find a // greedy match. return true; } if (str[i] == '\0' || !AtomMatchesChar(escaped, c, str[i])) return false; } return false; } // Returns true iff regex matches a prefix of str. regex must be a // valid simple regular expression and not start with "^", or the // result is undefined. bool MatchRegexAtHead(const char* regex, const char* str) { if (*regex == '\0') // An empty regex matches a prefix of anything. return true; // "$" only matches the end of a string. Note that regex being // valid guarantees that there's nothing after "$" in it. if (*regex == '$') return *str == '\0'; // Is the first thing in regex an escape sequence? const bool escaped = *regex == '\\'; if (escaped) ++regex; if (IsRepeat(regex[1])) { // MatchRepetitionAndRegexAtHead() calls MatchRegexAtHead(), so // here's an indirect recursion. It terminates as the regex gets // shorter in each recursion. return MatchRepetitionAndRegexAtHead( escaped, regex[0], regex[1], regex + 2, str); } else { // regex isn't empty, isn't "$", and doesn't start with a // repetition. We match the first atom of regex with the first // character of str and recurse. return (*str != '\0') && AtomMatchesChar(escaped, *regex, *str) && MatchRegexAtHead(regex + 1, str + 1); } } // Returns true iff regex matches any substring of str. regex must be // a valid simple regular expression, or the result is undefined. // // The algorithm is recursive, but the recursion depth doesn't exceed // the regex length, so we won't need to worry about running out of // stack space normally. In rare cases the time complexity can be // exponential with respect to the regex length + the string length, // but usually it's much faster (often close to linear). bool MatchRegexAnywhere(const char* regex, const char* str) { if (regex == NULL || str == NULL) return false; if (*regex == '^') return MatchRegexAtHead(regex + 1, str); // A successful match can be anywhere in str. do { if (MatchRegexAtHead(regex, str)) return true; } while (*str++ != '\0'); return false; } // Implements the RE class. RE::~RE() { free(const_cast<char*>(pattern_)); free(const_cast<char*>(full_pattern_)); } // Returns true iff regular expression re matches the entire str. bool RE::FullMatch(const char* str, const RE& re) { return re.is_valid_ && MatchRegexAnywhere(re.full_pattern_, str); } // Returns true iff regular expression re matches a substring of str // (including str itself). bool RE::PartialMatch(const char* str, const RE& re) { return re.is_valid_ && MatchRegexAnywhere(re.pattern_, str); } // Initializes an RE from its string representation.
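// A sketch of what Init() below stores for the simple-RE implementation:
// given the pattern "ab?c", pattern_ holds "ab?c" while full_pattern_ holds
// "^ab?c$"; a '^' or '$' already present at the ends is not duplicated.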
void RE::Init(const char* regex) { pattern_ = full_pattern_ = NULL; if (regex != NULL) { pattern_ = posix::StrDup(regex); } is_valid_ = ValidateRegex(regex); if (!is_valid_) { // No need to calculate the full pattern when the regex is invalid. return; } const size_t len = strlen(regex); // Reserves enough bytes to hold the regular expression used for a // full match: we need space to prepend a '^', append a '$', and // terminate the string with '\0'. char* buffer = static_cast(malloc(len + 3)); full_pattern_ = buffer; if (*regex != '^') *buffer++ = '^'; // Makes sure full_pattern_ starts with '^'. // We don't use snprintf or strncpy, as they trigger a warning when // compiled with VC++ 8.0. memcpy(buffer, regex, len); buffer += len; if (len == 0 || regex[len - 1] != '$') *buffer++ = '$'; // Makes sure full_pattern_ ends with '$'. *buffer = '\0'; } #endif // GTEST_USES_POSIX_RE const char kUnknownFile[] = "unknown file"; // Formats a source file path and a line number as they would appear // in an error message from the compiler used to compile this code. GTEST_API_ ::std::string FormatFileLocation(const char* file, int line) { const std::string file_name(file == NULL ? kUnknownFile : file); if (line < 0) { return file_name + ":"; } #ifdef _MSC_VER return file_name + "(" + StreamableToString(line) + "):"; #else return file_name + ":" + StreamableToString(line) + ":"; #endif // _MSC_VER } // Formats a file location for compiler-independent XML output. // Although this function is not platform dependent, we put it next to // FormatFileLocation in order to contrast the two functions. // Note that FormatCompilerIndependentFileLocation() does NOT append colon // to the file location it produces, unlike FormatFileLocation(). GTEST_API_ ::std::string FormatCompilerIndependentFileLocation( const char* file, int line) { const std::string file_name(file == NULL ? kUnknownFile : file); if (line < 0) return file_name; else return file_name + ":" + StreamableToString(line); } GTestLog::GTestLog(GTestLogSeverity severity, const char* file, int line) : severity_(severity) { const char* const marker = severity == GTEST_INFO ? "[ INFO ]" : severity == GTEST_WARNING ? "[WARNING]" : severity == GTEST_ERROR ? "[ ERROR ]" : "[ FATAL ]"; GetStream() << ::std::endl << marker << " " << FormatFileLocation(file, line).c_str() << ": "; } // Flushes the buffers and, if severity is GTEST_FATAL, aborts the program. GTestLog::~GTestLog() { GetStream() << ::std::endl; if (severity_ == GTEST_FATAL) { fflush(stderr); posix::Abort(); } } // Disable Microsoft deprecation warnings for POSIX functions called from // this class (creat, dup, dup2, and close) #ifdef _MSC_VER # pragma warning(push) # pragma warning(disable: 4996) #endif // _MSC_VER #if GTEST_HAS_STREAM_REDIRECTION // Object that captures an output stream (stdout/stderr). class CapturedStream { public: // The ctor redirects the stream to a temporary file. explicit CapturedStream(int fd) : fd_(fd), uncaptured_fd_(dup(fd)) { # if GTEST_OS_WINDOWS char temp_dir_path[MAX_PATH + 1] = { '\0' }; // NOLINT char temp_file_path[MAX_PATH + 1] = { '\0' }; // NOLINT ::GetTempPathA(sizeof(temp_dir_path), temp_dir_path); const UINT success = ::GetTempFileNameA(temp_dir_path, "gtest_redir", 0, // Generate unique file name. 
temp_file_path); GTEST_CHECK_(success != 0) << "Unable to create a temporary file in " << temp_dir_path; const int captured_fd = creat(temp_file_path, _S_IREAD | _S_IWRITE); GTEST_CHECK_(captured_fd != -1) << "Unable to open temporary file " << temp_file_path; filename_ = temp_file_path; # else // There's no guarantee that a test has write access to the current // directory, so we create the temporary file in the /tmp directory // instead. We use /tmp on most systems, and /sdcard on Android. // That's because Android doesn't have /tmp. # if GTEST_OS_LINUX_ANDROID // Note: Android applications are expected to call the framework's // Context.getExternalStorageDirectory() method through JNI to get // the location of the world-writable SD Card directory. However, // this requires a Context handle, which cannot be retrieved // globally from native code. Doing so also precludes running the // code as part of a regular standalone executable, which doesn't // run in a Dalvik process (e.g. when running it through 'adb shell'). // // The location /sdcard is directly accessible from native code // and is the only location (unofficially) supported by the Android // team. It's generally a symlink to the real SD Card mount point // which can be /mnt/sdcard, /mnt/sdcard0, /system/media/sdcard, or // other OEM-customized locations. Never rely on these, and always // use /sdcard. char name_template[] = "/sdcard/gtest_captured_stream.XXXXXX"; # else char name_template[] = "/tmp/captured_stream.XXXXXX"; # endif // GTEST_OS_LINUX_ANDROID const int captured_fd = mkstemp(name_template); filename_ = name_template; # endif // GTEST_OS_WINDOWS fflush(NULL); dup2(captured_fd, fd_); close(captured_fd); } ~CapturedStream() { remove(filename_.c_str()); } std::string GetCapturedString() { if (uncaptured_fd_ != -1) { // Restores the original stream. fflush(NULL); dup2(uncaptured_fd_, fd_); close(uncaptured_fd_); uncaptured_fd_ = -1; } FILE* const file = posix::FOpen(filename_.c_str(), "r"); const std::string content = ReadEntireFile(file); posix::FClose(file); return content; } private: // Reads the entire content of a file as an std::string. static std::string ReadEntireFile(FILE* file); // Returns the size (in bytes) of a file. static size_t GetFileSize(FILE* file); const int fd_; // A stream to capture. int uncaptured_fd_; // Name of the temporary file holding the stderr output. ::std::string filename_; GTEST_DISALLOW_COPY_AND_ASSIGN_(CapturedStream); }; // Returns the size (in bytes) of a file. size_t CapturedStream::GetFileSize(FILE* file) { fseek(file, 0, SEEK_END); return static_cast(ftell(file)); } // Reads the entire content of a file as a string. std::string CapturedStream::ReadEntireFile(FILE* file) { const size_t file_size = GetFileSize(file); char* const buffer = new char[file_size]; size_t bytes_last_read = 0; // # of bytes read in the last fread() size_t bytes_read = 0; // # of bytes read so far fseek(file, 0, SEEK_SET); // Keeps reading the file until we cannot read further or the // pre-determined file size is reached. do { bytes_last_read = fread(buffer+bytes_read, 1, file_size-bytes_read, file); bytes_read += bytes_last_read; } while (bytes_last_read > 0 && bytes_read < file_size); const std::string content(buffer, bytes_read); delete[] buffer; return content; } # ifdef _MSC_VER # pragma warning(pop) # endif // _MSC_VER static CapturedStream* g_captured_stderr = NULL; static CapturedStream* g_captured_stdout = NULL; // Starts capturing an output stream (stdout/stderr). 
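// A minimal usage sketch of the capture API defined below, as typically used
// from tests:
//
//   testing::internal::CaptureStdout();
//   printf("hello\n");
//   const std::string output = testing::internal::GetCapturedStdout();
//   // output == "hello\n"
//
// Only one capturer per stream may exist at a time; a second CaptureStdout()
// before GetCapturedStdout() triggers the fatal log in CaptureStream().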
void CaptureStream(int fd, const char* stream_name, CapturedStream** stream) { if (*stream != NULL) { GTEST_LOG_(FATAL) << "Only one " << stream_name << " capturer can exist at a time."; } *stream = new CapturedStream(fd); } // Stops capturing the output stream and returns the captured string. std::string GetCapturedStream(CapturedStream** captured_stream) { const std::string content = (*captured_stream)->GetCapturedString(); delete *captured_stream; *captured_stream = NULL; return content; } // Starts capturing stdout. void CaptureStdout() { CaptureStream(kStdOutFileno, "stdout", &g_captured_stdout); } // Starts capturing stderr. void CaptureStderr() { CaptureStream(kStdErrFileno, "stderr", &g_captured_stderr); } // Stops capturing stdout and returns the captured string. std::string GetCapturedStdout() { return GetCapturedStream(&g_captured_stdout); } // Stops capturing stderr and returns the captured string. std::string GetCapturedStderr() { return GetCapturedStream(&g_captured_stderr); } #endif // GTEST_HAS_STREAM_REDIRECTION #if GTEST_HAS_DEATH_TEST // A copy of all command line arguments. Set by InitGoogleTest(). ::std::vector g_argvs; static const ::std::vector* g_injected_test_argvs = NULL; // Owned. void SetInjectableArgvs(const ::std::vector* argvs) { if (g_injected_test_argvs != argvs) delete g_injected_test_argvs; g_injected_test_argvs = argvs; } const ::std::vector& GetInjectableArgvs() { if (g_injected_test_argvs != NULL) { return *g_injected_test_argvs; } return g_argvs; } #endif // GTEST_HAS_DEATH_TEST #if GTEST_OS_WINDOWS_MOBILE namespace posix { void Abort() { DebugBreak(); TerminateProcess(GetCurrentProcess(), 1); } } // namespace posix #endif // GTEST_OS_WINDOWS_MOBILE // Returns the name of the environment variable corresponding to the // given flag. For example, FlagToEnvVar("foo") will return // "GTEST_FOO" in the open-source version. static std::string FlagToEnvVar(const char* flag) { const std::string full_flag = (Message() << GTEST_FLAG_PREFIX_ << flag).GetString(); Message env_var; for (size_t i = 0; i != full_flag.length(); i++) { env_var << ToUpper(full_flag.c_str()[i]); } return env_var.GetString(); } // Parses 'str' for a 32-bit signed integer. If successful, writes // the result to *value and returns true; otherwise leaves *value // unchanged and returns false. bool ParseInt32(const Message& src_text, const char* str, Int32* value) { // Parses the environment variable as a decimal integer. char* end = NULL; const long long_value = strtol(str, &end, 10); // NOLINT // Has strtol() consumed all characters in the string? if (*end != '\0') { // No - an invalid character was encountered. Message msg; msg << "WARNING: " << src_text << " is expected to be a 32-bit integer, but actually" << " has value \"" << str << "\".\n"; printf("%s", msg.GetString().c_str()); fflush(stdout); return false; } // Is the parsed value in the range of an Int32? const Int32 result = static_cast(long_value); if (long_value == LONG_MAX || long_value == LONG_MIN || // The parsed value overflows as a long. (strtol() returns // LONG_MAX or LONG_MIN when the input overflows.) result != long_value // The parsed value overflows as an Int32. 
) { Message msg; msg << "WARNING: " << src_text << " is expected to be a 32-bit integer, but actually" << " has value " << str << ", which overflows.\n"; printf("%s", msg.GetString().c_str()); fflush(stdout); return false; } *value = result; return true; } // Reads and returns the Boolean environment variable corresponding to // the given flag; if it's not set, returns default_value. // // The value is considered true iff it's not "0". bool BoolFromGTestEnv(const char* flag, bool default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const string_value = posix::GetEnv(env_var.c_str()); return string_value == NULL ? default_value : strcmp(string_value, "0") != 0; } // Reads and returns a 32-bit integer stored in the environment // variable corresponding to the given flag; if it isn't set or // doesn't represent a valid 32-bit integer, returns default_value. Int32 Int32FromGTestEnv(const char* flag, Int32 default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const string_value = posix::GetEnv(env_var.c_str()); if (string_value == NULL) { // The environment variable is not set. return default_value; } Int32 result = default_value; if (!ParseInt32(Message() << "Environment variable " << env_var, string_value, &result)) { printf("The default value %s is used.\n", (Message() << default_value).GetString().c_str()); fflush(stdout); return default_value; } return result; } // Reads and returns the string environment variable corresponding to // the given flag; if it's not set, returns default_value. const char* StringFromGTestEnv(const char* flag, const char* default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const value = posix::GetEnv(env_var.c_str()); return value == NULL ? default_value : value; } } // namespace internal } // namespace testing // Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Author: wan@google.com (Zhanyong Wan) // Google Test - The Google C++ Testing Framework // // This file implements a universal value printer that can print a // value of any type T: // // void ::testing::internal::UniversalPrinter::Print(value, ostream_ptr); // // It uses the << operator when possible, and prints the bytes in the // object otherwise. A user can override its behavior for a class // type Foo by defining either operator<<(::std::ostream&, const Foo&) // or void PrintTo(const Foo&, ::std::ostream*) in the namespace that // defines Foo. #include #include #include // NOLINT #include namespace testing { namespace { using ::std::ostream; // Prints a segment of bytes in the given object. void PrintByteSegmentInObjectTo(const unsigned char* obj_bytes, size_t start, size_t count, ostream* os) { char text[5] = ""; for (size_t i = 0; i != count; i++) { const size_t j = start + i; if (i != 0) { // Organizes the bytes into groups of 2 for easy parsing by // human. if ((j % 2) == 0) *os << ' '; else *os << '-'; } GTEST_SNPRINTF_(text, sizeof(text), "%02X", obj_bytes[j]); *os << text; } } // Prints the bytes in the given value to the given ostream. void PrintBytesInObjectToImpl(const unsigned char* obj_bytes, size_t count, ostream* os) { // Tells the user how big the object is. *os << count << "-byte object <"; const size_t kThreshold = 132; const size_t kChunkSize = 64; // If the object size is bigger than kThreshold, we'll have to omit // some details by printing only the first and the last kChunkSize // bytes. // TODO(wan): let the user control the threshold using a flag. if (count < kThreshold) { PrintByteSegmentInObjectTo(obj_bytes, 0, count, os); } else { PrintByteSegmentInObjectTo(obj_bytes, 0, kChunkSize, os); *os << " ... "; // Rounds up to 2-byte boundary. const size_t resume_pos = (count - kChunkSize + 1)/2*2; PrintByteSegmentInObjectTo(obj_bytes, resume_pos, count - resume_pos, os); } *os << ">"; } } // namespace namespace internal2 { // Delegates to PrintBytesInObjectToImpl() to print the bytes in the // given object. The delegation simplifies the implementation, which // uses the << operator and thus is easier done outside of the // ::testing::internal namespace, which contains a << operator that // sometimes conflicts with the one in STL. void PrintBytesInObjectTo(const unsigned char* obj_bytes, size_t count, ostream* os) { PrintBytesInObjectToImpl(obj_bytes, count, os); } } // namespace internal2 namespace internal { // Depending on the value of a char (or wchar_t), we print it in one // of three formats: // - as is if it's a printable ASCII (e.g. 'a', '2', ' '), // - as a hexidecimal escape sequence (e.g. '\x7F'), or // - as a special escape sequence (e.g. '\r', '\n'). enum CharFormat { kAsIs, kHexEscape, kSpecialEscape }; // Returns true if c is a printable ASCII character. We test the // value of c directly instead of calling isprint(), which is buggy on // Windows Mobile. inline bool IsPrintableAscii(wchar_t c) { return 0x20 <= c && c <= 0x7E; } // Prints a wide or narrow char c as a character literal without the // quotes, escaping it when necessary; returns how c was formatted. // The template argument UnsignedChar is the unsigned version of Char, // which is the type of c. 
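// For illustration, the formatting implemented below produces, e.g.:
//   'a'    ->  a      (kAsIs)
//   '\n'   ->  \n     (kSpecialEscape)
//   '\x7F' ->  \x7F   (kHexEscape)
// The surrounding quotes are added by the callers (e.g. PrintCharAndCodeTo).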
template static CharFormat PrintAsCharLiteralTo(Char c, ostream* os) { switch (static_cast(c)) { case L'\0': *os << "\\0"; break; case L'\'': *os << "\\'"; break; case L'\\': *os << "\\\\"; break; case L'\a': *os << "\\a"; break; case L'\b': *os << "\\b"; break; case L'\f': *os << "\\f"; break; case L'\n': *os << "\\n"; break; case L'\r': *os << "\\r"; break; case L'\t': *os << "\\t"; break; case L'\v': *os << "\\v"; break; default: if (IsPrintableAscii(c)) { *os << static_cast(c); return kAsIs; } else { *os << "\\x" + String::FormatHexInt(static_cast(c)); return kHexEscape; } } return kSpecialEscape; } // Prints a wchar_t c as if it's part of a string literal, escaping it when // necessary; returns how c was formatted. static CharFormat PrintAsStringLiteralTo(wchar_t c, ostream* os) { switch (c) { case L'\'': *os << "'"; return kAsIs; case L'"': *os << "\\\""; return kSpecialEscape; default: return PrintAsCharLiteralTo(c, os); } } // Prints a char c as if it's part of a string literal, escaping it when // necessary; returns how c was formatted. static CharFormat PrintAsStringLiteralTo(char c, ostream* os) { return PrintAsStringLiteralTo( static_cast(static_cast(c)), os); } // Prints a wide or narrow character c and its code. '\0' is printed // as "'\\0'", other unprintable characters are also properly escaped // using the standard C++ escape sequence. The template argument // UnsignedChar is the unsigned version of Char, which is the type of c. template void PrintCharAndCodeTo(Char c, ostream* os) { // First, print c as a literal in the most readable form we can find. *os << ((sizeof(c) > 1) ? "L'" : "'"); const CharFormat format = PrintAsCharLiteralTo(c, os); *os << "'"; // To aid user debugging, we also print c's code in decimal, unless // it's 0 (in which case c was printed as '\\0', making the code // obvious). if (c == 0) return; *os << " (" << static_cast(c); // For more convenience, we print c's code again in hexidecimal, // unless c was already printed in the form '\x##' or the code is in // [1, 9]. if (format == kHexEscape || (1 <= c && c <= 9)) { // Do nothing. } else { *os << ", 0x" << String::FormatHexInt(static_cast(c)); } *os << ")"; } void PrintTo(unsigned char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); } void PrintTo(signed char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); } // Prints a wchar_t as a symbol if it is printable or as its internal // code otherwise and also as its code. L'\0' is printed as "L'\\0'". void PrintTo(wchar_t wc, ostream* os) { PrintCharAndCodeTo(wc, os); } // Prints the given array of characters to the ostream. CharType must be either // char or wchar_t. // The array starts at begin, the length is len, it may include '\0' characters // and may not be NUL-terminated. template static void PrintCharsAsStringTo( const CharType* begin, size_t len, ostream* os) { const char* const kQuoteBegin = sizeof(CharType) == 1 ? "\"" : "L\""; *os << kQuoteBegin; bool is_previous_hex = false; for (size_t index = 0; index < len; ++index) { const CharType cur = begin[index]; if (is_previous_hex && IsXDigit(cur)) { // Previous character is of '\x..' form and this character can be // interpreted as another hexadecimal digit in its number. Break string to // disambiguate. *os << "\" " << kQuoteBegin; } is_previous_hex = PrintAsStringLiteralTo(cur, os) == kHexEscape; } *os << "\""; } // Prints a (const) char/wchar_t array of 'len' elements, starting at address // 'begin'. CharType must be either char or wchar_t. 
template static void UniversalPrintCharArray( const CharType* begin, size_t len, ostream* os) { // The code // const char kFoo[] = "foo"; // generates an array of 4, not 3, elements, with the last one being '\0'. // // Therefore when printing a char array, we don't print the last element if // it's '\0', such that the output matches the string literal as it's // written in the source code. if (len > 0 && begin[len - 1] == '\0') { PrintCharsAsStringTo(begin, len - 1, os); return; } // If, however, the last element in the array is not '\0', e.g. // const char kFoo[] = { 'f', 'o', 'o' }; // we must print the entire array. We also print a message to indicate // that the array is not NUL-terminated. PrintCharsAsStringTo(begin, len, os); *os << " (no terminating NUL)"; } // Prints a (const) char array of 'len' elements, starting at address 'begin'. void UniversalPrintArray(const char* begin, size_t len, ostream* os) { UniversalPrintCharArray(begin, len, os); } // Prints a (const) wchar_t array of 'len' elements, starting at address // 'begin'. void UniversalPrintArray(const wchar_t* begin, size_t len, ostream* os) { UniversalPrintCharArray(begin, len, os); } // Prints the given C string to the ostream. void PrintTo(const char* s, ostream* os) { if (s == NULL) { *os << "NULL"; } else { *os << ImplicitCast_(s) << " pointing to "; PrintCharsAsStringTo(s, strlen(s), os); } } // MSVC compiler can be configured to define whar_t as a typedef // of unsigned short. Defining an overload for const wchar_t* in that case // would cause pointers to unsigned shorts be printed as wide strings, // possibly accessing more memory than intended and causing invalid // memory accesses. MSVC defines _NATIVE_WCHAR_T_DEFINED symbol when // wchar_t is implemented as a native type. #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED) // Prints the given wide C string to the ostream. void PrintTo(const wchar_t* s, ostream* os) { if (s == NULL) { *os << "NULL"; } else { *os << ImplicitCast_(s) << " pointing to "; PrintCharsAsStringTo(s, wcslen(s), os); } } #endif // wchar_t is native // Prints a ::string object. #if GTEST_HAS_GLOBAL_STRING void PrintStringTo(const ::string& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_GLOBAL_STRING void PrintStringTo(const ::std::string& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } // Prints a ::wstring object. #if GTEST_HAS_GLOBAL_WSTRING void PrintWideStringTo(const ::wstring& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_GLOBAL_WSTRING #if GTEST_HAS_STD_WSTRING void PrintWideStringTo(const ::std::wstring& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_STD_WSTRING } // namespace internal } // namespace testing // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // // The Google C++ Testing Framework (Google Test) // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #undef GTEST_IMPLEMENTATION_ namespace testing { using internal::GetUnitTestImpl; // Gets the summary of the failure message by omitting the stack trace // in it. std::string TestPartResult::ExtractSummary(const char* message) { const char* const stack_trace = strstr(message, internal::kStackTraceMarker); return stack_trace == NULL ? message : std::string(message, stack_trace); } // Prints a TestPartResult object. std::ostream& operator<<(std::ostream& os, const TestPartResult& result) { return os << result.file_name() << ":" << result.line_number() << ": " << (result.type() == TestPartResult::kSuccess ? "Success" : result.type() == TestPartResult::kFatalFailure ? "Fatal failure" : "Non-fatal failure") << ":\n" << result.message() << std::endl; } // Appends a TestPartResult to the array. void TestPartResultArray::Append(const TestPartResult& result) { array_.push_back(result); } // Returns the TestPartResult at the given index (0-based). const TestPartResult& TestPartResultArray::GetTestPartResult(int index) const { if (index < 0 || index >= size()) { printf("\nInvalid index (%d) into TestPartResultArray.\n", index); internal::posix::Abort(); } return array_[index]; } // Returns the number of TestPartResult objects in the array. int TestPartResultArray::size() const { return static_cast(array_.size()); } namespace internal { HasNewFatalFailureHelper::HasNewFatalFailureHelper() : has_new_fatal_failure_(false), original_reporter_(GetUnitTestImpl()-> GetTestPartResultReporterForCurrentThread()) { GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(this); } HasNewFatalFailureHelper::~HasNewFatalFailureHelper() { GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread( original_reporter_); } void HasNewFatalFailureHelper::ReportTestPartResult( const TestPartResult& result) { if (result.fatally_failed()) has_new_fatal_failure_ = true; original_reporter_->ReportTestPartResult(result); } } // namespace internal } // namespace testing // Copyright 2008 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) namespace testing { namespace internal { #if GTEST_HAS_TYPED_TEST_P // Skips to the first non-space char in str. Returns an empty string if str // contains only whitespace characters. static const char* SkipSpaces(const char* str) { while (IsSpace(*str)) str++; return str; } // Verifies that registered_tests match the test names in // defined_test_names_; returns registered_tests if successful, or // aborts the program otherwise. const char* TypedTestCasePState::VerifyRegisteredTestNames( const char* file, int line, const char* registered_tests) { typedef ::std::set::const_iterator DefinedTestIter; registered_ = true; // Skip initial whitespace in registered_tests since some // preprocessors prefix stringizied literals with whitespace. registered_tests = SkipSpaces(registered_tests); Message errors; ::std::set tests; for (const char* names = registered_tests; names != NULL; names = SkipComma(names)) { const std::string name = GetPrefixUntilComma(names); if (tests.count(name) != 0) { errors << "Test " << name << " is listed more than once.\n"; continue; } bool found = false; for (DefinedTestIter it = defined_test_names_.begin(); it != defined_test_names_.end(); ++it) { if (name == *it) { found = true; break; } } if (found) { tests.insert(name); } else { errors << "No test named " << name << " can be found in this test case.\n"; } } for (DefinedTestIter it = defined_test_names_.begin(); it != defined_test_names_.end(); ++it) { if (tests.count(*it) == 0) { errors << "You forgot to list test " << *it << ".\n"; } } const std::string& errors_str = errors.GetString(); if (errors_str != "") { fprintf(stderr, "%s %s", FormatFileLocation(file, line).c_str(), errors_str.c_str()); fflush(stderr); posix::Abort(); } return registered_tests; } #endif // GTEST_HAS_TYPED_TEST_P } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/fused-src/gtest/gtest.h000066400000000000000000031261431346026536000226250ustar00rootroot00000000000000// Copyright 2005, Google Inc. 
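// A minimal sketch of the registration flow that VerifyRegisteredTestNames()
// above enforces: every name passed to REGISTER_TYPED_TEST_CASE_P must match a
// test defined with TYPED_TEST_P, and every defined test must be listed. The
// case, test, and type names here are illustrative only.
//
//   template <typename T> class ContainerTest : public ::testing::Test {};
//   TYPED_TEST_CASE_P(ContainerTest);
//
//   TYPED_TEST_P(ContainerTest, StartsEmpty) {
//     TypeParam c;
//     EXPECT_EQ(0u, c.size());
//   }
//
//   // Listing a name that was never defined, or omitting one that was,
//   // produces the errors built above and aborts the program.
//   REGISTER_TYPED_TEST_CASE_P(ContainerTest, StartsEmpty);
//
//   typedef ::testing::Types<std::vector<int>, std::list<int> > ContainerTypes;
//   INSTANTIATE_TYPED_TEST_CASE_P(My, ContainerTest, ContainerTypes);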
// All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the public API for Google Test. It should be // included by any test program that uses Google Test. // // IMPORTANT NOTE: Due to limitation of the C++ language, we have to // leave some internal implementation details in this header file. // They are clearly marked by comments like this: // // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // // Such code is NOT meant to be used by a user directly, and is subject // to CHANGE WITHOUT NOTICE. Therefore DO NOT DEPEND ON IT in a user // program! // // Acknowledgment: Google Test borrowed the idea of automatic test // registration from Barthelemy Dagenais' (barthelemy@prologique.com) // easyUnit framework. #ifndef GTEST_INCLUDE_GTEST_GTEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_H_ #include #include #include // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file declares functions and macros used internally by // Google Test. They are subject to change without notice. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan) // // Low-level types and utilities for porting Google Test to various // platforms. They are subject to change without notice. DO NOT USE // THEM IN USER CODE. // // This file is fundamental to Google Test. All other Google Test source // files are expected to #include this. Therefore, it cannot #include // any other Google Test header. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ // The user can define the following macros in the build script to // control Google Test's behavior. If the user doesn't define a macro // in this list, Google Test will define it. // // GTEST_HAS_CLONE - Define it to 1/0 to indicate that clone(2) // is/isn't available. // GTEST_HAS_EXCEPTIONS - Define it to 1/0 to indicate that exceptions // are enabled. // GTEST_HAS_GLOBAL_STRING - Define it to 1/0 to indicate that ::string // is/isn't available (some systems define // ::string, which is different to std::string). 
// GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::string // is/isn't available (some systems define // ::wstring, which is different to std::wstring). // GTEST_HAS_POSIX_RE - Define it to 1/0 to indicate that POSIX regular // expressions are/aren't available. // GTEST_HAS_PTHREAD - Define it to 1/0 to indicate that // is/isn't available. // GTEST_HAS_RTTI - Define it to 1/0 to indicate that RTTI is/isn't // enabled. // GTEST_HAS_STD_WSTRING - Define it to 1/0 to indicate that // std::wstring does/doesn't work (Google Test can // be used where std::wstring is unavailable). // GTEST_HAS_TR1_TUPLE - Define it to 1/0 to indicate tr1::tuple // is/isn't available. // GTEST_HAS_SEH - Define it to 1/0 to indicate whether the // compiler supports Microsoft's "Structured // Exception Handling". // GTEST_HAS_STREAM_REDIRECTION // - Define it to 1/0 to indicate whether the // platform supports I/O stream redirection using // dup() and dup2(). // GTEST_USE_OWN_TR1_TUPLE - Define it to 1/0 to indicate whether Google // Test's own tr1 tuple implementation should be // used. Unused when the user sets // GTEST_HAS_TR1_TUPLE to 0. // GTEST_LANG_CXX11 - Define it to 1/0 to indicate that Google Test // is building in C++11/C++98 mode. // GTEST_LINKED_AS_SHARED_LIBRARY // - Define to 1 when compiling tests that use // Google Test as a shared library (known as // DLL on Windows). // GTEST_CREATE_SHARED_LIBRARY // - Define to 1 when compiling Google Test itself // as a shared library. // This header defines the following utilities: // // Macros indicating the current platform (defined to 1 if compiled on // the given platform; otherwise undefined): // GTEST_OS_AIX - IBM AIX // GTEST_OS_CYGWIN - Cygwin // GTEST_OS_HPUX - HP-UX // GTEST_OS_LINUX - Linux // GTEST_OS_LINUX_ANDROID - Google Android // GTEST_OS_MAC - Mac OS X // GTEST_OS_IOS - iOS // GTEST_OS_IOS_SIMULATOR - iOS simulator // GTEST_OS_NACL - Google Native Client (NaCl) // GTEST_OS_OPENBSD - OpenBSD // GTEST_OS_QNX - QNX // GTEST_OS_SOLARIS - Sun Solaris // GTEST_OS_SYMBIAN - Symbian // GTEST_OS_WINDOWS - Windows (Desktop, MinGW, or Mobile) // GTEST_OS_WINDOWS_DESKTOP - Windows Desktop // GTEST_OS_WINDOWS_MINGW - MinGW // GTEST_OS_WINDOWS_MOBILE - Windows Mobile // GTEST_OS_ZOS - z/OS // // Among the platforms, Cygwin, Linux, Max OS X, and Windows have the // most stable support. Since core members of the Google Test project // don't have access to other platforms, support for them may be less // stable. If you notice any problems on your platform, please notify // googletestframework@googlegroups.com (patches for fixing them are // even more welcome!). // // Note that it is possible that none of the GTEST_OS_* macros are defined. // // Macros indicating available Google Test features (defined to 1 if // the corresponding feature is supported; otherwise undefined): // GTEST_HAS_COMBINE - the Combine() function (for value-parameterized // tests) // GTEST_HAS_DEATH_TEST - death tests // GTEST_HAS_PARAM_TEST - value-parameterized tests // GTEST_HAS_TYPED_TEST - typed tests // GTEST_HAS_TYPED_TEST_P - type-parameterized tests // GTEST_USES_POSIX_RE - enhanced POSIX regex is used. Do not confuse with // GTEST_HAS_POSIX_RE (see above) which users can // define themselves. // GTEST_USES_SIMPLE_RE - our own simple regex is used; // the above two are mutually exclusive. // GTEST_CAN_COMPARE_NULL - accepts untyped NULL in EXPECT_EQ(). // // Macros for basic C++ coding: // GTEST_AMBIGUOUS_ELSE_BLOCKER_ - for disabling a gcc warning. 
// GTEST_ATTRIBUTE_UNUSED_ - declares that a class' instances or a // variable don't have to be used. // GTEST_DISALLOW_ASSIGN_ - disables operator=. // GTEST_DISALLOW_COPY_AND_ASSIGN_ - disables copy ctor and operator=. // GTEST_MUST_USE_RESULT_ - declares that a function's result must be used. // // Synchronization: // Mutex, MutexLock, ThreadLocal, GetThreadCount() // - synchronization primitives. // GTEST_IS_THREADSAFE - defined to 1 to indicate that the above // synchronization primitives have real implementations // and Google Test is thread-safe; or 0 otherwise. // // Template meta programming: // is_pointer - as in TR1; needed on Symbian and IBM XL C/C++ only. // IteratorTraits - partial implementation of std::iterator_traits, which // is not available in libCstd when compiled with Sun C++. // // Smart pointers: // scoped_ptr - as in TR2. // // Regular expressions: // RE - a simple regular expression class using the POSIX // Extended Regular Expression syntax on UNIX-like // platforms, or a reduced regular exception syntax on // other platforms, including Windows. // // Logging: // GTEST_LOG_() - logs messages at the specified severity level. // LogToStderr() - directs all log messages to stderr. // FlushInfoLog() - flushes informational log messages. // // Stdout and stderr capturing: // CaptureStdout() - starts capturing stdout. // GetCapturedStdout() - stops capturing stdout and returns the captured // string. // CaptureStderr() - starts capturing stderr. // GetCapturedStderr() - stops capturing stderr and returns the captured // string. // // Integer types: // TypeWithSize - maps an integer to a int type. // Int32, UInt32, Int64, UInt64, TimeInMillis // - integers of known sizes. // BiggestInt - the biggest signed integer type. // // Command-line utilities: // GTEST_FLAG() - references a flag. // GTEST_DECLARE_*() - declares a flag. // GTEST_DEFINE_*() - defines a flag. // GetInjectableArgvs() - returns the command line as a vector of strings. // // Environment variable utilities: // GetEnv() - gets the value of an environment variable. // BoolFromGTestEnv() - parses a bool environment variable. // Int32FromGTestEnv() - parses an Int32 environment variable. // StringFromGTestEnv() - parses a string environment variable. #include // for isspace, etc #include // for ptrdiff_t #include #include #include #ifndef _WIN32_WCE # include # include #endif // !_WIN32_WCE #if defined __APPLE__ # include # include #endif #include // NOLINT #include // NOLINT #include // NOLINT #define GTEST_DEV_EMAIL_ "googletestframework@@googlegroups.com" #define GTEST_FLAG_PREFIX_ "gtest_" #define GTEST_FLAG_PREFIX_DASH_ "gtest-" #define GTEST_FLAG_PREFIX_UPPER_ "GTEST_" #define GTEST_NAME_ "Google Test" #define GTEST_PROJECT_URL_ "http://code.google.com/p/googletest/" // Determines the version of gcc that is used to compile this. #ifdef __GNUC__ // 40302 means version 4.3.2. # define GTEST_GCC_VER_ \ (__GNUC__*10000 + __GNUC_MINOR__*100 + __GNUC_PATCHLEVEL__) #endif // __GNUC__ // Determines the platform on which Google Test is compiled. 
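// A brief configuration sketch for the macros documented above (the flag
// spellings follow these comments; the test body and the ParseInput function
// are illustrative placeholders). A user-settable macro can be forced from the
// build script, for example
//
//   g++ -DGTEST_HAS_PTHREAD=0 -DGTEST_HAS_EXCEPTIONS=0 ...
//
// while the feature macros Google Test derives itself are meant for guarding
// tests that need the corresponding feature:
//
//   #if GTEST_HAS_DEATH_TEST
//   TEST(ParserDeathTest, RejectsNullInput) {
//     EXPECT_DEATH(ParseInput(NULL), "input must not be NULL");
//   }
//   #endif  // GTEST_HAS_DEATH_TEST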
#ifdef __CYGWIN__ # define GTEST_OS_CYGWIN 1 #elif defined __SYMBIAN32__ # define GTEST_OS_SYMBIAN 1 #elif defined _WIN32 # define GTEST_OS_WINDOWS 1 # ifdef _WIN32_WCE # define GTEST_OS_WINDOWS_MOBILE 1 # elif defined(__MINGW__) || defined(__MINGW32__) # define GTEST_OS_WINDOWS_MINGW 1 # else # define GTEST_OS_WINDOWS_DESKTOP 1 # endif // _WIN32_WCE #elif defined __APPLE__ # define GTEST_OS_MAC 1 # if TARGET_OS_IPHONE # define GTEST_OS_IOS 1 # if TARGET_IPHONE_SIMULATOR # define GTEST_OS_IOS_SIMULATOR 1 # endif # endif #elif defined __linux__ # define GTEST_OS_LINUX 1 # if defined __ANDROID__ # define GTEST_OS_LINUX_ANDROID 1 # endif #elif defined __MVS__ # define GTEST_OS_ZOS 1 #elif defined(__sun) && defined(__SVR4) # define GTEST_OS_SOLARIS 1 #elif defined(_AIX) # define GTEST_OS_AIX 1 #elif defined(__hpux) # define GTEST_OS_HPUX 1 #elif defined __native_client__ # define GTEST_OS_NACL 1 #elif defined __OpenBSD__ # define GTEST_OS_OPENBSD 1 #elif defined __QNX__ # define GTEST_OS_QNX 1 #endif // __CYGWIN__ #ifndef GTEST_LANG_CXX11 // gcc and clang define __GXX_EXPERIMENTAL_CXX0X__ when // -std={c,gnu}++{0x,11} is passed. The C++11 standard specifies a // value for __cplusplus, and recent versions of clang, gcc, and // probably other compilers set that too in C++11 mode. # if __GXX_EXPERIMENTAL_CXX0X__ || __cplusplus >= 201103L // Compiling in at least C++11 mode. # define GTEST_LANG_CXX11 1 # else # define GTEST_LANG_CXX11 0 # endif #endif // Brings in definitions for functions used in the testing::internal::posix // namespace (read, write, close, chdir, isatty, stat). We do not currently // use them on Windows Mobile. #if !GTEST_OS_WINDOWS // This assumes that non-Windows OSes provide unistd.h. For OSes where this // is not the case, we need to include headers that provide the functions // mentioned above. # include # include #elif !GTEST_OS_WINDOWS_MOBILE # include # include #endif #if GTEST_OS_LINUX_ANDROID // Used to define __ANDROID_API__ matching the target NDK API level. # include // NOLINT #endif // Defines this to true iff Google Test can use POSIX regular expressions. #ifndef GTEST_HAS_POSIX_RE # if GTEST_OS_LINUX_ANDROID // On Android, is only available starting with Gingerbread. # define GTEST_HAS_POSIX_RE (__ANDROID_API__ >= 9) # else # define GTEST_HAS_POSIX_RE (!GTEST_OS_WINDOWS) # endif #endif #if GTEST_HAS_POSIX_RE // On some platforms, needs someone to define size_t, and // won't compile otherwise. We can #include it here as we already // included , which is guaranteed to define size_t through // . # include // NOLINT # define GTEST_USES_POSIX_RE 1 #elif GTEST_OS_WINDOWS // is not available on Windows. Use our own simple regex // implementation instead. # define GTEST_USES_SIMPLE_RE 1 #else // may not be available on this platform. Use our own // simple regex implementation instead. # define GTEST_USES_SIMPLE_RE 1 #endif // GTEST_HAS_POSIX_RE #ifndef GTEST_HAS_EXCEPTIONS // The user didn't tell us whether exceptions are enabled, so we need // to figure it out. # if defined(_MSC_VER) || defined(__BORLANDC__) // MSVC's and C++Builder's implementations of the STL use the _HAS_EXCEPTIONS // macro to enable exceptions, so we'll do the same. // Assumes that exceptions are enabled by default. # ifndef _HAS_EXCEPTIONS # define _HAS_EXCEPTIONS 1 # endif // _HAS_EXCEPTIONS # define GTEST_HAS_EXCEPTIONS _HAS_EXCEPTIONS # elif defined(__GNUC__) && __EXCEPTIONS // gcc defines __EXCEPTIONS to 1 iff exceptions are enabled. 
# define GTEST_HAS_EXCEPTIONS 1 # elif defined(__SUNPRO_CC) // Sun Pro CC supports exceptions. However, there is no compile-time way of // detecting whether they are enabled or not. Therefore, we assume that // they are enabled unless the user tells us otherwise. # define GTEST_HAS_EXCEPTIONS 1 # elif defined(__IBMCPP__) && __EXCEPTIONS // xlC defines __EXCEPTIONS to 1 iff exceptions are enabled. # define GTEST_HAS_EXCEPTIONS 1 # elif defined(__HP_aCC) // Exception handling is in effect by default in HP aCC compiler. It has to // be turned of by +noeh compiler option if desired. # define GTEST_HAS_EXCEPTIONS 1 # else // For other compilers, we assume exceptions are disabled to be // conservative. # define GTEST_HAS_EXCEPTIONS 0 # endif // defined(_MSC_VER) || defined(__BORLANDC__) #endif // GTEST_HAS_EXCEPTIONS #if !defined(GTEST_HAS_STD_STRING) // Even though we don't use this macro any longer, we keep it in case // some clients still depend on it. # define GTEST_HAS_STD_STRING 1 #elif !GTEST_HAS_STD_STRING // The user told us that ::std::string isn't available. # error "Google Test cannot be used where ::std::string isn't available." #endif // !defined(GTEST_HAS_STD_STRING) #ifndef GTEST_HAS_GLOBAL_STRING // The user didn't tell us whether ::string is available, so we need // to figure it out. # define GTEST_HAS_GLOBAL_STRING 0 #endif // GTEST_HAS_GLOBAL_STRING #ifndef GTEST_HAS_STD_WSTRING // The user didn't tell us whether ::std::wstring is available, so we need // to figure it out. // TODO(wan@google.com): uses autoconf to detect whether ::std::wstring // is available. // Cygwin 1.7 and below doesn't support ::std::wstring. // Solaris' libc++ doesn't support it either. Android has // no support for it at least as recent as Froyo (2.2). # define GTEST_HAS_STD_WSTRING \ (!(GTEST_OS_LINUX_ANDROID || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS)) #endif // GTEST_HAS_STD_WSTRING #ifndef GTEST_HAS_GLOBAL_WSTRING // The user didn't tell us whether ::wstring is available, so we need // to figure it out. # define GTEST_HAS_GLOBAL_WSTRING \ (GTEST_HAS_STD_WSTRING && GTEST_HAS_GLOBAL_STRING) #endif // GTEST_HAS_GLOBAL_WSTRING // Determines whether RTTI is available. #ifndef GTEST_HAS_RTTI // The user didn't tell us whether RTTI is enabled, so we need to // figure it out. # ifdef _MSC_VER # ifdef _CPPRTTI // MSVC defines this macro iff RTTI is enabled. # define GTEST_HAS_RTTI 1 # else # define GTEST_HAS_RTTI 0 # endif // Starting with version 4.3.2, gcc defines __GXX_RTTI iff RTTI is enabled. # elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40302) # ifdef __GXX_RTTI // When building against STLport with the Android NDK and with // -frtti -fno-exceptions, the build fails at link time with undefined // references to __cxa_bad_typeid. Note sure if STL or toolchain bug, // so disable RTTI when detected. # if GTEST_OS_LINUX_ANDROID && defined(_STLPORT_MAJOR) && \ !defined(__EXCEPTIONS) # define GTEST_HAS_RTTI 0 # else # define GTEST_HAS_RTTI 1 # endif // GTEST_OS_LINUX_ANDROID && __STLPORT_MAJOR && !__EXCEPTIONS # else # define GTEST_HAS_RTTI 0 # endif // __GXX_RTTI // Clang defines __GXX_RTTI starting with version 3.0, but its manual recommends // using has_feature instead. has_feature(cxx_rtti) is supported since 2.7, the // first version with C++ support. # elif defined(__clang__) # define GTEST_HAS_RTTI __has_feature(cxx_rtti) // Starting with version 9.0 IBM Visual Age defines __RTTI_ALL__ to 1 if // both the typeid and dynamic_cast features are present. 
# elif defined(__IBMCPP__) && (__IBMCPP__ >= 900) # ifdef __RTTI_ALL__ # define GTEST_HAS_RTTI 1 # else # define GTEST_HAS_RTTI 0 # endif # else // For all other compilers, we assume RTTI is enabled. # define GTEST_HAS_RTTI 1 # endif // _MSC_VER #endif // GTEST_HAS_RTTI // It's this header's responsibility to #include when RTTI // is enabled. #if GTEST_HAS_RTTI # include #endif // Determines whether Google Test can use the pthreads library. #ifndef GTEST_HAS_PTHREAD // The user didn't tell us explicitly, so we assume pthreads support is // available on Linux and Mac. // // To disable threading support in Google Test, add -DGTEST_HAS_PTHREAD=0 // to your compiler flags. # define GTEST_HAS_PTHREAD (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX \ || GTEST_OS_QNX) #endif // GTEST_HAS_PTHREAD #if GTEST_HAS_PTHREAD // gtest-port.h guarantees to #include when GTEST_HAS_PTHREAD is // true. # include // NOLINT // For timespec and nanosleep, used below. # include // NOLINT #endif // Determines whether Google Test can use tr1/tuple. You can define // this macro to 0 to prevent Google Test from using tuple (any // feature depending on tuple with be disabled in this mode). #ifndef GTEST_HAS_TR1_TUPLE # if GTEST_OS_LINUX_ANDROID && defined(_STLPORT_MAJOR) // STLport, provided with the Android NDK, has neither or . # define GTEST_HAS_TR1_TUPLE 0 # else // The user didn't tell us not to do it, so we assume it's OK. # define GTEST_HAS_TR1_TUPLE 1 # endif #endif // GTEST_HAS_TR1_TUPLE // Determines whether Google Test's own tr1 tuple implementation // should be used. #ifndef GTEST_USE_OWN_TR1_TUPLE // The user didn't tell us, so we need to figure it out. // We use our own TR1 tuple if we aren't sure the user has an // implementation of it already. At this time, libstdc++ 4.0.0+ and // MSVC 2010 are the only mainstream standard libraries that come // with a TR1 tuple implementation. NVIDIA's CUDA NVCC compiler // pretends to be GCC by defining __GNUC__ and friends, but cannot // compile GCC's tuple implementation. MSVC 2008 (9.0) provides TR1 // tuple in a 323 MB Feature Pack download, which we cannot assume the // user has. QNX's QCC compiler is a modified GCC but it doesn't // support TR1 tuple. libc++ only provides std::tuple, in C++11 mode, // and it can be used with some compilers that define __GNUC__. # if (defined(__GNUC__) && !defined(__CUDACC__) && (GTEST_GCC_VER_ >= 40000) \ && !GTEST_OS_QNX && !defined(_LIBCPP_VERSION)) || _MSC_VER >= 1600 # define GTEST_ENV_HAS_TR1_TUPLE_ 1 # endif // C++11 specifies that provides std::tuple. Use that if gtest is used // in C++11 mode and libstdc++ isn't very old (binaries targeting OS X 10.6 // can build with clang but need to use gcc4.2's libstdc++). # if GTEST_LANG_CXX11 && (!defined(__GLIBCXX__) || __GLIBCXX__ > 20110325) # define GTEST_ENV_HAS_STD_TUPLE_ 1 # endif # if GTEST_ENV_HAS_TR1_TUPLE_ || GTEST_ENV_HAS_STD_TUPLE_ # define GTEST_USE_OWN_TR1_TUPLE 0 # else # define GTEST_USE_OWN_TR1_TUPLE 1 # endif #endif // GTEST_USE_OWN_TR1_TUPLE // To avoid conditional compilation everywhere, we make it // gtest-port.h's responsibility to #include the header implementing // tr1/tuple. #if GTEST_HAS_TR1_TUPLE # if GTEST_USE_OWN_TR1_TUPLE // This file was GENERATED by command: // pump.py gtest-tuple.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2009 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Implements a subset of TR1 tuple needed by Google Test and Google Mock. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #include // For ::std::pair. // The compiler used in Symbian has a bug that prevents us from declaring the // tuple template as a friend (it complains that tuple is redefined). This // hack bypasses the bug by declaring the members that should otherwise be // private as public. // Sun Studio versions < 12 also have the above bug. #if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590) # define GTEST_DECLARE_TUPLE_AS_FRIEND_ public: #else # define GTEST_DECLARE_TUPLE_AS_FRIEND_ \ template friend class tuple; \ private: #endif // GTEST_n_TUPLE_(T) is the type of an n-tuple. #define GTEST_0_TUPLE_(T) tuple<> #define GTEST_1_TUPLE_(T) tuple #define GTEST_2_TUPLE_(T) tuple #define GTEST_3_TUPLE_(T) tuple #define GTEST_4_TUPLE_(T) tuple #define GTEST_5_TUPLE_(T) tuple #define GTEST_6_TUPLE_(T) tuple #define GTEST_7_TUPLE_(T) tuple #define GTEST_8_TUPLE_(T) tuple #define GTEST_9_TUPLE_(T) tuple #define GTEST_10_TUPLE_(T) tuple // GTEST_n_TYPENAMES_(T) declares a list of n typenames. 
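// For illustration of the GTEST_n_TUPLE_ / GTEST_n_TYPENAMES_ shorthands (U is
// an arbitrary prefix; the statement about defaults reflects how pump.py
// generates this file): GTEST_2_TYPENAMES_(U) declares
// "typename U0, typename U1", and GTEST_2_TUPLE_(U) names the two-element form
// of the 10-parameter tuple template, whose remaining type parameters take
// their default (void). A helper could therefore be declared as
//
//   template <GTEST_2_TYPENAMES_(U)>
//   void PrintPair(const GTEST_2_TUPLE_(U)& t, ::std::ostream* os);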
#define GTEST_0_TYPENAMES_(T) #define GTEST_1_TYPENAMES_(T) typename T##0 #define GTEST_2_TYPENAMES_(T) typename T##0, typename T##1 #define GTEST_3_TYPENAMES_(T) typename T##0, typename T##1, typename T##2 #define GTEST_4_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3 #define GTEST_5_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4 #define GTEST_6_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5 #define GTEST_7_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6 #define GTEST_8_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, typename T##7 #define GTEST_9_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, \ typename T##7, typename T##8 #define GTEST_10_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, \ typename T##7, typename T##8, typename T##9 // In theory, defining stuff in the ::std namespace is undefined // behavior. We can do this as we are playing the role of a standard // library vendor. namespace std { namespace tr1 { template class tuple; // Anything in namespace gtest_internal is Google Test's INTERNAL // IMPLEMENTATION DETAIL and MUST NOT BE USED DIRECTLY in user code. namespace gtest_internal { // ByRef::type is T if T is a reference; otherwise it's const T&. template struct ByRef { typedef const T& type; }; // NOLINT template struct ByRef { typedef T& type; }; // NOLINT // A handy wrapper for ByRef. #define GTEST_BY_REF_(T) typename ::std::tr1::gtest_internal::ByRef::type // AddRef::type is T if T is a reference; otherwise it's T&. This // is the same as tr1::add_reference::type. template struct AddRef { typedef T& type; }; // NOLINT template struct AddRef { typedef T& type; }; // NOLINT // A handy wrapper for AddRef. #define GTEST_ADD_REF_(T) typename ::std::tr1::gtest_internal::AddRef::type // A helper for implementing get(). template class Get; // A helper for implementing tuple_element. kIndexValid is true // iff k < the number of fields in tuple type T. 
template struct TupleElement; template struct TupleElement { typedef T0 type; }; template struct TupleElement { typedef T1 type; }; template struct TupleElement { typedef T2 type; }; template struct TupleElement { typedef T3 type; }; template struct TupleElement { typedef T4 type; }; template struct TupleElement { typedef T5 type; }; template struct TupleElement { typedef T6 type; }; template struct TupleElement { typedef T7 type; }; template struct TupleElement { typedef T8 type; }; template struct TupleElement { typedef T9 type; }; } // namespace gtest_internal template <> class tuple<> { public: tuple() {} tuple(const tuple& /* t */) {} tuple& operator=(const tuple& /* t */) { return *this; } }; template class GTEST_1_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_() {} explicit tuple(GTEST_BY_REF_(T0) f0) : f0_(f0) {} tuple(const tuple& t) : f0_(t.f0_) {} template tuple(const GTEST_1_TUPLE_(U)& t) : f0_(t.f0_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_1_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_1_TUPLE_(U)& t) { f0_ = t.f0_; return *this; } T0 f0_; }; template class GTEST_2_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1) : f0_(f0), f1_(f1) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_) {} template tuple(const GTEST_2_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_) {} template tuple(const ::std::pair& p) : f0_(p.first), f1_(p.second) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_2_TUPLE_(U)& t) { return CopyFrom(t); } template tuple& operator=(const ::std::pair& p) { f0_ = p.first; f1_ = p.second; return *this; } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_2_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; return *this; } T0 f0_; T1 f1_; }; template class GTEST_3_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2) : f0_(f0), f1_(f1), f2_(f2) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {} template tuple(const GTEST_3_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_3_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_3_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; return *this; } T0 f0_; T1 f1_; T2 f2_; }; template class GTEST_4_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3) : f0_(f0), f1_(f1), f2_(f2), f3_(f3) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_) {} template tuple(const GTEST_4_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_4_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_4_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; }; template class GTEST_5_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_() 
{} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_) {} template tuple(const GTEST_5_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_5_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_5_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; }; template class GTEST_6_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_) {} template tuple(const GTEST_6_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_6_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_6_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; }; template class GTEST_7_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {} template tuple(const GTEST_7_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_7_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_7_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; }; template class GTEST_8_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {} template tuple(const GTEST_8_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_8_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const 
GTEST_8_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; }; template class GTEST_9_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7, GTEST_BY_REF_(T8) f8) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7), f8_(f8) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {} template tuple(const GTEST_9_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_9_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_9_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; f8_ = t.f8_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; T8 f8_; }; template class tuple { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_(), f9_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7, GTEST_BY_REF_(T8) f8, GTEST_BY_REF_(T9) f9) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7), f8_(f8), f9_(f9) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_), f9_(t.f9_) {} template tuple(const GTEST_10_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_), f9_(t.f9_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_10_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_10_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; f8_ = t.f8_; f9_ = t.f9_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; T8 f8_; T9 f9_; }; // 6.1.3.2 Tuple creation functions. // Known limitations: we don't support passing an // std::tr1::reference_wrapper to make_tuple(). And we don't // implement tie(). 
inline tuple<> make_tuple() { return tuple<>(); } template inline GTEST_1_TUPLE_(T) make_tuple(const T0& f0) { return GTEST_1_TUPLE_(T)(f0); } template inline GTEST_2_TUPLE_(T) make_tuple(const T0& f0, const T1& f1) { return GTEST_2_TUPLE_(T)(f0, f1); } template inline GTEST_3_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2) { return GTEST_3_TUPLE_(T)(f0, f1, f2); } template inline GTEST_4_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3) { return GTEST_4_TUPLE_(T)(f0, f1, f2, f3); } template inline GTEST_5_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4) { return GTEST_5_TUPLE_(T)(f0, f1, f2, f3, f4); } template inline GTEST_6_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5) { return GTEST_6_TUPLE_(T)(f0, f1, f2, f3, f4, f5); } template inline GTEST_7_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6) { return GTEST_7_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6); } template inline GTEST_8_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7) { return GTEST_8_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7); } template inline GTEST_9_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7, const T8& f8) { return GTEST_9_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8); } template inline GTEST_10_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7, const T8& f8, const T9& f9) { return GTEST_10_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8, f9); } // 6.1.3.3 Tuple helper classes. template struct tuple_size; template struct tuple_size { static const int value = 0; }; template struct tuple_size { static const int value = 1; }; template struct tuple_size { static const int value = 2; }; template struct tuple_size { static const int value = 3; }; template struct tuple_size { static const int value = 4; }; template struct tuple_size { static const int value = 5; }; template struct tuple_size { static const int value = 6; }; template struct tuple_size { static const int value = 7; }; template struct tuple_size { static const int value = 8; }; template struct tuple_size { static const int value = 9; }; template struct tuple_size { static const int value = 10; }; template struct tuple_element { typedef typename gtest_internal::TupleElement< k < (tuple_size::value), k, Tuple>::type type; }; #define GTEST_TUPLE_ELEMENT_(k, Tuple) typename tuple_element::type // 6.1.3.4 Element access. 
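// A short usage sketch for this bundled tuple subset (variable names are
// illustrative; get() is provided in the element-access section that follows):
//
//   ::std::tr1::tuple<int, const char*> t = ::std::tr1::make_tuple(3, "three");
//   int n = ::std::tr1::get<0>(t);           // 3
//   const char* s = ::std::tr1::get<1>(t);   // "three"
//   const int size =
//       ::std::tr1::tuple_size< ::std::tr1::tuple<int, const char*> >::value;  // 2
//   ::std::tr1::tuple_element<1, ::std::tr1::tuple<int, const char*> >::type
//       same_as_s = s;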
namespace gtest_internal { template <> class Get<0> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple)) Field(Tuple& t) { return t.f0_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple)) ConstField(const Tuple& t) { return t.f0_; } }; template <> class Get<1> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple)) Field(Tuple& t) { return t.f1_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple)) ConstField(const Tuple& t) { return t.f1_; } }; template <> class Get<2> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple)) Field(Tuple& t) { return t.f2_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple)) ConstField(const Tuple& t) { return t.f2_; } }; template <> class Get<3> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple)) Field(Tuple& t) { return t.f3_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple)) ConstField(const Tuple& t) { return t.f3_; } }; template <> class Get<4> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple)) Field(Tuple& t) { return t.f4_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple)) ConstField(const Tuple& t) { return t.f4_; } }; template <> class Get<5> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple)) Field(Tuple& t) { return t.f5_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple)) ConstField(const Tuple& t) { return t.f5_; } }; template <> class Get<6> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple)) Field(Tuple& t) { return t.f6_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple)) ConstField(const Tuple& t) { return t.f6_; } }; template <> class Get<7> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple)) Field(Tuple& t) { return t.f7_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple)) ConstField(const Tuple& t) { return t.f7_; } }; template <> class Get<8> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple)) Field(Tuple& t) { return t.f8_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple)) ConstField(const Tuple& t) { return t.f8_; } }; template <> class Get<9> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple)) Field(Tuple& t) { return t.f9_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple)) ConstField(const Tuple& t) { return t.f9_; } }; } // namespace gtest_internal template GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_10_TUPLE_(T))) get(GTEST_10_TUPLE_(T)& t) { return gtest_internal::Get::Field(t); } template GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_10_TUPLE_(T))) get(const GTEST_10_TUPLE_(T)& t) { return gtest_internal::Get::ConstField(t); } // 6.1.3.5 Relational operators // We only implement == and !=, as we don't have a need for the rest yet. namespace gtest_internal { // SameSizeTuplePrefixComparator::Eq(t1, t2) returns true if the // first k fields of t1 equals the first k fields of t2. // SameSizeTuplePrefixComparator(k1, k2) would be a compiler error if // k1 != k2. 
template struct SameSizeTuplePrefixComparator; template <> struct SameSizeTuplePrefixComparator<0, 0> { template static bool Eq(const Tuple1& /* t1 */, const Tuple2& /* t2 */) { return true; } }; template struct SameSizeTuplePrefixComparator { template static bool Eq(const Tuple1& t1, const Tuple2& t2) { return SameSizeTuplePrefixComparator::Eq(t1, t2) && ::std::tr1::get(t1) == ::std::tr1::get(t2); } }; } // namespace gtest_internal template inline bool operator==(const GTEST_10_TUPLE_(T)& t, const GTEST_10_TUPLE_(U)& u) { return gtest_internal::SameSizeTuplePrefixComparator< tuple_size::value, tuple_size::value>::Eq(t, u); } template inline bool operator!=(const GTEST_10_TUPLE_(T)& t, const GTEST_10_TUPLE_(U)& u) { return !(t == u); } // 6.1.4 Pairs. // Unimplemented. } // namespace tr1 } // namespace std #undef GTEST_0_TUPLE_ #undef GTEST_1_TUPLE_ #undef GTEST_2_TUPLE_ #undef GTEST_3_TUPLE_ #undef GTEST_4_TUPLE_ #undef GTEST_5_TUPLE_ #undef GTEST_6_TUPLE_ #undef GTEST_7_TUPLE_ #undef GTEST_8_TUPLE_ #undef GTEST_9_TUPLE_ #undef GTEST_10_TUPLE_ #undef GTEST_0_TYPENAMES_ #undef GTEST_1_TYPENAMES_ #undef GTEST_2_TYPENAMES_ #undef GTEST_3_TYPENAMES_ #undef GTEST_4_TYPENAMES_ #undef GTEST_5_TYPENAMES_ #undef GTEST_6_TYPENAMES_ #undef GTEST_7_TYPENAMES_ #undef GTEST_8_TYPENAMES_ #undef GTEST_9_TYPENAMES_ #undef GTEST_10_TYPENAMES_ #undef GTEST_DECLARE_TUPLE_AS_FRIEND_ #undef GTEST_BY_REF_ #undef GTEST_ADD_REF_ #undef GTEST_TUPLE_ELEMENT_ #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ # elif GTEST_ENV_HAS_STD_TUPLE_ # include // C++11 puts its tuple into the ::std namespace rather than // ::std::tr1. gtest expects tuple to live in ::std::tr1, so put it there. // This causes undefined behavior, but supported compilers react in // the way we intend. namespace std { namespace tr1 { using ::std::get; using ::std::make_tuple; using ::std::tuple; using ::std::tuple_element; using ::std::tuple_size; } } # elif GTEST_OS_SYMBIAN // On Symbian, BOOST_HAS_TR1_TUPLE causes Boost's TR1 tuple library to // use STLport's tuple implementation, which unfortunately doesn't // work as the copy of STLport distributed with Symbian is incomplete. // By making sure BOOST_HAS_TR1_TUPLE is undefined, we force Boost to // use its own tuple implementation. # ifdef BOOST_HAS_TR1_TUPLE # undef BOOST_HAS_TR1_TUPLE # endif // BOOST_HAS_TR1_TUPLE // This prevents , which defines // BOOST_HAS_TR1_TUPLE, from being #included by Boost's . # define BOOST_TR1_DETAIL_CONFIG_HPP_INCLUDED # include # elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40000) // GCC 4.0+ implements tr1/tuple in the header. This does // not conform to the TR1 spec, which requires the header to be . # if !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302 // Until version 4.3.2, gcc has a bug that causes , // which is #included by , to not compile when RTTI is // disabled. _TR1_FUNCTIONAL is the header guard for // . Hence the following #define is a hack to prevent // from being included. # define _TR1_FUNCTIONAL 1 # include # undef _TR1_FUNCTIONAL // Allows the user to #include // if he chooses to. # else # include // NOLINT # endif // !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302 # else // If the compiler is not GCC 4.0+, we assume the user is using a // spec-conforming TR1 implementation. # include // NOLINT # endif // GTEST_USE_OWN_TR1_TUPLE #endif // GTEST_HAS_TR1_TUPLE // Determines whether clone(2) is supported. // Usually it will only be available on Linux, excluding // Linux on the Itanium architecture. // Also see http://linux.die.net/man/2/clone. 
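// As noted in the comparator comments above, only operator== and operator!=
// are provided for these tuples; comparison is element-wise over the tuples'
// common size. A small sketch (values are illustrative):
//
//   bool BothHold() {
//     using ::std::tr1::make_tuple;
//     return make_tuple(1, 'a') == make_tuple(1, 'a') &&
//            make_tuple(1, 'a') != make_tuple(2, 'a');
//   }
//   // Ordering comparisons such as operator< are intentionally not defined.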
#ifndef GTEST_HAS_CLONE // The user didn't tell us, so we need to figure it out. # if GTEST_OS_LINUX && !defined(__ia64__) # if GTEST_OS_LINUX_ANDROID // On Android, clone() is only available on ARM starting with Gingerbread. # if defined(__arm__) && __ANDROID_API__ >= 9 # define GTEST_HAS_CLONE 1 # else # define GTEST_HAS_CLONE 0 # endif # else # define GTEST_HAS_CLONE 1 # endif # else # define GTEST_HAS_CLONE 0 # endif // GTEST_OS_LINUX && !defined(__ia64__) #endif // GTEST_HAS_CLONE // Determines whether to support stream redirection. This is used to test // output correctness and to implement death tests. #ifndef GTEST_HAS_STREAM_REDIRECTION // By default, we assume that stream redirection is supported on all // platforms except known mobile ones. # if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN # define GTEST_HAS_STREAM_REDIRECTION 0 # else # define GTEST_HAS_STREAM_REDIRECTION 1 # endif // !GTEST_OS_WINDOWS_MOBILE && !GTEST_OS_SYMBIAN #endif // GTEST_HAS_STREAM_REDIRECTION // Determines whether to support death tests. // Google Test does not support death tests for VC 7.1 and earlier as // abort() in a VC 7.1 application compiled as GUI in debug config // pops up a dialog window that cannot be suppressed programmatically. #if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \ (GTEST_OS_MAC && !GTEST_OS_IOS) || GTEST_OS_IOS_SIMULATOR || \ (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER >= 1400) || \ GTEST_OS_WINDOWS_MINGW || GTEST_OS_AIX || GTEST_OS_HPUX || \ GTEST_OS_OPENBSD || GTEST_OS_QNX) # define GTEST_HAS_DEATH_TEST 1 # include // NOLINT #endif // We don't support MSVC 7.1 with exceptions disabled now. Therefore // all the compilers we care about are adequate for supporting // value-parameterized tests. #define GTEST_HAS_PARAM_TEST 1 // Determines whether to support type-driven tests. // Typed tests need and variadic macros, which GCC, VC++ 8.0, // Sun Pro CC, IBM Visual Age, and HP aCC support. #if defined(__GNUC__) || (_MSC_VER >= 1400) || defined(__SUNPRO_CC) || \ defined(__IBMCPP__) || defined(__HP_aCC) # define GTEST_HAS_TYPED_TEST 1 # define GTEST_HAS_TYPED_TEST_P 1 #endif // Determines whether to support Combine(). This only makes sense when // value-parameterized tests are enabled. The implementation doesn't // work on Sun Studio since it doesn't understand templated conversion // operators. #if GTEST_HAS_PARAM_TEST && GTEST_HAS_TR1_TUPLE && !defined(__SUNPRO_CC) # define GTEST_HAS_COMBINE 1 #endif // Determines whether the system compiler uses UTF-16 for encoding wide strings. #define GTEST_WIDE_STRING_USES_UTF16_ \ (GTEST_OS_WINDOWS || GTEST_OS_CYGWIN || GTEST_OS_SYMBIAN || GTEST_OS_AIX) // Determines whether test results can be streamed to a socket. #if GTEST_OS_LINUX # define GTEST_CAN_STREAM_RESULTS_ 1 #endif // Defines some utility macros. // The GNU compiler emits a warning if nested "if" statements are followed by // an "else" statement and braces are not used to explicitly disambiguate the // "else" binding. This leads to problems with code like: // // if (gate) // ASSERT_*(condition) << "Some message"; // // The "switch (0) case 0:" idiom is used to suppress this. #ifdef __INTEL_COMPILER # define GTEST_AMBIGUOUS_ELSE_BLOCKER_ #else # define GTEST_AMBIGUOUS_ELSE_BLOCKER_ switch (0) case 0: default: // NOLINT #endif // Use this annotation at the end of a struct/class definition to // prevent the compiler from optimizing away instances that are never // used. This is useful when all interesting logic happens inside the // c'tor and / or d'tor. 
Example: // // struct Foo { // Foo() { ... } // } GTEST_ATTRIBUTE_UNUSED_; // // Also use it after a variable or parameter declaration to tell the // compiler the variable/parameter does not have to be used. #if defined(__GNUC__) && !defined(COMPILER_ICC) # define GTEST_ATTRIBUTE_UNUSED_ __attribute__ ((unused)) #else # define GTEST_ATTRIBUTE_UNUSED_ #endif // A macro to disallow operator= // This should be used in the private: declarations for a class. #define GTEST_DISALLOW_ASSIGN_(type)\ void operator=(type const &) // A macro to disallow copy constructor and operator= // This should be used in the private: declarations for a class. #define GTEST_DISALLOW_COPY_AND_ASSIGN_(type)\ type(type const &);\ GTEST_DISALLOW_ASSIGN_(type) // Tell the compiler to warn about unused return values for functions declared // with this macro. The macro should be used on function declarations // following the argument list: // // Sprocket* AllocateSprocket() GTEST_MUST_USE_RESULT_; #if defined(__GNUC__) && (GTEST_GCC_VER_ >= 30400) && !defined(COMPILER_ICC) # define GTEST_MUST_USE_RESULT_ __attribute__ ((warn_unused_result)) #else # define GTEST_MUST_USE_RESULT_ #endif // __GNUC__ && (GTEST_GCC_VER_ >= 30400) && !COMPILER_ICC // Determine whether the compiler supports Microsoft's Structured Exception // Handling. This is supported by several Windows compilers but generally // does not exist on any other system. #ifndef GTEST_HAS_SEH // The user didn't tell us, so we need to figure it out. # if defined(_MSC_VER) || defined(__BORLANDC__) // These two compilers are known to support SEH. # define GTEST_HAS_SEH 1 # else // Assume no SEH. # define GTEST_HAS_SEH 0 # endif #endif // GTEST_HAS_SEH #ifdef _MSC_VER # if GTEST_LINKED_AS_SHARED_LIBRARY # define GTEST_API_ __declspec(dllimport) # elif GTEST_CREATE_SHARED_LIBRARY # define GTEST_API_ __declspec(dllexport) # endif #endif // _MSC_VER #ifndef GTEST_API_ # define GTEST_API_ #endif #ifdef __GNUC__ // Ask the compiler to never inline a given function. # define GTEST_NO_INLINE_ __attribute__((noinline)) #else # define GTEST_NO_INLINE_ #endif // _LIBCPP_VERSION is defined by the libc++ library from the LLVM project. #if defined(__GLIBCXX__) || defined(_LIBCPP_VERSION) # define GTEST_HAS_CXXABI_H_ 1 #else # define GTEST_HAS_CXXABI_H_ 0 #endif namespace testing { class Message; namespace internal { // A secret type that Google Test users don't know about. It has no // definition on purpose. Therefore it's impossible to create a // Secret object, which is what we want. class Secret; // The GTEST_COMPILE_ASSERT_ macro can be used to verify that a compile time // expression is true. For example, you could use it to verify the // size of a static array: // // GTEST_COMPILE_ASSERT_(ARRAYSIZE(content_type_names) == CONTENT_NUM_TYPES, // content_type_names_incorrect_size); // // or to make sure a struct is smaller than a certain size: // // GTEST_COMPILE_ASSERT_(sizeof(foo) < 128, foo_too_large); // // The second argument to the macro is the name of the variable. If // the expression is false, most compilers will issue a warning/error // containing the name of the variable. template struct CompileAssert { }; #define GTEST_COMPILE_ASSERT_(expr, msg) \ typedef ::testing::internal::CompileAssert<(static_cast(expr))> \ msg[static_cast(expr) ? 1 : -1] GTEST_ATTRIBUTE_UNUSED_ // Implementation details of GTEST_COMPILE_ASSERT_: // // - GTEST_COMPILE_ASSERT_ works by defining an array type that has -1 // elements (and thus is invalid) when the expression is false. 
// // - The simpler definition // // #define GTEST_COMPILE_ASSERT_(expr, msg) typedef char msg[(expr) ? 1 : -1] // // does not work, as gcc supports variable-length arrays whose sizes // are determined at run-time (this is gcc's extension and not part // of the C++ standard). As a result, gcc fails to reject the // following code with the simple definition: // // int foo; // GTEST_COMPILE_ASSERT_(foo, msg); // not supposed to compile as foo is // // not a compile-time constant. // // - By using the type CompileAssert<(bool(expr))>, we ensures that // expr is a compile-time constant. (Template arguments must be // determined at compile-time.) // // - The outter parentheses in CompileAssert<(bool(expr))> are necessary // to work around a bug in gcc 3.4.4 and 4.0.1. If we had written // // CompileAssert // // instead, these compilers will refuse to compile // // GTEST_COMPILE_ASSERT_(5 > 0, some_message); // // (They seem to think the ">" in "5 > 0" marks the end of the // template argument list.) // // - The array size is (bool(expr) ? 1 : -1), instead of simply // // ((expr) ? 1 : -1). // // This is to avoid running into a bug in MS VC 7.1, which // causes ((0.0) ? 1 : -1) to incorrectly evaluate to 1. // StaticAssertTypeEqHelper is used by StaticAssertTypeEq defined in gtest.h. // // This template is declared, but intentionally undefined. template struct StaticAssertTypeEqHelper; template struct StaticAssertTypeEqHelper {}; #if GTEST_HAS_GLOBAL_STRING typedef ::string string; #else typedef ::std::string string; #endif // GTEST_HAS_GLOBAL_STRING #if GTEST_HAS_GLOBAL_WSTRING typedef ::wstring wstring; #elif GTEST_HAS_STD_WSTRING typedef ::std::wstring wstring; #endif // GTEST_HAS_GLOBAL_WSTRING // A helper for suppressing warnings on constant condition. It just // returns 'condition'. GTEST_API_ bool IsTrue(bool condition); // Defines scoped_ptr. // This implementation of scoped_ptr is PARTIAL - it only contains // enough stuff to satisfy Google Test's need. template class scoped_ptr { public: typedef T element_type; explicit scoped_ptr(T* p = NULL) : ptr_(p) {} ~scoped_ptr() { reset(); } T& operator*() const { return *ptr_; } T* operator->() const { return ptr_; } T* get() const { return ptr_; } T* release() { T* const ptr = ptr_; ptr_ = NULL; return ptr; } void reset(T* p = NULL) { if (p != ptr_) { if (IsTrue(sizeof(T) > 0)) { // Makes sure T is a complete type. delete ptr_; } ptr_ = p; } } private: T* ptr_; GTEST_DISALLOW_COPY_AND_ASSIGN_(scoped_ptr); }; // Defines RE. // A simple C++ wrapper for . It uses the POSIX Extended // Regular Expression syntax. class GTEST_API_ RE { public: // A copy constructor is required by the Standard to initialize object // references from r-values. RE(const RE& other) { Init(other.pattern()); } // Constructs an RE from a string. RE(const ::std::string& regex) { Init(regex.c_str()); } // NOLINT #if GTEST_HAS_GLOBAL_STRING RE(const ::string& regex) { Init(regex.c_str()); } // NOLINT #endif // GTEST_HAS_GLOBAL_STRING RE(const char* regex) { Init(regex); } // NOLINT ~RE(); // Returns the string representation of the regex. const char* pattern() const { return pattern_; } // FullMatch(str, re) returns true iff regular expression re matches // the entire str. // PartialMatch(str, re) returns true iff regular expression re // matches a substring of str (including str itself). // // TODO(wan@google.com): make FullMatch() and PartialMatch() work // when str contains NUL characters. 
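// A rough usage sketch (the pattern and strings are illustrative, not from
// this header):
//
//   const RE re("a.*z");
//   RE::FullMatch("abcz", re);        // true  -- the whole string matches
//   RE::FullMatch("abc", re);         // false -- no trailing 'z'
//   RE::PartialMatch("xxabczxx", re); // true  -- a substring matches
//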
static bool FullMatch(const ::std::string& str, const RE& re) { return FullMatch(str.c_str(), re); } static bool PartialMatch(const ::std::string& str, const RE& re) { return PartialMatch(str.c_str(), re); } #if GTEST_HAS_GLOBAL_STRING static bool FullMatch(const ::string& str, const RE& re) { return FullMatch(str.c_str(), re); } static bool PartialMatch(const ::string& str, const RE& re) { return PartialMatch(str.c_str(), re); } #endif // GTEST_HAS_GLOBAL_STRING static bool FullMatch(const char* str, const RE& re); static bool PartialMatch(const char* str, const RE& re); private: void Init(const char* regex); // We use a const char* instead of an std::string, as Google Test used to be // used where std::string is not available. TODO(wan@google.com): change to // std::string. const char* pattern_; bool is_valid_; #if GTEST_USES_POSIX_RE regex_t full_regex_; // For FullMatch(). regex_t partial_regex_; // For PartialMatch(). #else // GTEST_USES_SIMPLE_RE const char* full_pattern_; // For FullMatch(); #endif GTEST_DISALLOW_ASSIGN_(RE); }; // Formats a source file path and a line number as they would appear // in an error message from the compiler used to compile this code. GTEST_API_ ::std::string FormatFileLocation(const char* file, int line); // Formats a file location for compiler-independent XML output. // Although this function is not platform dependent, we put it next to // FormatFileLocation in order to contrast the two functions. GTEST_API_ ::std::string FormatCompilerIndependentFileLocation(const char* file, int line); // Defines logging utilities: // GTEST_LOG_(severity) - logs messages at the specified severity level. The // message itself is streamed into the macro. // LogToStderr() - directs all log messages to stderr. // FlushInfoLog() - flushes informational log messages. enum GTestLogSeverity { GTEST_INFO, GTEST_WARNING, GTEST_ERROR, GTEST_FATAL }; // Formats log entry severity, provides a stream object for streaming the // log message, and terminates the message with a newline when going out of // scope. class GTEST_API_ GTestLog { public: GTestLog(GTestLogSeverity severity, const char* file, int line); // Flushes the buffers and, if severity is GTEST_FATAL, aborts the program. ~GTestLog(); ::std::ostream& GetStream() { return ::std::cerr; } private: const GTestLogSeverity severity_; GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestLog); }; #define GTEST_LOG_(severity) \ ::testing::internal::GTestLog(::testing::internal::GTEST_##severity, \ __FILE__, __LINE__).GetStream() inline void LogToStderr() {} inline void FlushInfoLog() { fflush(NULL); } // INTERNAL IMPLEMENTATION - DO NOT USE. // // GTEST_CHECK_ is an all-mode assert. It aborts the program if the condition // is not satisfied. // Synopsys: // GTEST_CHECK_(boolean_condition); // or // GTEST_CHECK_(boolean_condition) << "Additional message"; // // This checks the condition and if the condition is not satisfied // it prints message about the condition violation, including the // condition itself, plus additional message streamed into it, if any, // and then it aborts the program. It aborts the program irrespective of // whether it is built in the debug mode or not. #define GTEST_CHECK_(condition) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::IsTrue(condition)) \ ; \ else \ GTEST_LOG_(FATAL) << "Condition " #condition " failed. " // An all-mode assert to verify that the given POSIX-style function // call returns 0 (indicating success). 
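// For instance, the pthread-based code later in this file wraps calls
// like this (see the Notification class below):
//
//   GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL));
//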
Known limitation: this // doesn't expand to a balanced 'if' statement, so enclose the macro // in {} if you need to use it as the only statement in an 'if' // branch. #define GTEST_CHECK_POSIX_SUCCESS_(posix_call) \ if (const int gtest_error = (posix_call)) \ GTEST_LOG_(FATAL) << #posix_call << "failed with error " \ << gtest_error // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Use ImplicitCast_ as a safe version of static_cast for upcasting in // the type hierarchy (e.g. casting a Foo* to a SuperclassOfFoo* or a // const Foo*). When you use ImplicitCast_, the compiler checks that // the cast is safe. Such explicit ImplicitCast_s are necessary in // surprisingly many situations where C++ demands an exact type match // instead of an argument type convertable to a target type. // // The syntax for using ImplicitCast_ is the same as for static_cast: // // ImplicitCast_(expr) // // ImplicitCast_ would have been part of the C++ standard library, // but the proposal was submitted too late. It will probably make // its way into the language in the future. // // This relatively ugly name is intentional. It prevents clashes with // similar functions users may have (e.g., implicit_cast). The internal // namespace alone is not enough because the function can be found by ADL. template inline To ImplicitCast_(To x) { return x; } // When you upcast (that is, cast a pointer from type Foo to type // SuperclassOfFoo), it's fine to use ImplicitCast_<>, since upcasts // always succeed. When you downcast (that is, cast a pointer from // type Foo to type SubclassOfFoo), static_cast<> isn't safe, because // how do you know the pointer is really of type SubclassOfFoo? It // could be a bare Foo, or of type DifferentSubclassOfFoo. Thus, // when you downcast, you should use this macro. In debug mode, we // use dynamic_cast<> to double-check the downcast is legal (we die // if it's not). In normal mode, we do the efficient static_cast<> // instead. Thus, it's important to test in debug mode to make sure // the cast is legal! // This is the only place in the code we should use dynamic_cast<>. // In particular, you SHOULDN'T be using dynamic_cast<> in order to // do RTTI (eg code like this: // if (dynamic_cast(foo)) HandleASubclass1Object(foo); // if (dynamic_cast(foo)) HandleASubclass2Object(foo); // You should design the code some other way not to need this. // // This relatively ugly name is intentional. It prevents clashes with // similar functions users may have (e.g., down_cast). The internal // namespace alone is not enough because the function can be found by ADL. template // use like this: DownCast_(foo); inline To DownCast_(From* f) { // so we only accept pointers // Ensures that To is a sub-type of From *. This test is here only // for compile-time type checking, and has no overhead in an // optimized build at run-time, as it will be optimized away // completely. if (false) { const To to = NULL; ::testing::internal::ImplicitCast_(to); } #if GTEST_HAS_RTTI // RTTI: debug mode only! GTEST_CHECK_(f == NULL || dynamic_cast(f) != NULL); #endif return static_cast(f); } // Downcasts the pointer of type Base to Derived. // Derived must be a subclass of Base. The parameter MUST // point to a class of type Derived, not any subclass of it. // When RTTI is available, the function performs a runtime // check to enforce this. 
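// A minimal illustration (the types here are hypothetical):
//
//   class Animal { public: virtual ~Animal() {} };
//   class Dog : public Animal {};
//
//   Animal* a = new Dog;  // 'a' must really point at a Dog
//   Dog* d = CheckedDowncastToActualType<Dog>(a);
//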
template Derived* CheckedDowncastToActualType(Base* base) { #if GTEST_HAS_RTTI GTEST_CHECK_(typeid(*base) == typeid(Derived)); return dynamic_cast(base); // NOLINT #else return static_cast(base); // Poor man's downcast. #endif } #if GTEST_HAS_STREAM_REDIRECTION // Defines the stderr capturer: // CaptureStdout - starts capturing stdout. // GetCapturedStdout - stops capturing stdout and returns the captured string. // CaptureStderr - starts capturing stderr. // GetCapturedStderr - stops capturing stderr and returns the captured string. // GTEST_API_ void CaptureStdout(); GTEST_API_ std::string GetCapturedStdout(); GTEST_API_ void CaptureStderr(); GTEST_API_ std::string GetCapturedStderr(); #endif // GTEST_HAS_STREAM_REDIRECTION #if GTEST_HAS_DEATH_TEST const ::std::vector& GetInjectableArgvs(); void SetInjectableArgvs(const ::std::vector* new_argvs); // A copy of all command line arguments. Set by InitGoogleTest(). extern ::std::vector g_argvs; #endif // GTEST_HAS_DEATH_TEST // Defines synchronization primitives. #if GTEST_HAS_PTHREAD // Sleeps for (roughly) n milli-seconds. This function is only for // testing Google Test's own constructs. Don't use it in user tests, // either directly or indirectly. inline void SleepMilliseconds(int n) { const timespec time = { 0, // 0 seconds. n * 1000L * 1000L, // And n ms. }; nanosleep(&time, NULL); } // Allows a controller thread to pause execution of newly created // threads until notified. Instances of this class must be created // and destroyed in the controller thread. // // This class is only for testing Google Test's own constructs. Do not // use it in user tests, either directly or indirectly. class Notification { public: Notification() : notified_(false) { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL)); } ~Notification() { pthread_mutex_destroy(&mutex_); } // Notifies all threads created with this notification to start. Must // be called from the controller thread. void Notify() { pthread_mutex_lock(&mutex_); notified_ = true; pthread_mutex_unlock(&mutex_); } // Blocks until the controller thread notifies. Must be called from a test // thread. void WaitForNotification() { for (;;) { pthread_mutex_lock(&mutex_); const bool notified = notified_; pthread_mutex_unlock(&mutex_); if (notified) break; SleepMilliseconds(10); } } private: pthread_mutex_t mutex_; bool notified_; GTEST_DISALLOW_COPY_AND_ASSIGN_(Notification); }; // As a C-function, ThreadFuncWithCLinkage cannot be templated itself. // Consequently, it cannot select a correct instantiation of ThreadWithParam // in order to call its Run(). Introducing ThreadWithParamBase as a // non-templated base class for ThreadWithParam allows us to bypass this // problem. class ThreadWithParamBase { public: virtual ~ThreadWithParamBase() {} virtual void Run() = 0; }; // pthread_create() accepts a pointer to a function type with the C linkage. // According to the Standard (7.5/1), function types with different linkages // are different even if they are otherwise identical. Some compilers (for // example, SunStudio) treat them as different types. Since class methods // cannot be defined with C-linkage we need to define a free C-function to // pass into pthread_create(). extern "C" inline void* ThreadFuncWithCLinkage(void* thread) { static_cast(thread)->Run(); return NULL; } // Helper class for testing Google Test's multi-threading constructs. // To use it, write: // // void ThreadFunc(int param) { /* Do things with param */ } // Notification thread_can_start; // ... 
// // The thread_can_start parameter is optional; you can supply NULL. // ThreadWithParam thread(&ThreadFunc, 5, &thread_can_start); // thread_can_start.Notify(); // // These classes are only for testing Google Test's own constructs. Do // not use them in user tests, either directly or indirectly. template class ThreadWithParam : public ThreadWithParamBase { public: typedef void (*UserThreadFunc)(T); ThreadWithParam( UserThreadFunc func, T param, Notification* thread_can_start) : func_(func), param_(param), thread_can_start_(thread_can_start), finished_(false) { ThreadWithParamBase* const base = this; // The thread can be created only after all fields except thread_ // have been initialized. GTEST_CHECK_POSIX_SUCCESS_( pthread_create(&thread_, 0, &ThreadFuncWithCLinkage, base)); } ~ThreadWithParam() { Join(); } void Join() { if (!finished_) { GTEST_CHECK_POSIX_SUCCESS_(pthread_join(thread_, 0)); finished_ = true; } } virtual void Run() { if (thread_can_start_ != NULL) thread_can_start_->WaitForNotification(); func_(param_); } private: const UserThreadFunc func_; // User-supplied thread function. const T param_; // User-supplied parameter to the thread function. // When non-NULL, used to block execution until the controller thread // notifies. Notification* const thread_can_start_; bool finished_; // true iff we know that the thread function has finished. pthread_t thread_; // The native thread object. GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadWithParam); }; // MutexBase and Mutex implement mutex on pthreads-based platforms. They // are used in conjunction with class MutexLock: // // Mutex mutex; // ... // MutexLock lock(&mutex); // Acquires the mutex and releases it at the end // // of the current scope. // // MutexBase implements behavior for both statically and dynamically // allocated mutexes. Do not use MutexBase directly. Instead, write // the following to define a static mutex: // // GTEST_DEFINE_STATIC_MUTEX_(g_some_mutex); // // You can forward declare a static mutex like this: // // GTEST_DECLARE_STATIC_MUTEX_(g_some_mutex); // // To create a dynamic mutex, just define an object of type Mutex. class MutexBase { public: // Acquires this mutex. void Lock() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_lock(&mutex_)); owner_ = pthread_self(); has_owner_ = true; } // Releases this mutex. void Unlock() { // Since the lock is being released the owner_ field should no longer be // considered valid. We don't protect writing to has_owner_ here, as it's // the caller's responsibility to ensure that the current thread holds the // mutex when this is called. has_owner_ = false; GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_unlock(&mutex_)); } // Does nothing if the current thread holds the mutex. Otherwise, crashes // with high probability. void AssertHeld() const { GTEST_CHECK_(has_owner_ && pthread_equal(owner_, pthread_self())) << "The current thread is not holding the mutex @" << this; } // A static mutex may be used before main() is entered. It may even // be used before the dynamic initialization stage. Therefore we // must be able to initialize a static mutex object at link time. // This means MutexBase has to be a POD and its member variables // have to be public. public: pthread_mutex_t mutex_; // The underlying pthread mutex. // has_owner_ indicates whether the owner_ field below contains a valid thread // ID and is therefore safe to inspect (e.g., to use in pthread_equal()). All // accesses to the owner_ field should be protected by a check of this field. 
// An alternative might be to memset() owner_ to all zeros, but there's no // guarantee that a zero'd pthread_t is necessarily invalid or even different // from pthread_self(). bool has_owner_; pthread_t owner_; // The thread holding the mutex. }; // Forward-declares a static mutex. # define GTEST_DECLARE_STATIC_MUTEX_(mutex) \ extern ::testing::internal::MutexBase mutex // Defines and statically (i.e. at link time) initializes a static mutex. // The initialization list here does not explicitly initialize each field, // instead relying on default initialization for the unspecified fields. In // particular, the owner_ field (a pthread_t) is not explicitly initialized. // This allows initialization to work whether pthread_t is a scalar or struct. // The flag -Wmissing-field-initializers must not be specified for this to work. # define GTEST_DEFINE_STATIC_MUTEX_(mutex) \ ::testing::internal::MutexBase mutex = { PTHREAD_MUTEX_INITIALIZER, false } // The Mutex class can only be used for mutexes created at runtime. It // shares its API with MutexBase otherwise. class Mutex : public MutexBase { public: Mutex() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL)); has_owner_ = false; } ~Mutex() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_destroy(&mutex_)); } private: GTEST_DISALLOW_COPY_AND_ASSIGN_(Mutex); }; // We cannot name this class MutexLock as the ctor declaration would // conflict with a macro named MutexLock, which is defined on some // platforms. Hence the typedef trick below. class GTestMutexLock { public: explicit GTestMutexLock(MutexBase* mutex) : mutex_(mutex) { mutex_->Lock(); } ~GTestMutexLock() { mutex_->Unlock(); } private: MutexBase* const mutex_; GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestMutexLock); }; typedef GTestMutexLock MutexLock; // Helpers for ThreadLocal. // pthread_key_create() requires DeleteThreadLocalValue() to have // C-linkage. Therefore it cannot be templatized to access // ThreadLocal. Hence the need for class // ThreadLocalValueHolderBase. class ThreadLocalValueHolderBase { public: virtual ~ThreadLocalValueHolderBase() {} }; // Called by pthread to delete thread-local data stored by // pthread_setspecific(). extern "C" inline void DeleteThreadLocalValue(void* value_holder) { delete static_cast(value_holder); } // Implements thread-local storage on pthreads-based systems. // // // Thread 1 // ThreadLocal tl(100); // 100 is the default value for each thread. // // // Thread 2 // tl.set(150); // Changes the value for thread 2 only. // EXPECT_EQ(150, tl.get()); // // // Thread 1 // EXPECT_EQ(100, tl.get()); // In thread 1, tl has the original value. // tl.set(200); // EXPECT_EQ(200, tl.get()); // // The template type argument T must have a public copy constructor. // In addition, the default ThreadLocal constructor requires T to have // a public default constructor. // // An object managed for a thread by a ThreadLocal instance is deleted // when the thread exits. Or, if the ThreadLocal instance dies in // that thread, when the ThreadLocal dies. It's the user's // responsibility to ensure that all other threads using a ThreadLocal // have exited when it dies, or the per-thread objects for those // threads will not be deleted. // // Google Test only uses global ThreadLocal objects. That means they // will die after main() has returned. Therefore, no per-thread // object managed by Google Test will be leaked as long as all threads // using Google Test have exited when main() returns. 
template class ThreadLocal { public: ThreadLocal() : key_(CreateKey()), default_() {} explicit ThreadLocal(const T& value) : key_(CreateKey()), default_(value) {} ~ThreadLocal() { // Destroys the managed object for the current thread, if any. DeleteThreadLocalValue(pthread_getspecific(key_)); // Releases resources associated with the key. This will *not* // delete managed objects for other threads. GTEST_CHECK_POSIX_SUCCESS_(pthread_key_delete(key_)); } T* pointer() { return GetOrCreateValue(); } const T* pointer() const { return GetOrCreateValue(); } const T& get() const { return *pointer(); } void set(const T& value) { *pointer() = value; } private: // Holds a value of type T. class ValueHolder : public ThreadLocalValueHolderBase { public: explicit ValueHolder(const T& value) : value_(value) {} T* pointer() { return &value_; } private: T value_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ValueHolder); }; static pthread_key_t CreateKey() { pthread_key_t key; // When a thread exits, DeleteThreadLocalValue() will be called on // the object managed for that thread. GTEST_CHECK_POSIX_SUCCESS_( pthread_key_create(&key, &DeleteThreadLocalValue)); return key; } T* GetOrCreateValue() const { ThreadLocalValueHolderBase* const holder = static_cast(pthread_getspecific(key_)); if (holder != NULL) { return CheckedDowncastToActualType(holder)->pointer(); } ValueHolder* const new_holder = new ValueHolder(default_); ThreadLocalValueHolderBase* const holder_base = new_holder; GTEST_CHECK_POSIX_SUCCESS_(pthread_setspecific(key_, holder_base)); return new_holder->pointer(); } // A key pthreads uses for looking up per-thread values. const pthread_key_t key_; const T default_; // The default value for each thread. GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadLocal); }; # define GTEST_IS_THREADSAFE 1 #else // GTEST_HAS_PTHREAD // A dummy implementation of synchronization primitives (mutex, lock, // and thread-local variable). Necessary for compiling Google Test where // mutex is not supported - using Google Test in multiple threads is not // supported on such platforms. class Mutex { public: Mutex() {} void Lock() {} void Unlock() {} void AssertHeld() const {} }; # define GTEST_DECLARE_STATIC_MUTEX_(mutex) \ extern ::testing::internal::Mutex mutex # define GTEST_DEFINE_STATIC_MUTEX_(mutex) ::testing::internal::Mutex mutex class GTestMutexLock { public: explicit GTestMutexLock(Mutex*) {} // NOLINT }; typedef GTestMutexLock MutexLock; template class ThreadLocal { public: ThreadLocal() : value_() {} explicit ThreadLocal(const T& value) : value_(value) {} T* pointer() { return &value_; } const T* pointer() const { return &value_; } const T& get() const { return value_; } void set(const T& value) { value_ = value; } private: T value_; }; // The above synchronization primitives have dummy implementations. // Therefore Google Test is not thread-safe. # define GTEST_IS_THREADSAFE 0 #endif // GTEST_HAS_PTHREAD // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. GTEST_API_ size_t GetThreadCount(); // Passing non-POD classes through ellipsis (...) crashes the ARM // compiler and generates a warning in Sun Studio. The Nokia Symbian // and the IBM XL C/C++ compiler try to instantiate a copy constructor // for objects passed through ellipsis (...), failing for uncopyable // objects. We define this to ensure that only POD is passed through // ellipsis on these systems. 
#if defined(__SYMBIAN32__) || defined(__IBMCPP__) || defined(__SUNPRO_CC) // We lose support for NULL detection where the compiler doesn't like // passing non-POD classes through ellipsis (...). # define GTEST_ELLIPSIS_NEEDS_POD_ 1 #else # define GTEST_CAN_COMPARE_NULL 1 #endif // The Nokia Symbian and IBM XL C/C++ compilers cannot decide between // const T& and const T* in a function template. These compilers // _can_ decide between class template specializations for T and T*, // so a tr1::type_traits-like is_pointer works. #if defined(__SYMBIAN32__) || defined(__IBMCPP__) # define GTEST_NEEDS_IS_POINTER_ 1 #endif template struct bool_constant { typedef bool_constant type; static const bool value = bool_value; }; template const bool bool_constant::value; typedef bool_constant false_type; typedef bool_constant true_type; template struct is_pointer : public false_type {}; template struct is_pointer : public true_type {}; template struct IteratorTraits { typedef typename Iterator::value_type value_type; }; template struct IteratorTraits { typedef T value_type; }; template struct IteratorTraits { typedef T value_type; }; #if GTEST_OS_WINDOWS # define GTEST_PATH_SEP_ "\\" # define GTEST_HAS_ALT_PATH_SEP_ 1 // The biggest signed integer type the compiler supports. typedef __int64 BiggestInt; #else # define GTEST_PATH_SEP_ "/" # define GTEST_HAS_ALT_PATH_SEP_ 0 typedef long long BiggestInt; // NOLINT #endif // GTEST_OS_WINDOWS // Utilities for char. // isspace(int ch) and friends accept an unsigned char or EOF. char // may be signed, depending on the compiler (or compiler flags). // Therefore we need to cast a char to unsigned char before calling // isspace(), etc. inline bool IsAlpha(char ch) { return isalpha(static_cast(ch)) != 0; } inline bool IsAlNum(char ch) { return isalnum(static_cast(ch)) != 0; } inline bool IsDigit(char ch) { return isdigit(static_cast(ch)) != 0; } inline bool IsLower(char ch) { return islower(static_cast(ch)) != 0; } inline bool IsSpace(char ch) { return isspace(static_cast(ch)) != 0; } inline bool IsUpper(char ch) { return isupper(static_cast(ch)) != 0; } inline bool IsXDigit(char ch) { return isxdigit(static_cast(ch)) != 0; } inline bool IsXDigit(wchar_t ch) { const unsigned char low_byte = static_cast(ch); return ch == low_byte && isxdigit(low_byte) != 0; } inline char ToLower(char ch) { return static_cast(tolower(static_cast(ch))); } inline char ToUpper(char ch) { return static_cast(toupper(static_cast(ch))); } // The testing::internal::posix namespace holds wrappers for common // POSIX functions. These wrappers hide the differences between // Windows/MSVC and POSIX systems. Since some compilers define these // standard functions as macros, the wrapper cannot have the same name // as the wrapped function. namespace posix { // Functions with a different name on Windows. 
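// For example, callers elsewhere in Google Test can simply write
//
//   posix::StrCaseCmp(lhs, rhs);
//   posix::FileNo(file);
//
// and the per-platform mapping (stricmp/_stricmp/strcasecmp,
// _fileno/fileno, and so on) is resolved once, below.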
#if GTEST_OS_WINDOWS typedef struct _stat StatStruct; # ifdef __BORLANDC__ inline int IsATTY(int fd) { return isatty(fd); } inline int StrCaseCmp(const char* s1, const char* s2) { return stricmp(s1, s2); } inline char* StrDup(const char* src) { return strdup(src); } # else // !__BORLANDC__ # if GTEST_OS_WINDOWS_MOBILE inline int IsATTY(int /* fd */) { return 0; } # else inline int IsATTY(int fd) { return _isatty(fd); } # endif // GTEST_OS_WINDOWS_MOBILE inline int StrCaseCmp(const char* s1, const char* s2) { return _stricmp(s1, s2); } inline char* StrDup(const char* src) { return _strdup(src); } # endif // __BORLANDC__ # if GTEST_OS_WINDOWS_MOBILE inline int FileNo(FILE* file) { return reinterpret_cast(_fileno(file)); } // Stat(), RmDir(), and IsDir() are not needed on Windows CE at this // time and thus not defined there. # else inline int FileNo(FILE* file) { return _fileno(file); } inline int Stat(const char* path, StatStruct* buf) { return _stat(path, buf); } inline int RmDir(const char* dir) { return _rmdir(dir); } inline bool IsDir(const StatStruct& st) { return (_S_IFDIR & st.st_mode) != 0; } # endif // GTEST_OS_WINDOWS_MOBILE #else typedef struct stat StatStruct; inline int FileNo(FILE* file) { return fileno(file); } inline int IsATTY(int fd) { return isatty(fd); } inline int Stat(const char* path, StatStruct* buf) { return stat(path, buf); } inline int StrCaseCmp(const char* s1, const char* s2) { return strcasecmp(s1, s2); } inline char* StrDup(const char* src) { return strdup(src); } inline int RmDir(const char* dir) { return rmdir(dir); } inline bool IsDir(const StatStruct& st) { return S_ISDIR(st.st_mode); } #endif // GTEST_OS_WINDOWS // Functions deprecated by MSVC 8.0. #ifdef _MSC_VER // Temporarily disable warning 4996 (deprecated function). # pragma warning(push) # pragma warning(disable:4996) #endif inline const char* StrNCpy(char* dest, const char* src, size_t n) { return strncpy(dest, src, n); } // ChDir(), FReopen(), FDOpen(), Read(), Write(), Close(), and // StrError() aren't needed on Windows CE at this time and thus not // defined there. #if !GTEST_OS_WINDOWS_MOBILE inline int ChDir(const char* dir) { return chdir(dir); } #endif inline FILE* FOpen(const char* path, const char* mode) { return fopen(path, mode); } #if !GTEST_OS_WINDOWS_MOBILE inline FILE *FReopen(const char* path, const char* mode, FILE* stream) { return freopen(path, mode, stream); } inline FILE* FDOpen(int fd, const char* mode) { return fdopen(fd, mode); } #endif inline int FClose(FILE* fp) { return fclose(fp); } #if !GTEST_OS_WINDOWS_MOBILE inline int Read(int fd, void* buf, unsigned int count) { return static_cast(read(fd, buf, count)); } inline int Write(int fd, const void* buf, unsigned int count) { return static_cast(write(fd, buf, count)); } inline int Close(int fd) { return close(fd); } inline const char* StrError(int errnum) { return strerror(errnum); } #endif inline const char* GetEnv(const char* name) { #if GTEST_OS_WINDOWS_MOBILE // We are on Windows CE, which has no environment variables. return NULL; #elif defined(__BORLANDC__) || defined(__SunOS_5_8) || defined(__SunOS_5_9) // Environment variables which we programmatically clear will be set to the // empty string rather than unset (NULL). Handle that case. const char* const env = getenv(name); return (env != NULL && env[0] != '\0') ? env : NULL; #else return getenv(name); #endif } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif #if GTEST_OS_WINDOWS_MOBILE // Windows CE has no C library. 
The abort() function is used in // several places in Google Test. This implementation provides a reasonable // imitation of standard behaviour. void Abort(); #else inline void Abort() { abort(); } #endif // GTEST_OS_WINDOWS_MOBILE } // namespace posix // MSVC "deprecates" snprintf and issues warnings wherever it is used. In // order to avoid these warnings, we need to use _snprintf or _snprintf_s on // MSVC-based platforms. We map the GTEST_SNPRINTF_ macro to the appropriate // function in order to achieve that. We use macro definition here because // snprintf is a variadic function. #if _MSC_VER >= 1400 && !GTEST_OS_WINDOWS_MOBILE // MSVC 2005 and above support variadic macros. # define GTEST_SNPRINTF_(buffer, size, format, ...) \ _snprintf_s(buffer, size, size, format, __VA_ARGS__) #elif defined(_MSC_VER) // Windows CE does not define _snprintf_s and MSVC prior to 2005 doesn't // complain about _snprintf. # define GTEST_SNPRINTF_ _snprintf #else # define GTEST_SNPRINTF_ snprintf #endif // The maximum number a BiggestInt can represent. This definition // works no matter BiggestInt is represented in one's complement or // two's complement. // // We cannot rely on numeric_limits in STL, as __int64 and long long // are not part of standard C++ and numeric_limits doesn't need to be // defined for them. const BiggestInt kMaxBiggestInt = ~(static_cast(1) << (8*sizeof(BiggestInt) - 1)); // This template class serves as a compile-time function from size to // type. It maps a size in bytes to a primitive type with that // size. e.g. // // TypeWithSize<4>::UInt // // is typedef-ed to be unsigned int (unsigned integer made up of 4 // bytes). // // Such functionality should belong to STL, but I cannot find it // there. // // Google Test uses this class in the implementation of floating-point // comparison. // // For now it only handles UInt (unsigned int) as that's all Google Test // needs. Other types can be easily added in the future if need // arises. template class TypeWithSize { public: // This prevents the user from using TypeWithSize with incorrect // values of N. typedef void UInt; }; // The specialization for size 4. template <> class TypeWithSize<4> { public: // unsigned int has size 4 in both gcc and MSVC. // // As base/basictypes.h doesn't compile on Windows, we cannot use // uint32, uint64, and etc here. typedef int Int; typedef unsigned int UInt; }; // The specialization for size 8. template <> class TypeWithSize<8> { public: #if GTEST_OS_WINDOWS typedef __int64 Int; typedef unsigned __int64 UInt; #else typedef long long Int; // NOLINT typedef unsigned long long UInt; // NOLINT #endif // GTEST_OS_WINDOWS }; // Integer types of known sizes. typedef TypeWithSize<4>::Int Int32; typedef TypeWithSize<4>::UInt UInt32; typedef TypeWithSize<8>::Int Int64; typedef TypeWithSize<8>::UInt UInt64; typedef TypeWithSize<8>::Int TimeInMillis; // Represents time in milliseconds. // Utilities for command line flags and environment variables. // Macro for referencing flags. #define GTEST_FLAG(name) FLAGS_gtest_##name // Macros for declaring flags. #define GTEST_DECLARE_bool_(name) GTEST_API_ extern bool GTEST_FLAG(name) #define GTEST_DECLARE_int32_(name) \ GTEST_API_ extern ::testing::internal::Int32 GTEST_FLAG(name) #define GTEST_DECLARE_string_(name) \ GTEST_API_ extern ::std::string GTEST_FLAG(name) // Macros for defining flags. 
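// A definition site (normally gtest.cc) pairs each declaration above with the
// matching GTEST_DEFINE_ macro; for a hypothetical flag:
//
//   GTEST_DEFINE_bool_(my_flag, false, "Enables extra checking.");
//
// which defines bool FLAGS_gtest_my_flag = false.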
#define GTEST_DEFINE_bool_(name, default_val, doc) \ GTEST_API_ bool GTEST_FLAG(name) = (default_val) #define GTEST_DEFINE_int32_(name, default_val, doc) \ GTEST_API_ ::testing::internal::Int32 GTEST_FLAG(name) = (default_val) #define GTEST_DEFINE_string_(name, default_val, doc) \ GTEST_API_ ::std::string GTEST_FLAG(name) = (default_val) // Thread annotations #define GTEST_EXCLUSIVE_LOCK_REQUIRED_(locks) #define GTEST_LOCK_EXCLUDED_(locks) // Parses 'str' for a 32-bit signed integer. If successful, writes the result // to *value and returns true; otherwise leaves *value unchanged and returns // false. // TODO(chandlerc): Find a better way to refactor flag and environment parsing // out of both gtest-port.cc and gtest.cc to avoid exporting this utility // function. bool ParseInt32(const Message& src_text, const char* str, Int32* value); // Parses a bool/Int32/string from the environment variable // corresponding to the given Google Test flag. bool BoolFromGTestEnv(const char* flag, bool default_val); GTEST_API_ Int32 Int32FromGTestEnv(const char* flag, Int32 default_val); const char* StringFromGTestEnv(const char* flag, const char* default_val); } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ #if GTEST_OS_LINUX # include # include # include # include #endif // GTEST_OS_LINUX #if GTEST_HAS_EXCEPTIONS # include #endif #include #include #include #include #include #include // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the Message class. // // IMPORTANT NOTE: Due to limitation of the C++ language, we have to // leave some internal implementation details in this header file. // They are clearly marked by comments like this: // // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // // Such code is NOT meant to be used by a user directly, and is subject // to CHANGE WITHOUT NOTICE. Therefore DO NOT DEPEND ON IT in a user // program! 
#ifndef GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ #define GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ #include // Ensures that there is at least one operator<< in the global namespace. // See Message& operator<<(...) below for why. void operator<<(const testing::internal::Secret&, int); namespace testing { // The Message class works like an ostream repeater. // // Typical usage: // // 1. You stream a bunch of values to a Message object. // It will remember the text in a stringstream. // 2. Then you stream the Message object to an ostream. // This causes the text in the Message to be streamed // to the ostream. // // For example; // // testing::Message foo; // foo << 1 << " != " << 2; // std::cout << foo; // // will print "1 != 2". // // Message is not intended to be inherited from. In particular, its // destructor is not virtual. // // Note that stringstream behaves differently in gcc and in MSVC. You // can stream a NULL char pointer to it in the former, but not in the // latter (it causes an access violation if you do). The Message // class hides this difference by treating a NULL char pointer as // "(null)". class GTEST_API_ Message { private: // The type of basic IO manipulators (endl, ends, and flush) for // narrow streams. typedef std::ostream& (*BasicNarrowIoManip)(std::ostream&); public: // Constructs an empty Message. Message(); // Copy constructor. Message(const Message& msg) : ss_(new ::std::stringstream) { // NOLINT *ss_ << msg.GetString(); } // Constructs a Message from a C-string. explicit Message(const char* str) : ss_(new ::std::stringstream) { *ss_ << str; } #if GTEST_OS_SYMBIAN // Streams a value (either a pointer or not) to this object. template inline Message& operator <<(const T& value) { StreamHelper(typename internal::is_pointer::type(), value); return *this; } #else // Streams a non-pointer value to this object. template inline Message& operator <<(const T& val) { // Some libraries overload << for STL containers. These // overloads are defined in the global namespace instead of ::std. // // C++'s symbol lookup rule (i.e. Koenig lookup) says that these // overloads are visible in either the std namespace or the global // namespace, but not other namespaces, including the testing // namespace which Google Test's Message class is in. // // To allow STL containers (and other types that has a << operator // defined in the global namespace) to be used in Google Test // assertions, testing::Message must access the custom << operator // from the global namespace. With this using declaration, // overloads of << defined in the global namespace and those // visible via Koenig lookup are both exposed in this function. using ::operator <<; *ss_ << val; return *this; } // Streams a pointer value to this object. // // This function is an overload of the previous one. When you // stream a pointer to a Message, this definition will be used as it // is more specialized. (The C++ Standard, section // [temp.func.order].) If you stream a non-pointer, then the // previous definition will be used. // // The reason for this overload is that streaming a NULL pointer to // ostream is undefined behavior. Depending on the compiler, you // may get "0", "(nil)", "(null)", or an access violation. To // ensure consistent result across compilers, we always treat NULL // as "(null)". 
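// For instance (illustrative only):
//
//   int* p = NULL;
//   testing::Message msg;
//   msg << "p = " << p;   // the text becomes "p = (null)" on every compiler
//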
template inline Message& operator <<(T* const& pointer) { // NOLINT if (pointer == NULL) { *ss_ << "(null)"; } else { *ss_ << pointer; } return *this; } #endif // GTEST_OS_SYMBIAN // Since the basic IO manipulators are overloaded for both narrow // and wide streams, we have to provide this specialized definition // of operator <<, even though its body is the same as the // templatized version above. Without this definition, streaming // endl or other basic IO manipulators to Message will confuse the // compiler. Message& operator <<(BasicNarrowIoManip val) { *ss_ << val; return *this; } // Instead of 1/0, we want to see true/false for bool values. Message& operator <<(bool b) { return *this << (b ? "true" : "false"); } // These two overloads allow streaming a wide C string to a Message // using the UTF-8 encoding. Message& operator <<(const wchar_t* wide_c_str); Message& operator <<(wchar_t* wide_c_str); #if GTEST_HAS_STD_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& operator <<(const ::std::wstring& wstr); #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_GLOBAL_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& operator <<(const ::wstring& wstr); #endif // GTEST_HAS_GLOBAL_WSTRING // Gets the text streamed to this object so far as an std::string. // Each '\0' character in the buffer is replaced with "\\0". // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. std::string GetString() const; private: #if GTEST_OS_SYMBIAN // These are needed as the Nokia Symbian Compiler cannot decide between // const T& and const T* in a function template. The Nokia compiler _can_ // decide between class template specializations for T and T*, so a // tr1::type_traits-like is_pointer works, and we can overload on that. template inline void StreamHelper(internal::true_type /*is_pointer*/, T* pointer) { if (pointer == NULL) { *ss_ << "(null)"; } else { *ss_ << pointer; } } template inline void StreamHelper(internal::false_type /*is_pointer*/, const T& value) { // See the comments in Message& operator <<(const T&) above for why // we need this using statement. using ::operator <<; *ss_ << value; } #endif // GTEST_OS_SYMBIAN // We'll hold the text streamed to this object here. const internal::scoped_ptr< ::std::stringstream> ss_; // We declare (but don't implement) this to prevent the compiler // from implementing the assignment operator. void operator=(const Message&); }; // Streams a Message to an ostream. inline std::ostream& operator <<(std::ostream& os, const Message& sb) { return os << sb.GetString(); } namespace internal { // Converts a streamable value to an std::string. A NULL pointer is // converted to "(null)". When the input value is a ::string, // ::std::string, ::wstring, or ::std::wstring object, each NUL // character in it is replaced with "\\0". template std::string StreamableToString(const T& streamable) { return (Message() << streamable).GetString(); } } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file declares the String class and functions used internally by // Google Test. They are subject to change without notice. They should not used // by code external to Google Test. // // This header file is #included by . // It should not be #included by other files. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ #ifdef __BORLANDC__ // string.h is not guaranteed to provide strcpy on C++ Builder. # include #endif #include #include namespace testing { namespace internal { // String - an abstract class holding static string utilities. class GTEST_API_ String { public: // Static utility methods // Clones a 0-terminated C string, allocating memory using new. The // caller is responsible for deleting the return value using // delete[]. Returns the cloned string, or NULL if the input is // NULL. // // This is different from strdup() in string.h, which allocates // memory using malloc(). static const char* CloneCString(const char* c_str); #if GTEST_OS_WINDOWS_MOBILE // Windows CE does not have the 'ANSI' versions of Win32 APIs. To be // able to pass strings to Win32 APIs on CE we need to convert them // to 'Unicode', UTF-16. // Creates a UTF-16 wide string from the given ANSI string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the wide string, or NULL if the // input is NULL. // // The wide string is created using the ANSI codepage (CP_ACP) to // match the behaviour of the ANSI versions of Win32 calls and the // C runtime. static LPCWSTR AnsiToUtf16(const char* c_str); // Creates an ANSI string from the given wide string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the ANSI string, or NULL if the // input is NULL. // // The returned string is created using the ANSI codepage (CP_ACP) to // match the behaviour of the ANSI versions of Win32 calls and the // C runtime. static const char* Utf16ToAnsi(LPCWSTR utf16_str); #endif // Compares two C strings. Returns true iff they have the same content. // // Unlike strcmp(), this function can handle NULL argument(s). 
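// For example (illustrative):
//
//   String::CStringEquals("abc", "abc");  // true
//   String::CStringEquals(NULL, NULL);    // true
//   String::CStringEquals(NULL, "");      // false
//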
A // NULL C string is considered different to any non-NULL C string, // including the empty string. static bool CStringEquals(const char* lhs, const char* rhs); // Converts a wide C string to a String using the UTF-8 encoding. // NULL will be converted to "(null)". If an error occurred during // the conversion, "(failed to convert from wide string)" is // returned. static std::string ShowWideCString(const wchar_t* wide_c_str); // Compares two wide C strings. Returns true iff they have the same // content. // // Unlike wcscmp(), this function can handle NULL argument(s). A // NULL C string is considered different to any non-NULL C string, // including the empty string. static bool WideCStringEquals(const wchar_t* lhs, const wchar_t* rhs); // Compares two C strings, ignoring case. Returns true iff they // have the same content. // // Unlike strcasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL C string, // including the empty string. static bool CaseInsensitiveCStringEquals(const char* lhs, const char* rhs); // Compares two wide C strings, ignoring case. Returns true iff they // have the same content. // // Unlike wcscasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL wide C string, // including the empty string. // NB: The implementations on different platforms slightly differ. // On windows, this method uses _wcsicmp which compares according to LC_CTYPE // environment variable. On GNU platform this method uses wcscasecmp // which compares according to LC_CTYPE category of the current locale. // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the // current locale. static bool CaseInsensitiveWideCStringEquals(const wchar_t* lhs, const wchar_t* rhs); // Returns true iff the given string ends with the given suffix, ignoring // case. Any string is considered to end with an empty suffix. static bool EndsWithCaseInsensitive( const std::string& str, const std::string& suffix); // Formats an int value as "%02d". static std::string FormatIntWidth2(int value); // "%02d" for width == 2 // Formats an int value as "%X". static std::string FormatHexInt(int value); // Formats a byte as "%02X". static std::string FormatByte(unsigned char value); private: String(); // Not meant to be instantiated. }; // class String // Gets the content of the stringstream's buffer as an std::string. Each '\0' // character in the buffer is replaced with "\\0". GTEST_API_ std::string StringStreamToString(::std::stringstream* stream); } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: keith.ray@gmail.com (Keith Ray) // // Google Test filepath utilities // // This header file declares classes and functions used internally by // Google Test. They are subject to change without notice. // // This file is #included in . // Do not include this header file separately! #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ namespace testing { namespace internal { // FilePath - a class for file and directory pathname manipulation which // handles platform-specific conventions (like the pathname separator). // Used for helper functions for naming files in a directory for xml output. // Except for Set methods, all methods are const or static, which provides an // "immutable value object" -- useful for peace of mind. // A FilePath with a value ending in a path separator ("like/this/") represents // a directory, otherwise it is assumed to represent a file. In either case, // it may or may not represent an actual file or directory in the file system. // Names are NOT checked for syntax correctness -- no checking for illegal // characters, malformed paths, etc. class GTEST_API_ FilePath { public: FilePath() : pathname_("") { } FilePath(const FilePath& rhs) : pathname_(rhs.pathname_) { } explicit FilePath(const std::string& pathname) : pathname_(pathname) { Normalize(); } FilePath& operator=(const FilePath& rhs) { Set(rhs); return *this; } void Set(const FilePath& rhs) { pathname_ = rhs.pathname_; } const std::string& string() const { return pathname_; } const char* c_str() const { return pathname_.c_str(); } // Returns the current working directory, or "" if unsuccessful. static FilePath GetCurrentDir(); // Given directory = "dir", base_name = "test", number = 0, // extension = "xml", returns "dir/test.xml". If number is greater // than zero (e.g., 12), returns "dir/test_12.xml". // On Windows platform, uses \ as the separator rather than /. static FilePath MakeFileName(const FilePath& directory, const FilePath& base_name, int number, const char* extension); // Given directory = "dir", relative_path = "test.xml", // returns "dir/test.xml". // On Windows, uses \ as the separator rather than /. static FilePath ConcatPaths(const FilePath& directory, const FilePath& relative_path); // Returns a pathname for a file that does not currently exist. The pathname // will be directory/base_name.extension or // directory/base_name_.extension if directory/base_name.extension // already exists. The number will be incremented until a pathname is found // that does not already exist. // Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'. 
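// In code, that looks roughly like (illustrative values):
//
//   FilePath unique = FilePath::GenerateUniqueFileName(
//       FilePath("dir/"), FilePath("foo_test"), "xml");
//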
// There could be a race condition if two or more processes are calling this // function at the same time -- they could both pick the same filename. static FilePath GenerateUniqueFileName(const FilePath& directory, const FilePath& base_name, const char* extension); // Returns true iff the path is "". bool IsEmpty() const { return pathname_.empty(); } // If input name has a trailing separator character, removes it and returns // the name, otherwise return the name string unmodified. // On Windows platform, uses \ as the separator, other platforms use /. FilePath RemoveTrailingPathSeparator() const; // Returns a copy of the FilePath with the directory part removed. // Example: FilePath("path/to/file").RemoveDirectoryName() returns // FilePath("file"). If there is no directory part ("just_a_file"), it returns // the FilePath unmodified. If there is no file part ("just_a_dir/") it // returns an empty FilePath (""). // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath RemoveDirectoryName() const; // RemoveFileName returns the directory path with the filename removed. // Example: FilePath("path/to/file").RemoveFileName() returns "path/to/". // If the FilePath is "a_file" or "/a_file", RemoveFileName returns // FilePath("./") or, on Windows, FilePath(".\\"). If the filepath does // not have a file, like "just/a/dir/", it returns the FilePath unmodified. // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath RemoveFileName() const; // Returns a copy of the FilePath with the case-insensitive extension removed. // Example: FilePath("dir/file.exe").RemoveExtension("EXE") returns // FilePath("dir/file"). If a case-insensitive extension is not // found, returns a copy of the original FilePath. FilePath RemoveExtension(const char* extension) const; // Creates directories so that path exists. Returns true if successful or if // the directories already exist; returns false if unable to create // directories for any reason. Will also return false if the FilePath does // not represent a directory (that is, it doesn't end with a path separator). bool CreateDirectoriesRecursively() const; // Create the directory so that path exists. Returns true if successful or // if the directory already exists; returns false if unable to create the // directory for any reason, including if the parent directory does not // exist. Not named "CreateDirectory" because that's a macro on Windows. bool CreateFolder() const; // Returns true if FilePath describes something in the file-system, // either a file, directory, or whatever, and that something exists. bool FileOrDirectoryExists() const; // Returns true if pathname describes a directory in the file-system // that exists. bool DirectoryExists() const; // Returns true if FilePath ends with a path separator, which indicates that // it is intended to represent a directory. Returns false otherwise. // This does NOT check that a directory (or file) actually exists. bool IsDirectory() const; // Returns true if pathname describes a root directory. (Windows has one // root directory per disk drive.) bool IsRootDirectory() const; // Returns true if pathname describes an absolute path. bool IsAbsolutePath() const; private: // Replaces multiple consecutive separators with a single separator. // For example, "bar///foo" becomes "bar/foo". Does not eliminate other // redundancies that might be in a pathname involving "." or "..". 
// // A pathname with multiple consecutive separators may occur either through // user error or as a result of some scripts or APIs that generate a pathname // with a trailing separator. On other platforms the same API or script // may NOT generate a pathname with a trailing "/". Then elsewhere that // pathname may have another "/" and pathname components added to it, // without checking for the separator already being there. // The script language and operating system may allow paths like "foo//bar" // but some of the functions in FilePath will not handle that correctly. In // particular, RemoveTrailingPathSeparator() only removes one separator, and // it is called in CreateDirectoriesRecursively() assuming that it will change // a pathname from directory syntax (trailing separator) to filename syntax. // // On Windows this method also replaces the alternate path separator '/' with // the primary path separator '\\', so that for example "bar\\/\\foo" becomes // "bar\\foo". void Normalize(); // Returns a pointer to the last occurence of a valid path separator in // the FilePath. On Windows, for example, both '/' and '\' are valid path // separators. Returns NULL if no path separator was found. const char* FindLastPathSeparator() const; std::string pathname_; }; // class FilePath } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ // This file was GENERATED by command: // pump.py gtest-type-util.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Type utilities needed for implementing typed and type-parameterized // tests. This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently we support at most 50 types in a list, and at most 50 // type-parameterized tests in one type-parameterized test case. // Please contact googletestframework@googlegroups.com if you need // more. 
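// Illustrative sketch, not part of Google Test: a minimal head/tail type list
// in the style this header implements below, shown at length 2 instead of 50.
// All Demo* names are hypothetical and exist only for this example.
struct DemoTypes0 {};
template <typename T1>
struct DemoTypes1 { typedef T1 Head; typedef DemoTypes0 Tail; };
template <typename T1, typename T2>
struct DemoTypes2 { typedef T1 Head; typedef DemoTypes1<T2> Tail; };

// A compile-time length computation that walks the Tail chain.
template <typename List>
struct DemoLength {
  static const int value = 1 + DemoLength<typename List::Tail>::value;
};
template <>
struct DemoLength<DemoTypes0> { static const int value = 0; };
// For example, DemoLength<DemoTypes2<int, char> >::value is 2.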
#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ // #ifdef __GNUC__ is too general here. It is possible to use gcc without using // libstdc++ (which is where cxxabi.h comes from). # if GTEST_HAS_CXXABI_H_ # include # elif defined(__HP_aCC) # include # endif // GTEST_HASH_CXXABI_H_ namespace testing { namespace internal { // GetTypeName() returns a human-readable name of type T. // NB: This function is also used in Google Mock, so don't move it inside of // the typed-test-only section below. template std::string GetTypeName() { # if GTEST_HAS_RTTI const char* const name = typeid(T).name(); # if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC) int status = 0; // gcc's implementation of typeid(T).name() mangles the type name, // so we have to demangle it. # if GTEST_HAS_CXXABI_H_ using abi::__cxa_demangle; # endif // GTEST_HAS_CXXABI_H_ char* const readable_name = __cxa_demangle(name, 0, 0, &status); const std::string name_str(status == 0 ? readable_name : name); free(readable_name); return name_str; # else return name; # endif // GTEST_HAS_CXXABI_H_ || __HP_aCC # else return ""; # endif // GTEST_HAS_RTTI } #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // AssertyTypeEq::type is defined iff T1 and T2 are the same // type. This can be used as a compile-time assertion to ensure that // two types are equal. template struct AssertTypeEq; template struct AssertTypeEq { typedef bool type; }; // A unique type used as the default value for the arguments of class // template Types. This allows us to simulate variadic templates // (e.g. Types, Type, and etc), which C++ doesn't // support directly. struct None {}; // The following family of struct and struct templates are used to // represent type lists. In particular, TypesN // represents a type list with N types (T1, T2, ..., and TN) in it. // Except for Types0, every struct in the family has two member types: // Head for the first type in the list, and Tail for the rest of the // list. // The empty type list. struct Types0 {}; // Type lists of length 1, 2, 3, and so on. 
template struct Types1 { typedef T1 Head; typedef Types0 Tail; }; template struct Types2 { typedef T1 Head; typedef Types1 Tail; }; template struct Types3 { typedef T1 Head; typedef Types2 Tail; }; template struct Types4 { typedef T1 Head; typedef Types3 Tail; }; template struct Types5 { typedef T1 Head; typedef Types4 Tail; }; template struct Types6 { typedef T1 Head; typedef Types5 Tail; }; template struct Types7 { typedef T1 Head; typedef Types6 Tail; }; template struct Types8 { typedef T1 Head; typedef Types7 Tail; }; template struct Types9 { typedef T1 Head; typedef Types8 Tail; }; template struct Types10 { typedef T1 Head; typedef Types9 Tail; }; template struct Types11 { typedef T1 Head; typedef Types10 Tail; }; template struct Types12 { typedef T1 Head; typedef Types11 Tail; }; template struct Types13 { typedef T1 Head; typedef Types12 Tail; }; template struct Types14 { typedef T1 Head; typedef Types13 Tail; }; template struct Types15 { typedef T1 Head; typedef Types14 Tail; }; template struct Types16 { typedef T1 Head; typedef Types15 Tail; }; template struct Types17 { typedef T1 Head; typedef Types16 Tail; }; template struct Types18 { typedef T1 Head; typedef Types17 Tail; }; template struct Types19 { typedef T1 Head; typedef Types18 Tail; }; template struct Types20 { typedef T1 Head; typedef Types19 Tail; }; template struct Types21 { typedef T1 Head; typedef Types20 Tail; }; template struct Types22 { typedef T1 Head; typedef Types21 Tail; }; template struct Types23 { typedef T1 Head; typedef Types22 Tail; }; template struct Types24 { typedef T1 Head; typedef Types23 Tail; }; template struct Types25 { typedef T1 Head; typedef Types24 Tail; }; template struct Types26 { typedef T1 Head; typedef Types25 Tail; }; template struct Types27 { typedef T1 Head; typedef Types26 Tail; }; template struct Types28 { typedef T1 Head; typedef Types27 Tail; }; template struct Types29 { typedef T1 Head; typedef Types28 Tail; }; template struct Types30 { typedef T1 Head; typedef Types29 Tail; }; template struct Types31 { typedef T1 Head; typedef Types30 Tail; }; template struct Types32 { typedef T1 Head; typedef Types31 Tail; }; template struct Types33 { typedef T1 Head; typedef Types32 Tail; }; template struct Types34 { typedef T1 Head; typedef Types33 Tail; }; template struct Types35 { typedef T1 Head; typedef Types34 Tail; }; template struct Types36 { typedef T1 Head; typedef Types35 Tail; }; template struct Types37 { typedef T1 Head; typedef Types36 Tail; }; template struct Types38 { typedef T1 Head; typedef Types37 Tail; }; template struct Types39 { typedef T1 Head; typedef Types38 Tail; }; template struct Types40 { typedef T1 Head; typedef Types39 Tail; }; template struct Types41 { typedef T1 Head; typedef Types40 Tail; }; template struct Types42 { typedef T1 Head; typedef Types41 Tail; }; template struct Types43 { typedef T1 Head; typedef Types42 Tail; }; template struct Types44 { typedef T1 Head; typedef Types43 Tail; }; template struct Types45 { typedef T1 Head; typedef Types44 Tail; }; template struct Types46 { typedef T1 Head; typedef Types45 Tail; }; template struct Types47 { typedef T1 Head; typedef Types46 Tail; }; template struct Types48 { typedef T1 Head; typedef Types47 Tail; }; template struct Types49 { typedef T1 Head; typedef Types48 Tail; }; template struct Types50 { typedef T1 Head; typedef Types49 Tail; }; } // namespace internal // We don't want to require the users to write TypesN<...> directly, // as that would require them to count the length. 
Types<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Types // will appear as Types in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Types, and Google Test will translate // that to TypesN internally to make error messages // readable. The translation is done by the 'type' member of the // Types template. template struct Types { typedef internal::Types50 type; }; template <> struct Types { typedef internal::Types0 type; }; template struct Types { typedef internal::Types1 type; }; template struct Types { typedef internal::Types2 type; }; template struct Types { typedef internal::Types3 type; }; template struct Types { typedef internal::Types4 type; }; template struct Types { typedef internal::Types5 type; }; template struct Types { typedef internal::Types6 type; }; template struct Types { typedef internal::Types7 type; }; template struct Types { typedef internal::Types8 type; }; template struct Types { typedef internal::Types9 type; }; template struct Types { typedef internal::Types10 type; }; template struct Types { typedef internal::Types11 type; }; template struct Types { typedef internal::Types12 type; }; template struct Types { typedef internal::Types13 type; }; template struct Types { typedef internal::Types14 type; }; template struct Types { typedef internal::Types15 type; }; template struct Types { typedef internal::Types16 type; }; template struct Types { typedef internal::Types17 type; }; template struct Types { typedef internal::Types18 type; }; template struct Types { typedef internal::Types19 type; }; template struct Types { typedef internal::Types20 type; }; template struct Types { typedef internal::Types21 type; }; template struct Types { typedef internal::Types22 type; }; template struct Types { typedef internal::Types23 type; }; template struct Types { typedef internal::Types24 type; }; template struct Types { typedef internal::Types25 type; }; template struct Types { typedef internal::Types26 type; }; template struct Types { typedef internal::Types27 type; }; template struct Types { typedef internal::Types28 type; }; template struct Types { typedef internal::Types29 type; }; template struct Types { typedef internal::Types30 type; }; template struct Types { typedef internal::Types31 type; }; template struct Types { typedef internal::Types32 type; }; template struct Types { typedef internal::Types33 type; }; template struct Types { typedef internal::Types34 type; }; template struct Types { typedef internal::Types35 type; }; template struct Types { typedef internal::Types36 type; }; template struct Types { typedef internal::Types37 type; }; template struct Types { typedef internal::Types38 type; }; template struct Types { typedef internal::Types39 type; }; template struct Types { typedef internal::Types40 type; }; template struct Types { typedef internal::Types41 type; }; template struct Types { typedef internal::Types42 type; }; template struct Types { typedef internal::Types43 type; }; template struct Types { typedef internal::Types44 type; }; template struct Types { typedef internal::Types45 type; }; template struct Types { typedef internal::Types46 type; }; template struct Types { typedef internal::Types47 type; }; template struct Types { typedef internal::Types48 type; }; template struct Types { typedef internal::Types49 type; }; namespace internal 
{ # define GTEST_TEMPLATE_ template class // The template "selector" struct TemplateSel is used to // represent Tmpl, which must be a class template with one type // parameter, as a type. TemplateSel::Bind::type is defined // as the type Tmpl. This allows us to actually instantiate the // template "selected" by TemplateSel. // // This trick is necessary for simulating typedef for class templates, // which C++ doesn't support directly. template struct TemplateSel { template struct Bind { typedef Tmpl type; }; }; # define GTEST_BIND_(TmplSel, T) \ TmplSel::template Bind::type // A unique struct template used as the default value for the // arguments of class template Templates. This allows us to simulate // variadic templates (e.g. Templates, Templates, // and etc), which C++ doesn't support directly. template struct NoneT {}; // The following family of struct and struct templates are used to // represent template lists. In particular, TemplatesN represents a list of N templates (T1, T2, ..., and TN). Except // for Templates0, every struct in the family has two member types: // Head for the selector of the first template in the list, and Tail // for the rest of the list. // The empty template list. struct Templates0 {}; // Template lists of length 1, 2, 3, and so on. template struct Templates1 { typedef TemplateSel Head; typedef Templates0 Tail; }; template struct Templates2 { typedef TemplateSel Head; typedef Templates1 Tail; }; template struct Templates3 { typedef TemplateSel Head; typedef Templates2 Tail; }; template struct Templates4 { typedef TemplateSel Head; typedef Templates3 Tail; }; template struct Templates5 { typedef TemplateSel Head; typedef Templates4 Tail; }; template struct Templates6 { typedef TemplateSel Head; typedef Templates5 Tail; }; template struct Templates7 { typedef TemplateSel Head; typedef Templates6 Tail; }; template struct Templates8 { typedef TemplateSel Head; typedef Templates7 Tail; }; template struct Templates9 { typedef TemplateSel Head; typedef Templates8 Tail; }; template struct Templates10 { typedef TemplateSel Head; typedef Templates9 Tail; }; template struct Templates11 { typedef TemplateSel Head; typedef Templates10 Tail; }; template struct Templates12 { typedef TemplateSel Head; typedef Templates11 Tail; }; template struct Templates13 { typedef TemplateSel Head; typedef Templates12 Tail; }; template struct Templates14 { typedef TemplateSel Head; typedef Templates13 Tail; }; template struct Templates15 { typedef TemplateSel Head; typedef Templates14 Tail; }; template struct Templates16 { typedef TemplateSel Head; typedef Templates15 Tail; }; template struct Templates17 { typedef TemplateSel Head; typedef Templates16 Tail; }; template struct Templates18 { typedef TemplateSel Head; typedef Templates17 Tail; }; template struct Templates19 { typedef TemplateSel Head; typedef Templates18 Tail; }; template struct Templates20 { typedef TemplateSel Head; typedef Templates19 Tail; }; template struct Templates21 { typedef TemplateSel Head; typedef Templates20 Tail; }; template struct Templates22 { typedef TemplateSel Head; typedef Templates21 Tail; }; template struct Templates23 { typedef TemplateSel Head; typedef Templates22 Tail; }; template struct Templates24 { typedef TemplateSel Head; typedef Templates23 Tail; }; template struct Templates25 { typedef TemplateSel Head; typedef Templates24 Tail; }; template struct Templates26 { typedef TemplateSel Head; typedef Templates25 Tail; }; template struct Templates27 { typedef TemplateSel Head; typedef Templates26 
Tail; }; template struct Templates28 { typedef TemplateSel Head; typedef Templates27 Tail; }; template struct Templates29 { typedef TemplateSel Head; typedef Templates28 Tail; }; template struct Templates30 { typedef TemplateSel Head; typedef Templates29 Tail; }; template struct Templates31 { typedef TemplateSel Head; typedef Templates30 Tail; }; template struct Templates32 { typedef TemplateSel Head; typedef Templates31 Tail; }; template struct Templates33 { typedef TemplateSel Head; typedef Templates32 Tail; }; template struct Templates34 { typedef TemplateSel Head; typedef Templates33 Tail; }; template struct Templates35 { typedef TemplateSel Head; typedef Templates34 Tail; }; template struct Templates36 { typedef TemplateSel Head; typedef Templates35 Tail; }; template struct Templates37 { typedef TemplateSel Head; typedef Templates36 Tail; }; template struct Templates38 { typedef TemplateSel Head; typedef Templates37 Tail; }; template struct Templates39 { typedef TemplateSel Head; typedef Templates38 Tail; }; template struct Templates40 { typedef TemplateSel Head; typedef Templates39 Tail; }; template struct Templates41 { typedef TemplateSel Head; typedef Templates40 Tail; }; template struct Templates42 { typedef TemplateSel Head; typedef Templates41 Tail; }; template struct Templates43 { typedef TemplateSel Head; typedef Templates42 Tail; }; template struct Templates44 { typedef TemplateSel Head; typedef Templates43 Tail; }; template struct Templates45 { typedef TemplateSel Head; typedef Templates44 Tail; }; template struct Templates46 { typedef TemplateSel Head; typedef Templates45 Tail; }; template struct Templates47 { typedef TemplateSel Head; typedef Templates46 Tail; }; template struct Templates48 { typedef TemplateSel Head; typedef Templates47 Tail; }; template struct Templates49 { typedef TemplateSel Head; typedef Templates48 Tail; }; template struct Templates50 { typedef TemplateSel Head; typedef Templates49 Tail; }; // We don't want to require the users to write TemplatesN<...> directly, // as that would require them to count the length. Templates<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Templates // will appear as Templates in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Templates, and Google Test will translate // that to TemplatesN internally to make error messages // readable. The translation is done by the 'type' member of the // Templates template. 
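// Illustrative sketch, not part of Google Test: the same "default argument +
// specialization" translation described above, shown at length 2 instead of
// 50 and with hypothetical Demo* names.  The user-facing template takes
// defaulted parameters; its 'type' member picks the fixed-arity list that
// matches how many real arguments were supplied.
struct DemoNone {};
struct DemoList0 {};
template <typename T1> struct DemoList1 { typedef T1 Head; };
template <typename T1, typename T2> struct DemoList2 { typedef T1 Head; };

template <typename T1 = DemoNone, typename T2 = DemoNone>
struct DemoTypes { typedef DemoList2<T1, T2> type; };
template <>
struct DemoTypes<DemoNone, DemoNone> { typedef DemoList0 type; };
template <typename T1>
struct DemoTypes<T1, DemoNone> { typedef DemoList1<T1> type; };
// For example, DemoTypes<int>::type is DemoList1<int>, so error messages and
// later metaprogramming never see the DemoNone filler.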
template struct Templates { typedef Templates50 type; }; template <> struct Templates { typedef Templates0 type; }; template struct Templates { typedef Templates1 type; }; template struct Templates { typedef Templates2 type; }; template struct Templates { typedef Templates3 type; }; template struct Templates { typedef Templates4 type; }; template struct Templates { typedef Templates5 type; }; template struct Templates { typedef Templates6 type; }; template struct Templates { typedef Templates7 type; }; template struct Templates { typedef Templates8 type; }; template struct Templates { typedef Templates9 type; }; template struct Templates { typedef Templates10 type; }; template struct Templates { typedef Templates11 type; }; template struct Templates { typedef Templates12 type; }; template struct Templates { typedef Templates13 type; }; template struct Templates { typedef Templates14 type; }; template struct Templates { typedef Templates15 type; }; template struct Templates { typedef Templates16 type; }; template struct Templates { typedef Templates17 type; }; template struct Templates { typedef Templates18 type; }; template struct Templates { typedef Templates19 type; }; template struct Templates { typedef Templates20 type; }; template struct Templates { typedef Templates21 type; }; template struct Templates { typedef Templates22 type; }; template struct Templates { typedef Templates23 type; }; template struct Templates { typedef Templates24 type; }; template struct Templates { typedef Templates25 type; }; template struct Templates { typedef Templates26 type; }; template struct Templates { typedef Templates27 type; }; template struct Templates { typedef Templates28 type; }; template struct Templates { typedef Templates29 type; }; template struct Templates { typedef Templates30 type; }; template struct Templates { typedef Templates31 type; }; template struct Templates { typedef Templates32 type; }; template struct Templates { typedef Templates33 type; }; template struct Templates { typedef Templates34 type; }; template struct Templates { typedef Templates35 type; }; template struct Templates { typedef Templates36 type; }; template struct Templates { typedef Templates37 type; }; template struct Templates { typedef Templates38 type; }; template struct Templates { typedef Templates39 type; }; template struct Templates { typedef Templates40 type; }; template struct Templates { typedef Templates41 type; }; template struct Templates { typedef Templates42 type; }; template struct Templates { typedef Templates43 type; }; template struct Templates { typedef Templates44 type; }; template struct Templates { typedef Templates45 type; }; template struct Templates { typedef Templates46 type; }; template struct Templates { typedef Templates47 type; }; template struct Templates { typedef Templates48 type; }; template struct Templates { typedef Templates49 type; }; // The TypeList template makes it possible to use either a single type // or a Types<...> list in TYPED_TEST_CASE() and // INSTANTIATE_TYPED_TEST_CASE_P(). template struct TypeList { typedef Types1 type; }; template struct TypeList > { typedef typename Types::type type; }; #endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ // Due to C++ preprocessor weirdness, we need double indirection to // concatenate two tokens when one of them is __LINE__. 
Writing // // foo ## __LINE__ // // will result in the token foo__LINE__, instead of foo followed by // the current line number. For more details, see // http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.6 #define GTEST_CONCAT_TOKEN_(foo, bar) GTEST_CONCAT_TOKEN_IMPL_(foo, bar) #define GTEST_CONCAT_TOKEN_IMPL_(foo, bar) foo ## bar class ProtocolMessage; namespace proto2 { class Message; } namespace testing { // Forward declarations. class AssertionResult; // Result of an assertion. class Message; // Represents a failure message. class Test; // Represents a test. class TestInfo; // Information about a test. class TestPartResult; // Result of a test part. class UnitTest; // A collection of test cases. template ::std::string PrintToString(const T& value); namespace internal { struct TraceInfo; // Information about a trace point. class ScopedTrace; // Implements scoped trace. class TestInfoImpl; // Opaque implementation of TestInfo class UnitTestImpl; // Opaque implementation of UnitTest // How many times InitGoogleTest() has been called. GTEST_API_ extern int g_init_gtest_count; // The text used in failure messages to indicate the start of the // stack trace. GTEST_API_ extern const char kStackTraceMarker[]; // Two overloaded helpers for checking at compile time whether an // expression is a null pointer literal (i.e. NULL or any 0-valued // compile-time integral constant). Their return values have // different sizes, so we can use sizeof() to test which version is // picked by the compiler. These helpers have no implementations, as // we only need their signatures. // // Given IsNullLiteralHelper(x), the compiler will pick the first // version if x can be implicitly converted to Secret*, and pick the // second version otherwise. Since Secret is a secret and incomplete // type, the only expression a user can write that has type Secret* is // a null pointer literal. Therefore, we know that x is a null // pointer literal if and only if the first version is picked by the // compiler. char IsNullLiteralHelper(Secret* p); char (&IsNullLiteralHelper(...))[2]; // NOLINT // A compile-time bool constant that is true if and only if x is a // null pointer literal (i.e. NULL or any 0-valued compile-time // integral constant). #ifdef GTEST_ELLIPSIS_NEEDS_POD_ // We lose support for NULL detection where the compiler doesn't like // passing non-POD classes through ellipsis (...). # define GTEST_IS_NULL_LITERAL_(x) false #else # define GTEST_IS_NULL_LITERAL_(x) \ (sizeof(::testing::internal::IsNullLiteralHelper(x)) == 1) #endif // GTEST_ELLIPSIS_NEEDS_POD_ // Appends the user-supplied message to the Google-Test-generated message. GTEST_API_ std::string AppendUserMessage( const std::string& gtest_msg, const Message& user_msg); #if GTEST_HAS_EXCEPTIONS // This exception is thrown by (and only by) a failed Google Test // assertion when GTEST_FLAG(throw_on_failure) is true (if exceptions // are enabled). We derive it from std::runtime_error, which is for // errors presumably detectable only at run time. Since // std::runtime_error inherits from std::exception, many testing // frameworks know how to extract and print the message inside it. class GTEST_API_ GoogleTestFailureException : public ::std::runtime_error { public: explicit GoogleTestFailureException(const TestPartResult& failure); }; #endif // GTEST_HAS_EXCEPTIONS // A helper class for creating scoped traces in user programs. 
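// Illustrative sketch with hypothetical DEMO_* names: why GTEST_CONCAT_TOKEN_
// above needs the extra level of indirection.  A direct paste concatenates
// the token __LINE__ itself instead of its value:
//
//   #define DEMO_DIRECT_CONCAT(foo, bar) foo ## bar
//   int DEMO_DIRECT_CONCAT(counter_, __LINE__);   // declares counter___LINE__
//   int GTEST_CONCAT_TOKEN_(counter_, __LINE__);  // declares e.g. counter_123
//
// The extra macro call in GTEST_CONCAT_TOKEN_ forces __LINE__ to be expanded
// before the ## operator is applied.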
class GTEST_API_ ScopedTrace { public: // The c'tor pushes the given source file location and message onto // a trace stack maintained by Google Test. ScopedTrace(const char* file, int line, const Message& message); // The d'tor pops the info pushed by the c'tor. // // Note that the d'tor is not virtual in order to be efficient. // Don't inherit from ScopedTrace! ~ScopedTrace(); private: GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedTrace); } GTEST_ATTRIBUTE_UNUSED_; // A ScopedTrace object does its job in its // c'tor and d'tor. Therefore it doesn't // need to be used otherwise. // Constructs and returns the message for an equality assertion // (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure. // // The first four parameters are the expressions used in the assertion // and their values, as strings. For example, for ASSERT_EQ(foo, bar) // where foo is 5 and bar is 6, we have: // // expected_expression: "foo" // actual_expression: "bar" // expected_value: "5" // actual_value: "6" // // The ignoring_case parameter is true iff the assertion is a // *_STRCASEEQ*. When it's true, the string " (ignoring case)" will // be inserted into the message. GTEST_API_ AssertionResult EqFailure(const char* expected_expression, const char* actual_expression, const std::string& expected_value, const std::string& actual_value, bool ignoring_case); // Constructs a failure message for Boolean assertions such as EXPECT_TRUE. GTEST_API_ std::string GetBoolAssertionFailureMessage( const AssertionResult& assertion_result, const char* expression_text, const char* actual_predicate_value, const char* expected_predicate_value); // This template class represents an IEEE floating-point number // (either single-precision or double-precision, depending on the // template parameters). // // The purpose of this class is to do more sophisticated number // comparison. (Due to round-off error, etc, it's very unlikely that // two floating-points will be equal exactly. Hence a naive // comparison by the == operation often doesn't work.) // // Format of IEEE floating-point: // // The most-significant bit being the leftmost, an IEEE // floating-point looks like // // sign_bit exponent_bits fraction_bits // // Here, sign_bit is a single bit that designates the sign of the // number. // // For float, there are 8 exponent bits and 23 fraction bits. // // For double, there are 11 exponent bits and 52 fraction bits. // // More details can be found at // http://en.wikipedia.org/wiki/IEEE_floating-point_standard. // // Template parameter: // // RawType: the raw floating-point type (either float or double) template class FloatingPoint { public: // Defines the unsigned integer type that has the same size as the // floating point number. typedef typename TypeWithSize::UInt Bits; // Constants. // # of bits in a number. static const size_t kBitCount = 8*sizeof(RawType); // # of fraction bits in a number. static const size_t kFractionBitCount = std::numeric_limits::digits - 1; // # of exponent bits in a number. static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount; // The mask for the sign bit. static const Bits kSignBitMask = static_cast(1) << (kBitCount - 1); // The mask for the fraction bits. static const Bits kFractionBitMask = ~static_cast(0) >> (kExponentBitCount + 1); // The mask for the exponent bits. static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask); // How many ULP's (Units in the Last Place) we want to tolerate when // comparing two numbers. The larger the value, the more error we // allow. 
A 0 value means that two numbers must be exactly the same // to be considered equal. // // The maximum error of a single floating-point operation is 0.5 // units in the last place. On Intel CPU's, all floating-point // calculations are done with 80-bit precision, while double has 64 // bits. Therefore, 4 should be enough for ordinary use. // // See the following article for more details on ULP: // http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/ static const size_t kMaxUlps = 4; // Constructs a FloatingPoint from a raw floating-point number. // // On an Intel CPU, passing a non-normalized NAN (Not a Number) // around may change its bits, although the new value is guaranteed // to be also a NAN. Therefore, don't expect this constructor to // preserve the bits in x when x is a NAN. explicit FloatingPoint(const RawType& x) { u_.value_ = x; } // Static methods // Reinterprets a bit pattern as a floating-point number. // // This function is needed to test the AlmostEquals() method. static RawType ReinterpretBits(const Bits bits) { FloatingPoint fp(0); fp.u_.bits_ = bits; return fp.u_.value_; } // Returns the floating-point number that represent positive infinity. static RawType Infinity() { return ReinterpretBits(kExponentBitMask); } // Returns the maximum representable finite floating-point number. static RawType Max(); // Non-static methods // Returns the bits that represents this number. const Bits &bits() const { return u_.bits_; } // Returns the exponent bits of this number. Bits exponent_bits() const { return kExponentBitMask & u_.bits_; } // Returns the fraction bits of this number. Bits fraction_bits() const { return kFractionBitMask & u_.bits_; } // Returns the sign bit of this number. Bits sign_bit() const { return kSignBitMask & u_.bits_; } // Returns true iff this is NAN (not a number). bool is_nan() const { // It's a NAN if the exponent bits are all ones and the fraction // bits are not entirely zeros. return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0); } // Returns true iff this number is at most kMaxUlps ULP's away from // rhs. In particular, this function: // // - returns false if either number is (or both are) NAN. // - treats really large numbers as almost equal to infinity. // - thinks +0.0 and -0.0 are 0 DLP's apart. bool AlmostEquals(const FloatingPoint& rhs) const { // The IEEE standard says that any comparison operation involving // a NAN must return false. if (is_nan() || rhs.is_nan()) return false; return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_) <= kMaxUlps; } private: // The data type used to store the actual floating-point number. union FloatingPointUnion { RawType value_; // The raw floating-point number. Bits bits_; // The bits that represent the number. }; // Converts an integer from the sign-and-magnitude representation to // the biased representation. More precisely, let N be 2 to the // power of (kBitCount - 1), an integer x is represented by the // unsigned number x + N. // // For instance, // // -N + 1 (the most negative number representable using // sign-and-magnitude) is represented by 1; // 0 is represented by N; and // N - 1 (the biggest number representable using // sign-and-magnitude) is represented by 2N - 1. // // Read http://en.wikipedia.org/wiki/Signed_number_representations // for more details on signed number representations. static Bits SignAndMagnitudeToBiased(const Bits &sam) { if (kSignBitMask & sam) { // sam represents a negative number. 
return ~sam + 1; } else { // sam represents a positive number. return kSignBitMask | sam; } } // Given two numbers in the sign-and-magnitude representation, // returns the distance between them as an unsigned number. static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1, const Bits &sam2) { const Bits biased1 = SignAndMagnitudeToBiased(sam1); const Bits biased2 = SignAndMagnitudeToBiased(sam2); return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1); } FloatingPointUnion u_; }; // We cannot use std::numeric_limits::max() as it clashes with the max() // macro defined by . template <> inline float FloatingPoint::Max() { return FLT_MAX; } template <> inline double FloatingPoint::Max() { return DBL_MAX; } // Typedefs the instances of the FloatingPoint template class that we // care to use. typedef FloatingPoint Float; typedef FloatingPoint Double; // In order to catch the mistake of putting tests that use different // test fixture classes in the same test case, we need to assign // unique IDs to fixture classes and compare them. The TypeId type is // used to hold such IDs. The user should treat TypeId as an opaque // type: the only operation allowed on TypeId values is to compare // them for equality using the == operator. typedef const void* TypeId; template class TypeIdHelper { public: // dummy_ must not have a const type. Otherwise an overly eager // compiler (e.g. MSVC 7.1 & 8.0) may try to merge // TypeIdHelper::dummy_ for different Ts as an "optimization". static bool dummy_; }; template bool TypeIdHelper::dummy_ = false; // GetTypeId() returns the ID of type T. Different values will be // returned for different types. Calling the function twice with the // same type argument is guaranteed to return the same ID. template TypeId GetTypeId() { // The compiler is required to allocate a different // TypeIdHelper::dummy_ variable for each T used to instantiate // the template. Therefore, the address of dummy_ is guaranteed to // be unique. return &(TypeIdHelper::dummy_); } // Returns the type ID of ::testing::Test. Always call this instead // of GetTypeId< ::testing::Test>() to get the type ID of // ::testing::Test, as the latter may give the wrong result due to a // suspected linker bug when compiling Google Test as a Mac OS X // framework. GTEST_API_ TypeId GetTestTypeId(); // Defines the abstract factory interface that creates instances // of a Test object. class TestFactoryBase { public: virtual ~TestFactoryBase() {} // Creates a test instance to run. The instance is both created and destroyed // within TestInfoImpl::Run() virtual Test* CreateTest() = 0; protected: TestFactoryBase() {} private: GTEST_DISALLOW_COPY_AND_ASSIGN_(TestFactoryBase); }; // This class provides implementation of TeastFactoryBase interface. // It is used in TEST and TEST_F macros. template class TestFactoryImpl : public TestFactoryBase { public: virtual Test* CreateTest() { return new TestClass; } }; #if GTEST_OS_WINDOWS // Predicate-formatters for implementing the HRESULT checking macros // {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED} // We pass a long instead of HRESULT to avoid causing an // include dependency for the HRESULT type. GTEST_API_ AssertionResult IsHRESULTSuccess(const char* expr, long hr); // NOLINT GTEST_API_ AssertionResult IsHRESULTFailure(const char* expr, long hr); // NOLINT #endif // GTEST_OS_WINDOWS // Types of SetUpTestCase() and TearDownTestCase() functions. 
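// Illustrative sketch with hypothetical Demo* names: the sign-and-magnitude
// to biased mapping used by FloatingPoint above, written for a raw 32-bit
// pattern so the ULP-distance arithmetic is easy to follow.
inline UInt32 DemoBiased(UInt32 sam) {
  const UInt32 kSign = 0x80000000u;
  // Patterns with the sign bit set map below the midpoint, positive patterns
  // above it, so plain unsigned subtraction then measures the ULP distance.
  return (kSign & sam) ? ~sam + 1 : kSign | sam;
}
inline UInt32 DemoUlpDistance(UInt32 lhs, UInt32 rhs) {
  const UInt32 b1 = DemoBiased(lhs);
  const UInt32 b2 = DemoBiased(rhs);
  return (b1 >= b2) ? (b1 - b2) : (b2 - b1);
}
// For example, the bit patterns of +0.0f (0x00000000) and -0.0f (0x80000000)
// both map to 0x80000000, so their distance is 0 and AlmostEquals() treats
// the two zeros as equal.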
typedef void (*SetUpTestCaseFunc)(); typedef void (*TearDownTestCaseFunc)(); // Creates a new TestInfo object and registers it with Google Test; // returns the created object. // // Arguments: // // test_case_name: name of the test case // name: name of the test // type_param the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // value_param text representation of the test's value parameter, // or NULL if this is not a type-parameterized test. // fixture_class_id: ID of the test fixture class // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // factory: pointer to the factory that creates a test object. // The newly created TestInfo instance will assume // ownership of the factory object. GTEST_API_ TestInfo* MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, TypeId fixture_class_id, SetUpTestCaseFunc set_up_tc, TearDownTestCaseFunc tear_down_tc, TestFactoryBase* factory); // If *pstr starts with the given prefix, modifies *pstr to be right // past the prefix and returns true; otherwise leaves *pstr unchanged // and returns false. None of pstr, *pstr, and prefix can be NULL. GTEST_API_ bool SkipPrefix(const char* prefix, const char** pstr); #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // State of the definition of a type-parameterized test case. class GTEST_API_ TypedTestCasePState { public: TypedTestCasePState() : registered_(false) {} // Adds the given test name to defined_test_names_ and return true // if the test case hasn't been registered; otherwise aborts the // program. bool AddTestName(const char* file, int line, const char* case_name, const char* test_name) { if (registered_) { fprintf(stderr, "%s Test %s must be defined before " "REGISTER_TYPED_TEST_CASE_P(%s, ...).\n", FormatFileLocation(file, line).c_str(), test_name, case_name); fflush(stderr); posix::Abort(); } defined_test_names_.insert(test_name); return true; } // Verifies that registered_tests match the test names in // defined_test_names_; returns registered_tests if successful, or // aborts the program otherwise. const char* VerifyRegisteredTestNames( const char* file, int line, const char* registered_tests); private: bool registered_; ::std::set defined_test_names_; }; // Skips to the first non-space char after the first comma in 'str'; // returns NULL if no comma is found in 'str'. inline const char* SkipComma(const char* str) { const char* comma = strchr(str, ','); if (comma == NULL) { return NULL; } while (IsSpace(*(++comma))) {} return comma; } // Returns the prefix of 'str' before the first comma in it; returns // the entire string if it contains no comma. inline std::string GetPrefixUntilComma(const char* str) { const char* comma = strchr(str, ','); return comma == NULL ? str : std::string(str, comma); } // TypeParameterizedTest::Register() // registers a list of type-parameterized tests with Google Test. The // return value is insignificant - we just need to return something // such that we can call this function in a namespace scope. // // Implementation note: The GTEST_TEMPLATE_ macro declares a template // template parameter. It's defined in gtest-type-util.h. template class TypeParameterizedTest { public: // 'index' is the index of the test in the type list 'Types' // specified in INSTANTIATE_TYPED_TEST_CASE_P(Prefix, TestCase, // Types). 
Valid values for 'index' are [0, N - 1] where N is the // length of Types. static bool Register(const char* prefix, const char* case_name, const char* test_names, int index) { typedef typename Types::Head Type; typedef Fixture FixtureClass; typedef typename GTEST_BIND_(TestSel, Type) TestClass; // First, registers the first type-parameterized test in the type // list. MakeAndRegisterTestInfo( (std::string(prefix) + (prefix[0] == '\0' ? "" : "/") + case_name + "/" + StreamableToString(index)).c_str(), GetPrefixUntilComma(test_names).c_str(), GetTypeName().c_str(), NULL, // No value parameter. GetTypeId(), TestClass::SetUpTestCase, TestClass::TearDownTestCase, new TestFactoryImpl); // Next, recurses (at compile time) with the tail of the type list. return TypeParameterizedTest ::Register(prefix, case_name, test_names, index + 1); } }; // The base case for the compile time recursion. template class TypeParameterizedTest { public: static bool Register(const char* /*prefix*/, const char* /*case_name*/, const char* /*test_names*/, int /*index*/) { return true; } }; // TypeParameterizedTestCase::Register() // registers *all combinations* of 'Tests' and 'Types' with Google // Test. The return value is insignificant - we just need to return // something such that we can call this function in a namespace scope. template class TypeParameterizedTestCase { public: static bool Register(const char* prefix, const char* case_name, const char* test_names) { typedef typename Tests::Head Head; // First, register the first test in 'Test' for each type in 'Types'. TypeParameterizedTest::Register( prefix, case_name, test_names, 0); // Next, recurses (at compile time) with the tail of the test list. return TypeParameterizedTestCase ::Register(prefix, case_name, SkipComma(test_names)); } }; // The base case for the compile time recursion. template class TypeParameterizedTestCase { public: static bool Register(const char* /*prefix*/, const char* /*case_name*/, const char* /*test_names*/) { return true; } }; #endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in // the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't. GTEST_API_ std::string GetCurrentOsStackTraceExceptTop( UnitTest* unit_test, int skip_count); // Helpers for suppressing warnings on unreachable code or constant // condition. // Always returns true. GTEST_API_ bool AlwaysTrue(); // Always returns false. inline bool AlwaysFalse() { return !AlwaysTrue(); } // Helper for suppressing false warning from Clang on a const char* // variable declared in a conditional expression always being NULL in // the else branch. struct GTEST_API_ ConstCharPtr { ConstCharPtr(const char* str) : value(str) {} operator bool() const { return true; } const char* value; }; // A simple Linear Congruential Generator for generating random // numbers with a uniform distribution. Unlike rand() and srand(), it // doesn't use global state (and therefore can't interfere with user // code). Unlike rand_r(), it's portable. An LCG isn't very random, // but it's good enough for our purposes. 
class GTEST_API_ Random { public: static const UInt32 kMaxRange = 1u << 31; explicit Random(UInt32 seed) : state_(seed) {} void Reseed(UInt32 seed) { state_ = seed; } // Generates a random number from [0, range). Crashes if 'range' is // 0 or greater than kMaxRange. UInt32 Generate(UInt32 range); private: UInt32 state_; GTEST_DISALLOW_COPY_AND_ASSIGN_(Random); }; // Defining a variable of type CompileAssertTypesEqual will cause a // compiler error iff T1 and T2 are different types. template struct CompileAssertTypesEqual; template struct CompileAssertTypesEqual { }; // Removes the reference from a type if it is a reference type, // otherwise leaves it unchanged. This is the same as // tr1::remove_reference, which is not widely available yet. template struct RemoveReference { typedef T type; }; // NOLINT template struct RemoveReference { typedef T type; }; // NOLINT // A handy wrapper around RemoveReference that works when the argument // T depends on template parameters. #define GTEST_REMOVE_REFERENCE_(T) \ typename ::testing::internal::RemoveReference::type // Removes const from a type if it is a const type, otherwise leaves // it unchanged. This is the same as tr1::remove_const, which is not // widely available yet. template struct RemoveConst { typedef T type; }; // NOLINT template struct RemoveConst { typedef T type; }; // NOLINT // MSVC 8.0, Sun C++, and IBM XL C++ have a bug which causes the above // definition to fail to remove the const in 'const int[3]' and 'const // char[3][4]'. The following specialization works around the bug. template struct RemoveConst { typedef typename RemoveConst::type type[N]; }; #if defined(_MSC_VER) && _MSC_VER < 1400 // This is the only specialization that allows VC++ 7.1 to remove const in // 'const int[3] and 'const int[3][4]'. However, it causes trouble with GCC // and thus needs to be conditionally compiled. template struct RemoveConst { typedef typename RemoveConst::type type[N]; }; #endif // A handy wrapper around RemoveConst that works when the argument // T depends on template parameters. #define GTEST_REMOVE_CONST_(T) \ typename ::testing::internal::RemoveConst::type // Turns const U&, U&, const U, and U all into U. #define GTEST_REMOVE_REFERENCE_AND_CONST_(T) \ GTEST_REMOVE_CONST_(GTEST_REMOVE_REFERENCE_(T)) // Adds reference to a type if it is not a reference type, // otherwise leaves it unchanged. This is the same as // tr1::add_reference, which is not widely available yet. template struct AddReference { typedef T& type; }; // NOLINT template struct AddReference { typedef T& type; }; // NOLINT // A handy wrapper around AddReference that works when the argument T // depends on template parameters. #define GTEST_ADD_REFERENCE_(T) \ typename ::testing::internal::AddReference::type // Adds a reference to const on top of T as necessary. For example, // it transforms // // char ==> const char& // const char ==> const char& // char& ==> const char& // const char& ==> const char& // // The argument T must depend on some template parameters. #define GTEST_REFERENCE_TO_CONST_(T) \ GTEST_ADD_REFERENCE_(const GTEST_REMOVE_REFERENCE_(T)) // ImplicitlyConvertible::value is a compile-time bool // constant that's true iff type From can be implicitly converted to // type To. template class ImplicitlyConvertible { private: // We need the following helper functions only for their types. // They have no implementations. // MakeFrom() is an expression whose type is From. 
We cannot simply // use From(), as the type From may not have a public default // constructor. static From MakeFrom(); // These two functions are overloaded. Given an expression // Helper(x), the compiler will pick the first version if x can be // implicitly converted to type To; otherwise it will pick the // second version. // // The first version returns a value of size 1, and the second // version returns a value of size 2. Therefore, by checking the // size of Helper(x), which can be done at compile time, we can tell // which version of Helper() is used, and hence whether x can be // implicitly converted to type To. static char Helper(To); static char (&Helper(...))[2]; // NOLINT // We have to put the 'public' section after the 'private' section, // or MSVC refuses to compile the code. public: // MSVC warns about implicitly converting from double to int for // possible loss of data, so we need to temporarily disable the // warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4244) // Temporarily disables warning 4244. static const bool value = sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1; # pragma warning(pop) // Restores the warning state. #elif defined(__BORLANDC__) // C++Builder cannot use member overload resolution during template // instantiation. The simplest workaround is to use its C++0x type traits // functions (C++Builder 2009 and above only). static const bool value = __is_convertible(From, To); #else static const bool value = sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1; #endif // _MSV_VER }; template const bool ImplicitlyConvertible::value; // IsAProtocolMessage::value is a compile-time bool constant that's // true iff T is type ProtocolMessage, proto2::Message, or a subclass // of those. template struct IsAProtocolMessage : public bool_constant< ImplicitlyConvertible::value || ImplicitlyConvertible::value> { }; // When the compiler sees expression IsContainerTest(0), if C is an // STL-style container class, the first overload of IsContainerTest // will be viable (since both C::iterator* and C::const_iterator* are // valid types and NULL can be implicitly converted to them). It will // be picked over the second overload as 'int' is a perfect match for // the type of argument 0. If C::iterator or C::const_iterator is not // a valid type, the first overload is not viable, and the second // overload will be picked. Therefore, we can determine whether C is // a container class by checking the type of IsContainerTest(0). // The value of the expression is insignificant. // // Note that we look for both C::iterator and C::const_iterator. The // reason is that C++ injects the name of a class as a member of the // class itself (e.g. you can refer to class iterator as either // 'iterator' or 'iterator::iterator'). If we look for C::iterator // only, for example, we would mistakenly think that a class named // iterator is an STL container. // // Also note that the simpler approach of overloading // IsContainerTest(typename C::const_iterator*) and // IsContainerTest(...) doesn't work with Visual Age C++ and Sun C++. typedef int IsContainer; template IsContainer IsContainerTest(int /* dummy */, typename C::iterator* /* it */ = NULL, typename C::const_iterator* /* const_it */ = NULL) { return 0; } typedef char IsNotContainer; template IsNotContainer IsContainerTest(long /* dummy */) { return '\0'; } // EnableIf::type is void when 'Cond' is true, and // undefined when 'Cond' is false. 
To use SFINAE to make a function // overload only apply when a particular expression is true, add // "typename EnableIf::type* = 0" as the last parameter. template struct EnableIf; template<> struct EnableIf { typedef void type; }; // NOLINT // Utilities for native arrays. // ArrayEq() compares two k-dimensional native arrays using the // elements' operator==, where k can be any integer >= 0. When k is // 0, ArrayEq() degenerates into comparing a single pair of values. template bool ArrayEq(const T* lhs, size_t size, const U* rhs); // This generic version is used when k is 0. template inline bool ArrayEq(const T& lhs, const U& rhs) { return lhs == rhs; } // This overload is used when k >= 1. template inline bool ArrayEq(const T(&lhs)[N], const U(&rhs)[N]) { return internal::ArrayEq(lhs, N, rhs); } // This helper reduces code bloat. If we instead put its logic inside // the previous ArrayEq() function, arrays with different sizes would // lead to different copies of the template code. template bool ArrayEq(const T* lhs, size_t size, const U* rhs) { for (size_t i = 0; i != size; i++) { if (!internal::ArrayEq(lhs[i], rhs[i])) return false; } return true; } // Finds the first element in the iterator range [begin, end) that // equals elem. Element may be a native array type itself. template Iter ArrayAwareFind(Iter begin, Iter end, const Element& elem) { for (Iter it = begin; it != end; ++it) { if (internal::ArrayEq(*it, elem)) return it; } return end; } // CopyArray() copies a k-dimensional native array using the elements' // operator=, where k can be any integer >= 0. When k is 0, // CopyArray() degenerates into copying a single value. template void CopyArray(const T* from, size_t size, U* to); // This generic version is used when k is 0. template inline void CopyArray(const T& from, U* to) { *to = from; } // This overload is used when k >= 1. template inline void CopyArray(const T(&from)[N], U(*to)[N]) { internal::CopyArray(from, N, *to); } // This helper reduces code bloat. If we instead put its logic inside // the previous CopyArray() function, arrays with different sizes // would lead to different copies of the template code. template void CopyArray(const T* from, size_t size, U* to) { for (size_t i = 0; i != size; i++) { internal::CopyArray(from[i], to + i); } } // The relation between an NativeArray object (see below) and the // native array it represents. enum RelationToSource { kReference, // The NativeArray references the native array. kCopy // The NativeArray makes a copy of the native array and // owns the copy. }; // Adapts a native array to a read-only STL-style container. Instead // of the complete STL container concept, this adaptor only implements // members useful for Google Mock's container matchers. New members // should be added as needed. To simplify the implementation, we only // support Element being a raw type (i.e. having no top-level const or // reference modifier). It's the client's responsibility to satisfy // this requirement. Element can be an array type itself (hence // multi-dimensional arrays are supported). template class NativeArray { public: // STL-style container typedefs. typedef Element value_type; typedef Element* iterator; typedef const Element* const_iterator; // Constructs from a native array. NativeArray(const Element* array, size_t count, RelationToSource relation) { Init(array, count, relation); } // Copy constructor. 
NativeArray(const NativeArray& rhs) { Init(rhs.array_, rhs.size_, rhs.relation_to_source_); } ~NativeArray() { // Ensures that the user doesn't instantiate NativeArray with a // const or reference type. static_cast(StaticAssertTypeEqHelper()); if (relation_to_source_ == kCopy) delete[] array_; } // STL-style container methods. size_t size() const { return size_; } const_iterator begin() const { return array_; } const_iterator end() const { return array_ + size_; } bool operator==(const NativeArray& rhs) const { return size() == rhs.size() && ArrayEq(begin(), size(), rhs.begin()); } private: // Initializes this object; makes a copy of the input array if // 'relation' is kCopy. void Init(const Element* array, size_t a_size, RelationToSource relation) { if (relation == kReference) { array_ = array; } else { Element* const copy = new Element[a_size]; CopyArray(array, a_size, copy); array_ = copy; } size_ = a_size; relation_to_source_ = relation; } const Element* array_; size_t size_; RelationToSource relation_to_source_; GTEST_DISALLOW_ASSIGN_(NativeArray); }; } // namespace internal } // namespace testing #define GTEST_MESSAGE_AT_(file, line, message, result_type) \ ::testing::internal::AssertHelper(result_type, file, line, message) \ = ::testing::Message() #define GTEST_MESSAGE_(message, result_type) \ GTEST_MESSAGE_AT_(__FILE__, __LINE__, message, result_type) #define GTEST_FATAL_FAILURE_(message) \ return GTEST_MESSAGE_(message, ::testing::TestPartResult::kFatalFailure) #define GTEST_NONFATAL_FAILURE_(message) \ GTEST_MESSAGE_(message, ::testing::TestPartResult::kNonFatalFailure) #define GTEST_SUCCESS_(message) \ GTEST_MESSAGE_(message, ::testing::TestPartResult::kSuccess) // Suppresses MSVC warnings 4072 (unreachable code) for the code following // statement if it returns or throws (or doesn't return or throw in some // situations). #define GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) \ if (::testing::internal::AlwaysTrue()) { statement; } #define GTEST_TEST_THROW_(statement, expected_exception, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::ConstCharPtr gtest_msg = "") { \ bool gtest_caught_expected = false; \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (expected_exception const&) { \ gtest_caught_expected = true; \ } \ catch (...) { \ gtest_msg.value = \ "Expected: " #statement " throws an exception of type " \ #expected_exception ".\n Actual: it throws a different type."; \ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \ } \ if (!gtest_caught_expected) { \ gtest_msg.value = \ "Expected: " #statement " throws an exception of type " \ #expected_exception ".\n Actual: it throws nothing."; \ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__): \ fail(gtest_msg.value) #define GTEST_TEST_NO_THROW_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (...) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__): \ fail("Expected: " #statement " doesn't throw an exception.\n" \ " Actual: it throws.") #define GTEST_TEST_ANY_THROW_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ bool gtest_caught_any = false; \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (...) 
{ \ gtest_caught_any = true; \ } \ if (!gtest_caught_any) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__): \ fail("Expected: " #statement " throws an exception.\n" \ " Actual: it doesn't.") // Implements Boolean test assertions such as EXPECT_TRUE. expression can be // either a boolean expression or an AssertionResult. text is a textual // represenation of expression as it was passed into the EXPECT_TRUE. #define GTEST_TEST_BOOLEAN_(expression, text, actual, expected, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (const ::testing::AssertionResult gtest_ar_ = \ ::testing::AssertionResult(expression)) \ ; \ else \ fail(::testing::internal::GetBoolAssertionFailureMessage(\ gtest_ar_, text, #actual, #expected).c_str()) #define GTEST_TEST_NO_FATAL_FAILURE_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ ::testing::internal::HasNewFatalFailureHelper gtest_fatal_failure_checker; \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ if (gtest_fatal_failure_checker.has_new_fatal_failure()) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__): \ fail("Expected: " #statement " doesn't generate new fatal " \ "failures in the current thread.\n" \ " Actual: it does.") // Expands to the name of the class that implements the given test. #define GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \ test_case_name##_##test_name##_Test // Helper macro for defining tests. #define GTEST_TEST_(test_case_name, test_name, parent_class, parent_id)\ class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) : public parent_class {\ public:\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {}\ private:\ virtual void TestBody();\ static ::testing::TestInfo* const test_info_ GTEST_ATTRIBUTE_UNUSED_;\ GTEST_DISALLOW_COPY_AND_ASSIGN_(\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name));\ };\ \ ::testing::TestInfo* const GTEST_TEST_CLASS_NAME_(test_case_name, test_name)\ ::test_info_ =\ ::testing::internal::MakeAndRegisterTestInfo(\ #test_case_name, #test_name, NULL, NULL, \ (parent_id), \ parent_class::SetUpTestCase, \ parent_class::TearDownTestCase, \ new ::testing::internal::TestFactoryImpl<\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>);\ void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody() #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the public API for death tests. It is // #included by gtest.h so a user doesn't need to include this // directly. #ifndef GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ // Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file defines internal utilities needed for implementing // death tests. They are subject to change without notice. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ #include namespace testing { namespace internal { GTEST_DECLARE_string_(internal_run_death_test); // Names of the flags (needed for parsing Google Test flags). const char kDeathTestStyleFlag[] = "death_test_style"; const char kDeathTestUseFork[] = "death_test_use_fork"; const char kInternalRunDeathTestFlag[] = "internal_run_death_test"; #if GTEST_HAS_DEATH_TEST // DeathTest is a class that hides much of the complexity of the // GTEST_DEATH_TEST_ macro. It is abstract; its static Create method // returns a concrete class that depends on the prevailing death test // style, as defined by the --gtest_death_test_style and/or // --gtest_internal_run_death_test flags. 
// In describing the results of death tests, these terms are used with // the corresponding definitions: // // exit status: The integer exit information in the format specified // by wait(2) // exit code: The integer code passed to exit(3), _exit(2), or // returned from main() class GTEST_API_ DeathTest { public: // Create returns false if there was an error determining the // appropriate action to take for the current death test; for example, // if the gtest_death_test_style flag is set to an invalid value. // The LastMessage method will return a more detailed message in that // case. Otherwise, the DeathTest pointer pointed to by the "test" // argument is set. If the death test should be skipped, the pointer // is set to NULL; otherwise, it is set to the address of a new concrete // DeathTest object that controls the execution of the current test. static bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test); DeathTest(); virtual ~DeathTest() { } // A helper class that aborts a death test when it's deleted. class ReturnSentinel { public: explicit ReturnSentinel(DeathTest* test) : test_(test) { } ~ReturnSentinel() { test_->Abort(TEST_ENCOUNTERED_RETURN_STATEMENT); } private: DeathTest* const test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ReturnSentinel); } GTEST_ATTRIBUTE_UNUSED_; // An enumeration of possible roles that may be taken when a death // test is encountered. EXECUTE means that the death test logic should // be executed immediately. OVERSEE means that the program should prepare // the appropriate environment for a child process to execute the death // test, then wait for it to complete. enum TestRole { OVERSEE_TEST, EXECUTE_TEST }; // An enumeration of the three reasons that a test might be aborted. enum AbortReason { TEST_ENCOUNTERED_RETURN_STATEMENT, TEST_THREW_EXCEPTION, TEST_DID_NOT_DIE }; // Assumes one of the above roles. virtual TestRole AssumeRole() = 0; // Waits for the death test to finish and returns its status. virtual int Wait() = 0; // Returns true if the death test passed; that is, the test process // exited during the test, its exit status matches a user-supplied // predicate, and its stderr output matches a user-supplied regular // expression. // The user-supplied predicate may be a macro expression rather // than a function pointer or functor, or else Wait and Passed could // be combined. virtual bool Passed(bool exit_status_ok) = 0; // Signals that the death test did not die as expected. virtual void Abort(AbortReason reason) = 0; // Returns a human-readable outcome message regarding the outcome of // the last death test. static const char* LastMessage(); static void set_last_death_test_message(const std::string& message); private: // A string containing a description of the outcome of the last death test. static std::string last_death_test_message_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DeathTest); }; // Factory interface for death tests. May be mocked out for testing. class DeathTestFactory { public: virtual ~DeathTestFactory() { } virtual bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) = 0; }; // A concrete DeathTestFactory implementation for normal use. class DefaultDeathTestFactory : public DeathTestFactory { public: virtual bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test); }; // Returns true if exit_status describes a process that was terminated // by a signal, or exited normally with a nonzero exit code. 
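// On POSIX-like systems this is roughly equivalent to the following sketch
// (illustrative only; the actual implementation lives in the death-test
// source file):
//
//   bool ExitedUnsuccessfully(int exit_status) {
//     return WIFSIGNALED(exit_status) ||
//            (WIFEXITED(exit_status) && WEXITSTATUS(exit_status) != 0);
//   }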
GTEST_API_ bool ExitedUnsuccessfully(int exit_status); // Traps C++ exceptions escaping statement and reports them as test // failures. Note that trapping SEH exceptions is not implemented here. # if GTEST_HAS_EXCEPTIONS # define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } catch (const ::std::exception& gtest_exception) { \ fprintf(\ stderr, \ "\n%s: Caught std::exception-derived exception escaping the " \ "death test statement. Exception message: %s\n", \ ::testing::internal::FormatFileLocation(__FILE__, __LINE__).c_str(), \ gtest_exception.what()); \ fflush(stderr); \ death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \ } catch (...) { \ death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \ } # else # define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) # endif // This macro is for implementing ASSERT_DEATH*, EXPECT_DEATH*, // ASSERT_EXIT*, and EXPECT_EXIT*. # define GTEST_DEATH_TEST_(statement, predicate, regex, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ const ::testing::internal::RE& gtest_regex = (regex); \ ::testing::internal::DeathTest* gtest_dt; \ if (!::testing::internal::DeathTest::Create(#statement, >est_regex, \ __FILE__, __LINE__, >est_dt)) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \ } \ if (gtest_dt != NULL) { \ ::testing::internal::scoped_ptr< ::testing::internal::DeathTest> \ gtest_dt_ptr(gtest_dt); \ switch (gtest_dt->AssumeRole()) { \ case ::testing::internal::DeathTest::OVERSEE_TEST: \ if (!gtest_dt->Passed(predicate(gtest_dt->Wait()))) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \ } \ break; \ case ::testing::internal::DeathTest::EXECUTE_TEST: { \ ::testing::internal::DeathTest::ReturnSentinel \ gtest_sentinel(gtest_dt); \ GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, gtest_dt); \ gtest_dt->Abort(::testing::internal::DeathTest::TEST_DID_NOT_DIE); \ break; \ } \ default: \ break; \ } \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__): \ fail(::testing::internal::DeathTest::LastMessage()) // The symbol "fail" here expands to something into which a message // can be streamed. // This macro is for implementing ASSERT/EXPECT_DEBUG_DEATH when compiled in // NDEBUG mode. In this case we need the statements to be executed, the regex is // ignored, and the macro must accept a streamed message even though the message // is never printed. # define GTEST_EXECUTE_STATEMENT_(statement, regex) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } else \ ::testing::Message() // A class representing the parsed contents of the // --gtest_internal_run_death_test flag, as it existed when // RUN_ALL_TESTS was called. 
class InternalRunDeathTestFlag { public: InternalRunDeathTestFlag(const std::string& a_file, int a_line, int an_index, int a_write_fd) : file_(a_file), line_(a_line), index_(an_index), write_fd_(a_write_fd) {} ~InternalRunDeathTestFlag() { if (write_fd_ >= 0) posix::Close(write_fd_); } const std::string& file() const { return file_; } int line() const { return line_; } int index() const { return index_; } int write_fd() const { return write_fd_; } private: std::string file_; int line_; int index_; int write_fd_; GTEST_DISALLOW_COPY_AND_ASSIGN_(InternalRunDeathTestFlag); }; // Returns a newly created InternalRunDeathTestFlag object with fields // initialized from the GTEST_FLAG(internal_run_death_test) flag if // the flag is specified; otherwise returns NULL. InternalRunDeathTestFlag* ParseInternalRunDeathTestFlag(); #else // GTEST_HAS_DEATH_TEST // This macro is used for implementing macros such as // EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where // death tests are not supported. Those macros must compile on such systems // iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on // systems that support death tests. This allows one to write such a macro // on a system that does not support death tests and be sure that it will // compile on a death-test supporting system. // // Parameters: // statement - A statement that a macro such as EXPECT_DEATH would test // for program termination. This macro has to make sure this // statement is compiled but not executed, to ensure that // EXPECT_DEATH_IF_SUPPORTED compiles with a certain // parameter iff EXPECT_DEATH compiles with it. // regex - A regex that a macro such as EXPECT_DEATH would use to test // the output of statement. This parameter has to be // compiled but not evaluated by this macro, to ensure that // this macro only accepts expressions that a macro such as // EXPECT_DEATH would accept. // terminator - Must be an empty statement for EXPECT_DEATH_IF_SUPPORTED // and a return statement for ASSERT_DEATH_IF_SUPPORTED. // This ensures that ASSERT_DEATH_IF_SUPPORTED will not // compile inside functions where ASSERT_DEATH doesn't // compile. // // The branch that has an always false condition is used to ensure that // statement and regex are compiled (and thus syntactically correct) but // never executed. The unreachable code macro protects the terminator // statement from generating an 'unreachable code' warning in case // statement unconditionally returns or throws. The Message constructor at // the end allows the syntax of streaming additional messages into the // macro, for compilational compatibility with EXPECT_DEATH/ASSERT_DEATH. # define GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, terminator) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ GTEST_LOG_(WARNING) \ << "Death tests are not supported on this platform.\n" \ << "Statement '" #statement "' cannot be verified."; \ } else if (::testing::internal::AlwaysFalse()) { \ ::testing::internal::RE::PartialMatch(".*", (regex)); \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ terminator; \ } else \ ::testing::Message() #endif // GTEST_HAS_DEATH_TEST } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ namespace testing { // This flag controls the style of death tests. 
Valid values are "threadsafe", // meaning that the death test child process will re-execute the test binary // from the start, running only a single death test, or "fast", // meaning that the child process will execute the test logic immediately // after forking. GTEST_DECLARE_string_(death_test_style); #if GTEST_HAS_DEATH_TEST namespace internal { // Returns a Boolean value indicating whether the caller is currently // executing in the context of the death test child process. Tools such as // Valgrind heap checkers may need this to modify their behavior in death // tests. IMPORTANT: This is an internal utility. Using it may break the // implementation of death tests. User code MUST NOT use it. GTEST_API_ bool InDeathTestChild(); } // namespace internal // The following macros are useful for writing death tests. // Here's what happens when an ASSERT_DEATH* or EXPECT_DEATH* is // executed: // // 1. It generates a warning if there is more than one active // thread. This is because it's safe to fork() or clone() only // when there is a single thread. // // 2. The parent process clone()s a sub-process and runs the death // test in it; the sub-process exits with code 0 at the end of the // death test, if it hasn't exited already. // // 3. The parent process waits for the sub-process to terminate. // // 4. The parent process checks the exit code and error message of // the sub-process. // // Examples: // // ASSERT_DEATH(server.SendMessage(56, "Hello"), "Invalid port number"); // for (int i = 0; i < 5; i++) { // EXPECT_DEATH(server.ProcessRequest(i), // "Invalid request .* in ProcessRequest()") // << "Failed to die on request " << i; // } // // ASSERT_EXIT(server.ExitNow(), ::testing::ExitedWithCode(0), "Exiting"); // // bool KilledBySIGHUP(int exit_code) { // return WIFSIGNALED(exit_code) && WTERMSIG(exit_code) == SIGHUP; // } // // ASSERT_EXIT(client.HangUpServer(), KilledBySIGHUP, "Hanging up!"); // // On the regular expressions used in death tests: // // On POSIX-compliant systems (*nix), we use the library, // which uses the POSIX extended regex syntax. // // On other platforms (e.g. Windows), we only support a simple regex // syntax implemented as part of Google Test. This limited // implementation should be enough most of the time when writing // death tests; though it lacks many features you can find in PCRE // or POSIX extended regex syntax. For example, we don't support // union ("x|y"), grouping ("(xy)"), brackets ("[xy]"), and // repetition count ("x{5,7}"), among others. // // Below is the syntax that we do support. We chose it to be a // subset of both PCRE and POSIX extended regex, so it's easy to // learn wherever you come from. In the following: 'A' denotes a // literal character, period (.), or a single \\ escape sequence; // 'x' and 'y' denote regular expressions; 'm' and 'n' are for // natural numbers. // // c matches any literal character c // \\d matches any decimal digit // \\D matches any character that's not a decimal digit // \\f matches \f // \\n matches \n // \\r matches \r // \\s matches any ASCII whitespace, including \n // \\S matches any character that's not a whitespace // \\t matches \t // \\v matches \v // \\w matches any letter, _, or decimal digit // \\W matches any character that \\w doesn't match // \\c matches any literal character c, which must be a punctuation // . matches any single character except \n // A? 
matches 0 or 1 occurrences of A // A* matches 0 or many occurrences of A // A+ matches 1 or many occurrences of A // ^ matches the beginning of a string (not that of each line) // $ matches the end of a string (not that of each line) // xy matches x followed by y // // If you accidentally use PCRE or POSIX extended regex features // not implemented by us, you will get a run-time failure. In that // case, please try to rewrite your regular expression within the // above syntax. // // This implementation is *not* meant to be as highly tuned or robust // as a compiled regex library, but should perform well enough for a // death test, which already incurs significant overhead by launching // a child process. // // Known caveats: // // A "threadsafe" style death test obtains the path to the test // program from argv[0] and re-executes it in the sub-process. For // simplicity, the current implementation doesn't search the PATH // when launching the sub-process. This means that the user must // invoke the test program via a path that contains at least one // path separator (e.g. path/to/foo_test and // /absolute/path/to/bar_test are fine, but foo_test is not). This // is rarely a problem as people usually don't put the test binary // directory in PATH. // // TODO(wan@google.com): make thread-safe death tests search the PATH. // Asserts that a given statement causes the program to exit, with an // integer exit status that satisfies predicate, and emitting error output // that matches regex. # define ASSERT_EXIT(statement, predicate, regex) \ GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_FATAL_FAILURE_) // Like ASSERT_EXIT, but continues on to successive tests in the // test case, if any: # define EXPECT_EXIT(statement, predicate, regex) \ GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_NONFATAL_FAILURE_) // Asserts that a given statement causes the program to exit, either by // explicitly exiting with a nonzero exit code or being killed by a // signal, and emitting error output that matches regex. # define ASSERT_DEATH(statement, regex) \ ASSERT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex) // Like ASSERT_DEATH, but continues on to successive tests in the // test case, if any: # define EXPECT_DEATH(statement, regex) \ EXPECT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex) // Two predicate classes that can be used in {ASSERT,EXPECT}_EXIT*: // Tests that an exit code describes a normal exit with a given exit code. class GTEST_API_ ExitedWithCode { public: explicit ExitedWithCode(int exit_code); bool operator()(int exit_status) const; private: // No implementation - assignment is unsupported. void operator=(const ExitedWithCode& other); const int exit_code_; }; # if !GTEST_OS_WINDOWS // Tests that an exit code describes an exit due to termination by a // given signal. class GTEST_API_ KilledBySignal { public: explicit KilledBySignal(int signum); bool operator()(int exit_status) const; private: const int signum_; }; # endif // !GTEST_OS_WINDOWS // EXPECT_DEBUG_DEATH asserts that the given statements die in debug mode. // The death testing framework causes this to have interesting semantics, // since the sideeffects of the call are only visible in opt mode, and not // in debug mode. 
// // In practice, this can be used to test functions that utilize the // LOG(DFATAL) macro using the following style: // // int DieInDebugOr12(int* sideeffect) { // if (sideeffect) { // *sideeffect = 12; // } // LOG(DFATAL) << "death"; // return 12; // } // // TEST(TestCase, TestDieOr12WorksInDgbAndOpt) { // int sideeffect = 0; // // Only asserts in dbg. // EXPECT_DEBUG_DEATH(DieInDebugOr12(&sideeffect), "death"); // // #ifdef NDEBUG // // opt-mode has sideeffect visible. // EXPECT_EQ(12, sideeffect); // #else // // dbg-mode no visible sideeffect. // EXPECT_EQ(0, sideeffect); // #endif // } // // This will assert that DieInDebugReturn12InOpt() crashes in debug // mode, usually due to a DCHECK or LOG(DFATAL), but returns the // appropriate fallback value (12 in this case) in opt mode. If you // need to test that a function has appropriate side-effects in opt // mode, include assertions against the side-effects. A general // pattern for this is: // // EXPECT_DEBUG_DEATH({ // // Side-effects here will have an effect after this statement in // // opt mode, but none in debug mode. // EXPECT_EQ(12, DieInDebugOr12(&sideeffect)); // }, "death"); // # ifdef NDEBUG # define EXPECT_DEBUG_DEATH(statement, regex) \ GTEST_EXECUTE_STATEMENT_(statement, regex) # define ASSERT_DEBUG_DEATH(statement, regex) \ GTEST_EXECUTE_STATEMENT_(statement, regex) # else # define EXPECT_DEBUG_DEATH(statement, regex) \ EXPECT_DEATH(statement, regex) # define ASSERT_DEBUG_DEATH(statement, regex) \ ASSERT_DEATH(statement, regex) # endif // NDEBUG for EXPECT_DEBUG_DEATH #endif // GTEST_HAS_DEATH_TEST // EXPECT_DEATH_IF_SUPPORTED(statement, regex) and // ASSERT_DEATH_IF_SUPPORTED(statement, regex) expand to real death tests if // death tests are supported; otherwise they just issue a warning. This is // useful when you are combining death test assertions with normal test // assertions in one test. #if GTEST_HAS_DEATH_TEST # define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \ EXPECT_DEATH(statement, regex) # define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \ ASSERT_DEATH(statement, regex) #else # define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \ GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, ) # define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \ GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, return) #endif } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ // This file was GENERATED by command: // pump.py gtest-param-test.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: vladl@google.com (Vlad Losev) // // Macros and functions for implementing parameterized tests // in Google C++ Testing Framework (Google Test) // // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // #ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ // Value-parameterized tests allow you to test your code with different // parameters without writing multiple copies of the same test. // // Here is how you use value-parameterized tests: #if 0 // To write value-parameterized tests, first you should define a fixture // class. It is usually derived from testing::TestWithParam (see below for // another inheritance scheme that's sometimes useful in more complicated // class hierarchies), where the type of your parameter values. // TestWithParam is itself derived from testing::Test. T can be any // copyable type. If it's a raw pointer, you are responsible for managing the // lifespan of the pointed values. class FooTest : public ::testing::TestWithParam { // You can implement all the usual class fixture members here. }; // Then, use the TEST_P macro to define as many parameterized tests // for this fixture as you want. The _P suffix is for "parameterized" // or "pattern", whichever you prefer to think. TEST_P(FooTest, DoesBlah) { // Inside a test, access the test parameter with the GetParam() method // of the TestWithParam class: EXPECT_TRUE(foo.Blah(GetParam())); ... } TEST_P(FooTest, HasBlahBlah) { ... } // Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test // case with any set of parameters you want. Google Test defines a number // of functions for generating test parameters. They return what we call // (surprise!) parameter generators. Here is a summary of them, which // are all in the testing namespace: // // // Range(begin, end [, step]) - Yields values {begin, begin+step, // begin+step+step, ...}. The values do not // include end. step defaults to 1. // Values(v1, v2, ..., vN) - Yields values {v1, v2, ..., vN}. // ValuesIn(container) - Yields values from a C-style array, an STL // ValuesIn(begin,end) container, or an iterator range [begin, end). // Bool() - Yields sequence {false, true}. // Combine(g1, g2, ..., gN) - Yields all combinations (the Cartesian product // for the math savvy) of the values generated // by the N generators. // // For more details, see comments at the definitions of these functions below // in this file. // // The following statement will instantiate tests from the FooTest test case // each with parameter values "meeny", "miny", and "moe". INSTANTIATE_TEST_CASE_P(InstantiationName, FooTest, Values("meeny", "miny", "moe")); // To distinguish different instances of the pattern, (yes, you // can instantiate it more then once) the first argument to the // INSTANTIATE_TEST_CASE_P macro is a prefix that will be added to the // actual test case name. Remember to pick unique prefixes for different // instantiations. 
The tests from the instantiation above will have // these names: // // * InstantiationName/FooTest.DoesBlah/0 for "meeny" // * InstantiationName/FooTest.DoesBlah/1 for "miny" // * InstantiationName/FooTest.DoesBlah/2 for "moe" // * InstantiationName/FooTest.HasBlahBlah/0 for "meeny" // * InstantiationName/FooTest.HasBlahBlah/1 for "miny" // * InstantiationName/FooTest.HasBlahBlah/2 for "moe" // // You can use these names in --gtest_filter. // // This statement will instantiate all tests from FooTest again, each // with parameter values "cat" and "dog": const char* pets[] = {"cat", "dog"}; INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ValuesIn(pets)); // The tests from the instantiation above will have these names: // // * AnotherInstantiationName/FooTest.DoesBlah/0 for "cat" // * AnotherInstantiationName/FooTest.DoesBlah/1 for "dog" // * AnotherInstantiationName/FooTest.HasBlahBlah/0 for "cat" // * AnotherInstantiationName/FooTest.HasBlahBlah/1 for "dog" // // Please note that INSTANTIATE_TEST_CASE_P will instantiate all tests // in the given test case, whether their definitions come before or // AFTER the INSTANTIATE_TEST_CASE_P statement. // // Please also note that generator expressions (including parameters to the // generators) are evaluated in InitGoogleTest(), after main() has started. // This allows the user on one hand, to adjust generator parameters in order // to dynamically determine a set of tests to run and on the other hand, // give the user a chance to inspect the generated tests with Google Test // reflection API before RUN_ALL_TESTS() is executed. // // You can see samples/sample7_unittest.cc and samples/sample8_unittest.cc // for more examples. // // In the future, we plan to publish the API for defining new parameter // generators. But for now this interface remains part of the internal // implementation and is subject to change. // // // A parameterized test fixture must be derived from testing::Test and from // testing::WithParamInterface, where T is the type of the parameter // values. Inheriting from TestWithParam satisfies that requirement because // TestWithParam inherits from both Test and WithParamInterface. In more // complicated hierarchies, however, it is occasionally useful to inherit // separately from Test and WithParamInterface. For example: class BaseTest : public ::testing::Test { // You can inherit all the usual members for a non-parameterized test // fixture here. }; class DerivedTest : public BaseTest, public ::testing::WithParamInterface { // The usual test fixture members go here too. }; TEST_F(BaseTest, HasFoo) { // This is an ordinary non-parameterized test. } TEST_P(DerivedTest, DoesBlah) { // GetParam works just the same here as if you inherit from TestWithParam. EXPECT_TRUE(foo.Blah(GetParam())); } #endif // 0 #if !GTEST_OS_SYMBIAN # include #endif // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // Type and function utilities for implementing parameterized tests. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ #include #include #include // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. // Copyright 2003 Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: Dan Egnor (egnor@google.com) // // A "smart" pointer type with reference tracking. Every pointer to a // particular object is kept on a circular linked list. When the last pointer // to an object is destroyed or reassigned, the object is deleted. // // Used properly, this deletes the object when the last reference goes away. 
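// A minimal usage sketch (Foo is a made-up type, not part of this header):
//
//   linked_ptr<Foo> a(new Foo);   // a owns the Foo
//   {
//     linked_ptr<Foo> b = a;      // a and b now share the same Foo
//   }                             // b departs the circle; Foo stays alive
//   a.reset();                    // last reference released; Foo is deleted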
// There are several caveats: // - Like all reference counting schemes, cycles lead to leaks. // - Each smart pointer is actually two pointers (8 bytes instead of 4). // - Every time a pointer is assigned, the entire list of pointers to that // object is traversed. This class is therefore NOT SUITABLE when there // will often be more than two or three pointers to a particular object. // - References are only tracked as long as linked_ptr<> objects are copied. // If a linked_ptr<> is converted to a raw pointer and back, BAD THINGS // will happen (double deletion). // // A good use of this class is storing object references in STL containers. // You can safely put linked_ptr<> in a vector<>. // Other uses may not be as good. // // Note: If you use an incomplete type with linked_ptr<>, the class // *containing* linked_ptr<> must have a constructor and destructor (even // if they do nothing!). // // Bill Gibbons suggested we use something like this. // // Thread Safety: // Unlike other linked_ptr implementations, in this implementation // a linked_ptr object is thread-safe in the sense that: // - it's safe to copy linked_ptr objects concurrently, // - it's safe to copy *from* a linked_ptr and read its underlying // raw pointer (e.g. via get()) concurrently, and // - it's safe to write to two linked_ptrs that point to the same // shared object concurrently. // TODO(wan@google.com): rename this to safe_linked_ptr to avoid // confusion with normal linked_ptr. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ #include #include namespace testing { namespace internal { // Protects copying of all linked_ptr objects. GTEST_API_ GTEST_DECLARE_STATIC_MUTEX_(g_linked_ptr_mutex); // This is used internally by all instances of linked_ptr<>. It needs to be // a non-template class because different types of linked_ptr<> can refer to // the same object (linked_ptr(obj) vs linked_ptr(obj)). // So, it needs to be possible for different types of linked_ptr to participate // in the same circular linked list, so we need a single class type here. // // DO NOT USE THIS CLASS DIRECTLY YOURSELF. Use linked_ptr. class linked_ptr_internal { public: // Create a new circle that includes only this instance. void join_new() { next_ = this; } // Many linked_ptr operations may change p.link_ for some linked_ptr // variable p in the same circle as this object. Therefore we need // to prevent two such operations from occurring concurrently. // // Note that different types of linked_ptr objects can coexist in a // circle (e.g. linked_ptr, linked_ptr, and // linked_ptr). Therefore we must use a single mutex to // protect all linked_ptr objects. This can create serious // contention in production code, but is acceptable in a testing // framework. // Join an existing circle. void join(linked_ptr_internal const* ptr) GTEST_LOCK_EXCLUDED_(g_linked_ptr_mutex) { MutexLock lock(&g_linked_ptr_mutex); linked_ptr_internal const* p = ptr; while (p->next_ != ptr) p = p->next_; p->next_ = this; next_ = ptr; } // Leave whatever circle we're part of. Returns true if we were the // last member of the circle. Once this is done, you can join() another. 
bool depart() GTEST_LOCK_EXCLUDED_(g_linked_ptr_mutex) { MutexLock lock(&g_linked_ptr_mutex); if (next_ == this) return true; linked_ptr_internal const* p = next_; while (p->next_ != this) p = p->next_; p->next_ = next_; return false; } private: mutable linked_ptr_internal const* next_; }; template class linked_ptr { public: typedef T element_type; // Take over ownership of a raw pointer. This should happen as soon as // possible after the object is created. explicit linked_ptr(T* ptr = NULL) { capture(ptr); } ~linked_ptr() { depart(); } // Copy an existing linked_ptr<>, adding ourselves to the list of references. template linked_ptr(linked_ptr const& ptr) { copy(&ptr); } linked_ptr(linked_ptr const& ptr) { // NOLINT assert(&ptr != this); copy(&ptr); } // Assignment releases the old value and acquires the new. template linked_ptr& operator=(linked_ptr const& ptr) { depart(); copy(&ptr); return *this; } linked_ptr& operator=(linked_ptr const& ptr) { if (&ptr != this) { depart(); copy(&ptr); } return *this; } // Smart pointer members. void reset(T* ptr = NULL) { depart(); capture(ptr); } T* get() const { return value_; } T* operator->() const { return value_; } T& operator*() const { return *value_; } bool operator==(T* p) const { return value_ == p; } bool operator!=(T* p) const { return value_ != p; } template bool operator==(linked_ptr const& ptr) const { return value_ == ptr.get(); } template bool operator!=(linked_ptr const& ptr) const { return value_ != ptr.get(); } private: template friend class linked_ptr; T* value_; linked_ptr_internal link_; void depart() { if (link_.depart()) delete value_; } void capture(T* ptr) { value_ = ptr; link_.join_new(); } template void copy(linked_ptr const* ptr) { value_ = ptr->get(); if (value_) link_.join(&ptr->link_); else link_.join_new(); } }; template inline bool operator==(T* ptr, const linked_ptr& x) { return ptr == x.get(); } template inline bool operator!=(T* ptr, const linked_ptr& x) { return ptr != x.get(); } // A function to convert T* into linked_ptr // Doing e.g. make_linked_ptr(new FooBarBaz(arg)) is a shorter notation // for linked_ptr >(new FooBarBaz(arg)) template linked_ptr make_linked_ptr(T* ptr) { return linked_ptr(ptr); } } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ // Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Google Test - The Google C++ Testing Framework // // This file implements a universal value printer that can print a // value of any type T: // // void ::testing::internal::UniversalPrinter::Print(value, ostream_ptr); // // A user can teach this function how to print a class type T by // defining either operator<<() or PrintTo() in the namespace that // defines T. More specifically, the FIRST defined function in the // following list will be used (assuming T is defined in namespace // foo): // // 1. foo::PrintTo(const T&, ostream*) // 2. operator<<(ostream&, const T&) defined in either foo or the // global namespace. // // If none of the above is defined, it will print the debug string of // the value if it is a protocol buffer, or print the raw bytes in the // value otherwise. // // To aid debugging: when T is a reference type, the address of the // value is also printed; when T is a (const) char pointer, both the // pointer value and the NUL-terminated string it points to are // printed. // // We also provide some convenient wrappers: // // // Prints a value to a string. For a (const or not) char // // pointer, the NUL-terminated string (but not the pointer) is // // printed. // std::string ::testing::PrintToString(const T& value); // // // Prints a value tersely: for a reference type, the referenced // // value (but not the address) is printed; for a (const or not) char // // pointer, the NUL-terminated string (but not the pointer) is // // printed. // void ::testing::internal::UniversalTersePrint(const T& value, ostream*); // // // Prints value using the type inferred by the compiler. The difference // // from UniversalTersePrint() is that this function prints both the // // pointer and the NUL-terminated string for a (const or not) char pointer. // void ::testing::internal::UniversalPrint(const T& value, ostream*); // // // Prints the fields of a tuple tersely to a string vector, one // // element for each field. Tuple support must be enabled in // // gtest-port.h. // std::vector UniversalTersePrintTupleFieldsToStrings( // const Tuple& value); // // Known limitation: // // The print primitives print the elements of an STL-style container // using the compiler-inferred type of *iter where iter is a // const_iterator of the container. When const_iterator is an input // iterator but not a forward iterator, this inferred type may not // match value_type, and the print output may be incorrect. In // practice, this is rarely a problem as for most containers // const_iterator is a forward iterator. We'll fix this if there's an // actual need for it. Note that this fix cannot rely on value_type // being defined as many user-defined container types don't have // value_type. 
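// A typical way to use hook (1) above, sketched with made-up names:
//
//   namespace foo {
//   struct Point { int x; int y; };
//   void PrintTo(const Point& p, ::std::ostream* os) {
//     *os << "(" << p.x << ", " << p.y << ")";
//   }
//   }  // namespace foo
//
// With this in place, ::testing::PrintToString(pt) and assertion failure
// messages involving a foo::Point render the value as, e.g., "(1, 2)"
// rather than as raw bytes.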
#ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ #define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ #include // NOLINT #include #include #include #include namespace testing { // Definitions in the 'internal' and 'internal2' name spaces are // subject to change without notice. DO NOT USE THEM IN USER CODE! namespace internal2 { // Prints the given number of bytes in the given object to the given // ostream. GTEST_API_ void PrintBytesInObjectTo(const unsigned char* obj_bytes, size_t count, ::std::ostream* os); // For selecting which printer to use when a given type has neither << // nor PrintTo(). enum TypeKind { kProtobuf, // a protobuf type kConvertibleToInteger, // a type implicitly convertible to BiggestInt // (e.g. a named or unnamed enum type) kOtherType // anything else }; // TypeWithoutFormatter::PrintValue(value, os) is called // by the universal printer to print a value of type T when neither // operator<< nor PrintTo() is defined for T, where kTypeKind is the // "kind" of T as defined by enum TypeKind. template class TypeWithoutFormatter { public: // This default version is called when kTypeKind is kOtherType. static void PrintValue(const T& value, ::std::ostream* os) { PrintBytesInObjectTo(reinterpret_cast(&value), sizeof(value), os); } }; // We print a protobuf using its ShortDebugString() when the string // doesn't exceed this many characters; otherwise we print it using // DebugString() for better readability. const size_t kProtobufOneLinerMaxLength = 50; template class TypeWithoutFormatter { public: static void PrintValue(const T& value, ::std::ostream* os) { const ::testing::internal::string short_str = value.ShortDebugString(); const ::testing::internal::string pretty_str = short_str.length() <= kProtobufOneLinerMaxLength ? short_str : ("\n" + value.DebugString()); *os << ("<" + pretty_str + ">"); } }; template class TypeWithoutFormatter { public: // Since T has no << operator or PrintTo() but can be implicitly // converted to BiggestInt, we print it as a BiggestInt. // // Most likely T is an enum type (either named or unnamed), in which // case printing it as an integer is the desired behavior. In case // T is not an enum, printing it as an integer is the best we can do // given that it has no user-defined printer. static void PrintValue(const T& value, ::std::ostream* os) { const internal::BiggestInt kBigInt = value; *os << kBigInt; } }; // Prints the given value to the given ostream. If the value is a // protocol message, its debug string is printed; if it's an enum or // of a type implicitly convertible to BiggestInt, it's printed as an // integer; otherwise the bytes in the value are printed. This is // what UniversalPrinter::Print() does when it knows nothing about // type T and T has neither << operator nor PrintTo(). // // A user can override this behavior for a class type Foo by defining // a << operator in the namespace where Foo is defined. // // We put this operator in namespace 'internal2' instead of 'internal' // to simplify the implementation, as much code in 'internal' needs to // use << in STL, which would conflict with our own << were it defined // in 'internal'. // // Note that this operator<< takes a generic std::basic_ostream type instead of the more restricted std::ostream. 
If // we define it to take an std::ostream instead, we'll get an // "ambiguous overloads" compiler error when trying to print a type // Foo that supports streaming to std::basic_ostream, as the compiler cannot tell whether // operator<<(std::ostream&, const T&) or // operator<<(std::basic_stream, const Foo&) is more // specific. template ::std::basic_ostream& operator<<( ::std::basic_ostream& os, const T& x) { TypeWithoutFormatter::value ? kProtobuf : internal::ImplicitlyConvertible::value ? kConvertibleToInteger : kOtherType)>::PrintValue(x, &os); return os; } } // namespace internal2 } // namespace testing // This namespace MUST NOT BE NESTED IN ::testing, or the name look-up // magic needed for implementing UniversalPrinter won't work. namespace testing_internal { // Used to print a value that is not an STL-style container when the // user doesn't define PrintTo() for it. template void DefaultPrintNonContainerTo(const T& value, ::std::ostream* os) { // With the following statement, during unqualified name lookup, // testing::internal2::operator<< appears as if it was declared in // the nearest enclosing namespace that contains both // ::testing_internal and ::testing::internal2, i.e. the global // namespace. For more details, refer to the C++ Standard section // 7.3.4-1 [namespace.udir]. This allows us to fall back onto // testing::internal2::operator<< in case T doesn't come with a << // operator. // // We cannot write 'using ::testing::internal2::operator<<;', which // gcc 3.3 fails to compile due to a compiler bug. using namespace ::testing::internal2; // NOLINT // Assuming T is defined in namespace foo, in the next statement, // the compiler will consider all of: // // 1. foo::operator<< (thanks to Koenig look-up), // 2. ::operator<< (as the current namespace is enclosed in ::), // 3. testing::internal2::operator<< (thanks to the using statement above). // // The operator<< whose type matches T best will be picked. // // We deliberately allow #2 to be a candidate, as sometimes it's // impossible to define #1 (e.g. when foo is ::std, defining // anything in it is undefined behavior unless you are a compiler // vendor.). *os << value; } } // namespace testing_internal namespace testing { namespace internal { // UniversalPrinter::Print(value, ostream_ptr) prints the given // value to the given ostream. The caller must ensure that // 'ostream_ptr' is not NULL, or the behavior is undefined. // // We define UniversalPrinter as a class template (as opposed to a // function template), as we need to partially specialize it for // reference types, which cannot be done with function templates. template class UniversalPrinter; template void UniversalPrint(const T& value, ::std::ostream* os); // Used to print an STL-style container when the user doesn't define // a PrintTo() for it. template void DefaultPrintTo(IsContainer /* dummy */, false_type /* is not a pointer */, const C& container, ::std::ostream* os) { const size_t kMaxCount = 32; // The maximum number of elements to print. *os << '{'; size_t count = 0; for (typename C::const_iterator it = container.begin(); it != container.end(); ++it, ++count) { if (count > 0) { *os << ','; if (count == kMaxCount) { // Enough has been printed. *os << " ..."; break; } } *os << ' '; // We cannot call PrintTo(*it, os) here as PrintTo() doesn't // handle *it being a native array. 
internal::UniversalPrint(*it, os); } if (count > 0) { *os << ' '; } *os << '}'; } // Used to print a pointer that is neither a char pointer nor a member // pointer, when the user doesn't define PrintTo() for it. (A member // variable pointer or member function pointer doesn't really point to // a location in the address space. Their representation is // implementation-defined. Therefore they will be printed as raw // bytes.) template void DefaultPrintTo(IsNotContainer /* dummy */, true_type /* is a pointer */, T* p, ::std::ostream* os) { if (p == NULL) { *os << "NULL"; } else { // C++ doesn't allow casting from a function pointer to any object // pointer. // // IsTrue() silences warnings: "Condition is always true", // "unreachable code". if (IsTrue(ImplicitlyConvertible::value)) { // T is not a function type. We just call << to print p, // relying on ADL to pick up user-defined << for their pointer // types, if any. *os << p; } else { // T is a function type, so '*os << p' doesn't do what we want // (it just prints p as bool). We want to print p as a const // void*. However, we cannot cast it to const void* directly, // even using reinterpret_cast, as earlier versions of gcc // (e.g. 3.4.5) cannot compile the cast when p is a function // pointer. Casting to UInt64 first solves the problem. *os << reinterpret_cast( reinterpret_cast(p)); } } } // Used to print a non-container, non-pointer value when the user // doesn't define PrintTo() for it. template void DefaultPrintTo(IsNotContainer /* dummy */, false_type /* is not a pointer */, const T& value, ::std::ostream* os) { ::testing_internal::DefaultPrintNonContainerTo(value, os); } // Prints the given value using the << operator if it has one; // otherwise prints the bytes in it. This is what // UniversalPrinter::Print() does when PrintTo() is not specialized // or overloaded for type T. // // A user can override this behavior for a class type Foo by defining // an overload of PrintTo() in the namespace where Foo is defined. We // give the user this option as sometimes defining a << operator for // Foo is not desirable (e.g. the coding style may prevent doing it, // or there is already a << operator but it doesn't do what the user // wants). template void PrintTo(const T& value, ::std::ostream* os) { // DefaultPrintTo() is overloaded. The type of its first two // arguments determine which version will be picked. If T is an // STL-style container, the version for container will be called; if // T is a pointer, the pointer version will be called; otherwise the // generic version will be called. // // Note that we check for container types here, prior to we check // for protocol message types in our operator<<. The rationale is: // // For protocol messages, we want to give people a chance to // override Google Mock's format by defining a PrintTo() or // operator<<. For STL containers, other formats can be // incompatible with Google Mock's format for the container // elements; therefore we check for container types here to ensure // that our format is used. // // The second argument of DefaultPrintTo() is needed to bypass a bug // in Symbian's C++ compiler that prevents it from picking the right // overload between: // // PrintTo(const T& x, ...); // PrintTo(T* x, ...); DefaultPrintTo(IsContainerTest(0), is_pointer(), value, os); } // The following list of PrintTo() overloads tells // UniversalPrinter::Print() how to print standard types (built-in // types, strings, plain arrays, and pointers). // Overloads for various char types. 
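// (Illustration: with the overloads below, a char such as 'a' is printed
// together with its numeric code, typically rendered along the lines of
// 'a' (97, 0x61); the exact formatting is implemented in the printers
// source file.)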
GTEST_API_ void PrintTo(unsigned char c, ::std::ostream* os); GTEST_API_ void PrintTo(signed char c, ::std::ostream* os); inline void PrintTo(char c, ::std::ostream* os) { // When printing a plain char, we always treat it as unsigned. This // way, the output won't be affected by whether the compiler thinks // char is signed or not. PrintTo(static_cast(c), os); } // Overloads for other simple built-in types. inline void PrintTo(bool x, ::std::ostream* os) { *os << (x ? "true" : "false"); } // Overload for wchar_t type. // Prints a wchar_t as a symbol if it is printable or as its internal // code otherwise and also as its decimal code (except for L'\0'). // The L'\0' char is printed as "L'\\0'". The decimal code is printed // as signed integer when wchar_t is implemented by the compiler // as a signed type and is printed as an unsigned integer when wchar_t // is implemented as an unsigned type. GTEST_API_ void PrintTo(wchar_t wc, ::std::ostream* os); // Overloads for C strings. GTEST_API_ void PrintTo(const char* s, ::std::ostream* os); inline void PrintTo(char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } // signed/unsigned char is often used for representing binary data, so // we print pointers to it as void* to be safe. inline void PrintTo(const signed char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(signed char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(const unsigned char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(unsigned char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } // MSVC can be configured to define wchar_t as a typedef of unsigned // short. It defines _NATIVE_WCHAR_T_DEFINED when wchar_t is a native // type. When wchar_t is a typedef, defining an overload for const // wchar_t* would cause unsigned short* be printed as a wide string, // possibly causing invalid memory accesses. #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED) // Overloads for wide C strings GTEST_API_ void PrintTo(const wchar_t* s, ::std::ostream* os); inline void PrintTo(wchar_t* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } #endif // Overload for C arrays. Multi-dimensional arrays are printed // properly. // Prints the given number of elements in an array, without printing // the curly braces. template void PrintRawArrayTo(const T a[], size_t count, ::std::ostream* os) { UniversalPrint(a[0], os); for (size_t i = 1; i != count; i++) { *os << ", "; UniversalPrint(a[i], os); } } // Overloads for ::string and ::std::string. #if GTEST_HAS_GLOBAL_STRING GTEST_API_ void PrintStringTo(const ::string&s, ::std::ostream* os); inline void PrintTo(const ::string& s, ::std::ostream* os) { PrintStringTo(s, os); } #endif // GTEST_HAS_GLOBAL_STRING GTEST_API_ void PrintStringTo(const ::std::string&s, ::std::ostream* os); inline void PrintTo(const ::std::string& s, ::std::ostream* os) { PrintStringTo(s, os); } // Overloads for ::wstring and ::std::wstring. #if GTEST_HAS_GLOBAL_WSTRING GTEST_API_ void PrintWideStringTo(const ::wstring&s, ::std::ostream* os); inline void PrintTo(const ::wstring& s, ::std::ostream* os) { PrintWideStringTo(s, os); } #endif // GTEST_HAS_GLOBAL_WSTRING #if GTEST_HAS_STD_WSTRING GTEST_API_ void PrintWideStringTo(const ::std::wstring&s, ::std::ostream* os); inline void PrintTo(const ::std::wstring& s, ::std::ostream* os) { PrintWideStringTo(s, os); } #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_TR1_TUPLE // Overload for ::std::tr1::tuple. 
Needed for printing function arguments, // which are packed as tuples. // Helper function for printing a tuple. T must be instantiated with // a tuple type. template void PrintTupleTo(const T& t, ::std::ostream* os); // Overloaded PrintTo() for tuples of various arities. We support // tuples of up-to 10 fields. The following implementation works // regardless of whether tr1::tuple is implemented using the // non-standard variadic template feature or not. inline void PrintTo(const ::std::tr1::tuple<>& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo( const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } #endif // GTEST_HAS_TR1_TUPLE // Overload for std::pair. template void PrintTo(const ::std::pair& value, ::std::ostream* os) { *os << '('; // We cannot use UniversalPrint(value.first, os) here, as T1 may be // a reference type. The same for printing value.second. UniversalPrinter::Print(value.first, os); *os << ", "; UniversalPrinter::Print(value.second, os); *os << ')'; } // Implements printing a non-reference type T by letting the compiler // pick the right overload of PrintTo() for T. template class UniversalPrinter { public: // MSVC warns about adding const to a function type, so we want to // disable the warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4180) // Temporarily disables warning 4180. #endif // _MSC_VER // Note: we deliberately don't call this PrintTo(), as that name // conflicts with ::testing::internal::PrintTo in the body of the // function. static void Print(const T& value, ::std::ostream* os) { // By default, ::testing::internal::PrintTo() is used for printing // the value. // // Thanks to Koenig look-up, if T is a class and has its own // PrintTo() function defined in its namespace, that function will // be visible here. Since it is more specific than the generic ones // in ::testing::internal, it will be picked by the compiler in the // following statement - exactly what we want. PrintTo(value, os); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif // _MSC_VER }; // UniversalPrintArray(begin, len, os) prints an array of 'len' // elements, starting at address 'begin'. template void UniversalPrintArray(const T* begin, size_t len, ::std::ostream* os) { if (len == 0) { *os << "{}"; } else { *os << "{ "; const size_t kThreshold = 18; const size_t kChunkSize = 8; // If the array has more than kThreshold elements, we'll have to // omit some details by printing only the first and the last // kChunkSize elements. // TODO(wan@google.com): let the user control the threshold using a flag. 
if (len <= kThreshold) { PrintRawArrayTo(begin, len, os); } else { PrintRawArrayTo(begin, kChunkSize, os); *os << ", ..., "; PrintRawArrayTo(begin + len - kChunkSize, kChunkSize, os); } *os << " }"; } } // This overload prints a (const) char array compactly. GTEST_API_ void UniversalPrintArray( const char* begin, size_t len, ::std::ostream* os); // This overload prints a (const) wchar_t array compactly. GTEST_API_ void UniversalPrintArray( const wchar_t* begin, size_t len, ::std::ostream* os); // Implements printing an array type T[N]. template class UniversalPrinter { public: // Prints the given array, omitting some elements when there are too // many. static void Print(const T (&a)[N], ::std::ostream* os) { UniversalPrintArray(a, N, os); } }; // Implements printing a reference type T&. template class UniversalPrinter { public: // MSVC warns about adding const to a function type, so we want to // disable the warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4180) // Temporarily disables warning 4180. #endif // _MSC_VER static void Print(const T& value, ::std::ostream* os) { // Prints the address of the value. We use reinterpret_cast here // as static_cast doesn't compile when T is a function type. *os << "@" << reinterpret_cast(&value) << " "; // Then prints the value itself. UniversalPrint(value, os); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif // _MSC_VER }; // Prints a value tersely: for a reference type, the referenced value // (but not the address) is printed; for a (const) char pointer, the // NUL-terminated string (but not the pointer) is printed. template class UniversalTersePrinter { public: static void Print(const T& value, ::std::ostream* os) { UniversalPrint(value, os); } }; template class UniversalTersePrinter { public: static void Print(const T& value, ::std::ostream* os) { UniversalPrint(value, os); } }; template class UniversalTersePrinter { public: static void Print(const T (&value)[N], ::std::ostream* os) { UniversalPrinter::Print(value, os); } }; template <> class UniversalTersePrinter { public: static void Print(const char* str, ::std::ostream* os) { if (str == NULL) { *os << "NULL"; } else { UniversalPrint(string(str), os); } } }; template <> class UniversalTersePrinter { public: static void Print(char* str, ::std::ostream* os) { UniversalTersePrinter::Print(str, os); } }; #if GTEST_HAS_STD_WSTRING template <> class UniversalTersePrinter { public: static void Print(const wchar_t* str, ::std::ostream* os) { if (str == NULL) { *os << "NULL"; } else { UniversalPrint(::std::wstring(str), os); } } }; #endif template <> class UniversalTersePrinter { public: static void Print(wchar_t* str, ::std::ostream* os) { UniversalTersePrinter::Print(str, os); } }; template void UniversalTersePrint(const T& value, ::std::ostream* os) { UniversalTersePrinter::Print(value, os); } // Prints a value using the type inferred by the compiler. The // difference between this and UniversalTersePrint() is that for a // (const) char pointer, this prints both the pointer and the // NUL-terminated string. template void UniversalPrint(const T& value, ::std::ostream* os) { // A workarond for the bug in VC++ 7.1 that prevents us from instantiating // UniversalPrinter with T directly. 
typedef T T1; UniversalPrinter::Print(value, os); } #if GTEST_HAS_TR1_TUPLE typedef ::std::vector Strings; // This helper template allows PrintTo() for tuples and // UniversalTersePrintTupleFieldsToStrings() to be defined by // induction on the number of tuple fields. The idea is that // TuplePrefixPrinter::PrintPrefixTo(t, os) prints the first N // fields in tuple t, and can be defined in terms of // TuplePrefixPrinter. // The inductive case. template struct TuplePrefixPrinter { // Prints the first N fields of a tuple. template static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) { TuplePrefixPrinter::PrintPrefixTo(t, os); *os << ", "; UniversalPrinter::type> ::Print(::std::tr1::get(t), os); } // Tersely prints the first N fields of a tuple to a string vector, // one element for each field. template static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) { TuplePrefixPrinter::TersePrintPrefixToStrings(t, strings); ::std::stringstream ss; UniversalTersePrint(::std::tr1::get(t), &ss); strings->push_back(ss.str()); } }; // Base cases. template <> struct TuplePrefixPrinter<0> { template static void PrintPrefixTo(const Tuple&, ::std::ostream*) {} template static void TersePrintPrefixToStrings(const Tuple&, Strings*) {} }; // We have to specialize the entire TuplePrefixPrinter<> class // template here, even though the definition of // TersePrintPrefixToStrings() is the same as the generic version, as // Embarcadero (formerly CodeGear, formerly Borland) C++ doesn't // support specializing a method template of a class template. template <> struct TuplePrefixPrinter<1> { template static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) { UniversalPrinter::type>:: Print(::std::tr1::get<0>(t), os); } template static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) { ::std::stringstream ss; UniversalTersePrint(::std::tr1::get<0>(t), &ss); strings->push_back(ss.str()); } }; // Helper function for printing a tuple. T must be instantiated with // a tuple type. template void PrintTupleTo(const T& t, ::std::ostream* os) { *os << "("; TuplePrefixPrinter< ::std::tr1::tuple_size::value>:: PrintPrefixTo(t, os); *os << ")"; } // Prints the fields of a tuple tersely to a string vector, one // element for each field. See the comment before // UniversalTersePrint() for how we define "tersely". template Strings UniversalTersePrintTupleFieldsToStrings(const Tuple& value) { Strings result; TuplePrefixPrinter< ::std::tr1::tuple_size::value>:: TersePrintPrefixToStrings(value, &result); return result; } #endif // GTEST_HAS_TR1_TUPLE } // namespace internal template ::std::string PrintToString(const T& value) { ::std::stringstream ss; internal::UniversalTersePrinter::Print(value, &ss); return ss.str(); } } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ #if GTEST_HAS_PARAM_TEST namespace testing { namespace internal { // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Outputs a message explaining invalid registration of different // fixture class for the same test case. This may happen when // TEST_P macro is used to define two tests with the same name // but in different namespaces. GTEST_API_ void ReportInvalidTestCaseType(const char* test_case_name, const char* file, int line); template class ParamGeneratorInterface; template class ParamGenerator; // Interface for iterating over elements provided by an implementation // of ParamGeneratorInterface. 
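// For orientation, an illustrative sketch (not part of Google Test) of how a
// generator is consumed through this iterator interface, e.g. by the TEST_P
// registration code:
//
//   ParamGenerator<int> gen = Range(0, 3);  // yields 0, 1, 2
//   for (ParamGenerator<int>::iterator it = gen.begin(); it != gen.end(); ++it) {
//     ConsumeValue(*it);  // ConsumeValue() is a hypothetical caller
//   }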
template class ParamIteratorInterface { public: virtual ~ParamIteratorInterface() {} // A pointer to the base generator instance. // Used only for the purposes of iterator comparison // to make sure that two iterators belong to the same generator. virtual const ParamGeneratorInterface* BaseGenerator() const = 0; // Advances iterator to point to the next element // provided by the generator. The caller is responsible // for not calling Advance() on an iterator equal to // BaseGenerator()->End(). virtual void Advance() = 0; // Clones the iterator object. Used for implementing copy semantics // of ParamIterator. virtual ParamIteratorInterface* Clone() const = 0; // Dereferences the current iterator and provides (read-only) access // to the pointed value. It is the caller's responsibility not to call // Current() on an iterator equal to BaseGenerator()->End(). // Used for implementing ParamGenerator::operator*(). virtual const T* Current() const = 0; // Determines whether the given iterator and other point to the same // element in the sequence generated by the generator. // Used for implementing ParamGenerator::operator==(). virtual bool Equals(const ParamIteratorInterface& other) const = 0; }; // Class iterating over elements provided by an implementation of // ParamGeneratorInterface. It wraps ParamIteratorInterface // and implements the const forward iterator concept. template class ParamIterator { public: typedef T value_type; typedef const T& reference; typedef ptrdiff_t difference_type; // ParamIterator assumes ownership of the impl_ pointer. ParamIterator(const ParamIterator& other) : impl_(other.impl_->Clone()) {} ParamIterator& operator=(const ParamIterator& other) { if (this != &other) impl_.reset(other.impl_->Clone()); return *this; } const T& operator*() const { return *impl_->Current(); } const T* operator->() const { return impl_->Current(); } // Prefix version of operator++. ParamIterator& operator++() { impl_->Advance(); return *this; } // Postfix version of operator++. ParamIterator operator++(int /*unused*/) { ParamIteratorInterface* clone = impl_->Clone(); impl_->Advance(); return ParamIterator(clone); } bool operator==(const ParamIterator& other) const { return impl_.get() == other.impl_.get() || impl_->Equals(*other.impl_); } bool operator!=(const ParamIterator& other) const { return !(*this == other); } private: friend class ParamGenerator; explicit ParamIterator(ParamIteratorInterface* impl) : impl_(impl) {} scoped_ptr > impl_; }; // ParamGeneratorInterface is the binary interface to access generators // defined in other translation units. template class ParamGeneratorInterface { public: typedef T ParamType; virtual ~ParamGeneratorInterface() {} // Generator interface definition virtual ParamIteratorInterface* Begin() const = 0; virtual ParamIteratorInterface* End() const = 0; }; // Wraps ParamGeneratorInterface and provides general generator syntax // compatible with the STL Container concept. // This class implements copy initialization semantics and the contained // ParamGeneratorInterface instance is shared among all copies // of the original object. This is possible because that instance is immutable. 
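// Illustrative sketch (not part of Google Test) of the copy semantics
// described above: copies of a ParamGenerator are cheap because they share
// one immutable implementation object, e.g.
//
//   ParamGenerator<int> a = Range(1, 10, 2);  // 1, 3, 5, 7, 9
//   ParamGenerator<int> b = a;                // b shares a's generator impl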
template class ParamGenerator { public: typedef ParamIterator iterator; explicit ParamGenerator(ParamGeneratorInterface* impl) : impl_(impl) {} ParamGenerator(const ParamGenerator& other) : impl_(other.impl_) {} ParamGenerator& operator=(const ParamGenerator& other) { impl_ = other.impl_; return *this; } iterator begin() const { return iterator(impl_->Begin()); } iterator end() const { return iterator(impl_->End()); } private: linked_ptr > impl_; }; // Generates values from a range of two comparable values. Can be used to // generate sequences of user-defined types that implement operator+() and // operator<(). // This class is used in the Range() function. template class RangeGenerator : public ParamGeneratorInterface { public: RangeGenerator(T begin, T end, IncrementT step) : begin_(begin), end_(end), step_(step), end_index_(CalculateEndIndex(begin, end, step)) {} virtual ~RangeGenerator() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, begin_, 0, step_); } virtual ParamIteratorInterface* End() const { return new Iterator(this, end_, end_index_, step_); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, T value, int index, IncrementT step) : base_(base), value_(value), index_(index), step_(step) {} virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } virtual void Advance() { value_ = value_ + step_; index_++; } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const T* Current() const { return &value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const int other_index = CheckedDowncastToActualType(&other)->index_; return index_ == other_index; } private: Iterator(const Iterator& other) : ParamIteratorInterface(), base_(other.base_), value_(other.value_), index_(other.index_), step_(other.step_) {} // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; T value_; int index_; const IncrementT step_; }; // class RangeGenerator::Iterator static int CalculateEndIndex(const T& begin, const T& end, const IncrementT& step) { int end_index = 0; for (T i = begin; i < end; i = i + step) end_index++; return end_index; } // No implementation - assignment is unsupported. void operator=(const RangeGenerator& other); const T begin_; const T end_; const IncrementT step_; // The index for the end() iterator. All the elements in the generated // sequence are indexed (0-based) to aid iterator comparison. const int end_index_; }; // class RangeGenerator // Generates values from a pair of STL-style iterators. Used in the // ValuesIn() function. The elements are copied from the source range // since the source can be located on the stack, and the generator // is likely to persist beyond that stack frame. 
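// Illustrative usage sketch (not part of this header): because the elements
// are copied into the generator, ValuesIn() can be fed from a source that
// only lives while the generator expression is evaluated, e.g.
//
//   ::std::vector<int> MakeCases() {  // hypothetical helper
//     ::std::vector<int> v;
//     v.push_back(1);
//     v.push_back(2);
//     return v;
//   }
//   INSTANTIATE_TEST_CASE_P(FromVector, FooTest,     // FooTest is made up
//                           ::testing::ValuesIn(MakeCases()));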
template class ValuesInIteratorRangeGenerator : public ParamGeneratorInterface { public: template ValuesInIteratorRangeGenerator(ForwardIterator begin, ForwardIterator end) : container_(begin, end) {} virtual ~ValuesInIteratorRangeGenerator() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, container_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, container_.end()); } private: typedef typename ::std::vector ContainerType; class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, typename ContainerType::const_iterator iterator) : base_(base), iterator_(iterator) {} virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } virtual void Advance() { ++iterator_; value_.reset(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } // We need to use cached value referenced by iterator_ because *iterator_ // can return a temporary object (and of type other then T), so just // having "return &*iterator_;" doesn't work. // value_ is updated here and not in Advance() because Advance() // can advance iterator_ beyond the end of the range, and we cannot // detect that fact. The client code, on the other hand, is // responsible for not calling Current() on an out-of-range iterator. virtual const T* Current() const { if (value_.get() == NULL) value_.reset(new T(*iterator_)); return value_.get(); } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; return iterator_ == CheckedDowncastToActualType(&other)->iterator_; } private: Iterator(const Iterator& other) // The explicit constructor call suppresses a false warning // emitted by gcc when supplied with the -Wextra option. : ParamIteratorInterface(), base_(other.base_), iterator_(other.iterator_) {} const ParamGeneratorInterface* const base_; typename ContainerType::const_iterator iterator_; // A cached value of *iterator_. We keep it here to allow access by // pointer in the wrapping iterator's operator->(). // value_ needs to be mutable to be accessed in Current(). // Use of scoped_ptr helps manage cached value's lifetime, // which is bound by the lifespan of the iterator itself. mutable scoped_ptr value_; }; // class ValuesInIteratorRangeGenerator::Iterator // No implementation - assignment is unsupported. void operator=(const ValuesInIteratorRangeGenerator& other); const ContainerType container_; }; // class ValuesInIteratorRangeGenerator // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Stores a parameter value and later creates tests parameterized with that // value. template class ParameterizedTestFactory : public TestFactoryBase { public: typedef typename TestClass::ParamType ParamType; explicit ParameterizedTestFactory(ParamType parameter) : parameter_(parameter) {} virtual Test* CreateTest() { TestClass::SetParam(¶meter_); return new TestClass(); } private: const ParamType parameter_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestFactory); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // TestMetaFactoryBase is a base class for meta-factories that create // test factories for passing into MakeAndRegisterTestInfo function. 
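// Roughly what happens for each generated parameter value (illustrative
// sketch only; FooTest is a hypothetical TestWithParam<int> fixture):
//
//   TestFactoryBase* factory =
//       new ParameterizedTestFactory<FooTest>(/* parameter = */ 3);
//   Test* test = factory->CreateTest();  // FooTest::SetParam() was called first
//
// TestMetaFactory below exists so that a fresh factory like this can be
// minted for every test/parameter combination handed to
// MakeAndRegisterTestInfo().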
template class TestMetaFactoryBase { public: virtual ~TestMetaFactoryBase() {} virtual TestFactoryBase* CreateTestFactory(ParamType parameter) = 0; }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // TestMetaFactory creates test factories for passing into // MakeAndRegisterTestInfo function. Since MakeAndRegisterTestInfo receives // ownership of test factory pointer, same factory object cannot be passed // into that method twice. But ParameterizedTestCaseInfo is going to call // it for each Test/Parameter value combination. Thus it needs meta factory // creator class. template class TestMetaFactory : public TestMetaFactoryBase { public: typedef typename TestCase::ParamType ParamType; TestMetaFactory() {} virtual TestFactoryBase* CreateTestFactory(ParamType parameter) { return new ParameterizedTestFactory(parameter); } private: GTEST_DISALLOW_COPY_AND_ASSIGN_(TestMetaFactory); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // ParameterizedTestCaseInfoBase is a generic interface // to ParameterizedTestCaseInfo classes. ParameterizedTestCaseInfoBase // accumulates test information provided by TEST_P macro invocations // and generators provided by INSTANTIATE_TEST_CASE_P macro invocations // and uses that information to register all resulting test instances // in RegisterTests method. The ParameterizeTestCaseRegistry class holds // a collection of pointers to the ParameterizedTestCaseInfo objects // and calls RegisterTests() on each of them when asked. class ParameterizedTestCaseInfoBase { public: virtual ~ParameterizedTestCaseInfoBase() {} // Base part of test case name for display purposes. virtual const string& GetTestCaseName() const = 0; // Test case id to verify identity. virtual TypeId GetTestCaseTypeId() const = 0; // UnitTest class invokes this method to register tests in this // test case right before running them in RUN_ALL_TESTS macro. // This method should not be called more then once on any single // instance of a ParameterizedTestCaseInfoBase derived class. virtual void RegisterTests() = 0; protected: ParameterizedTestCaseInfoBase() {} private: GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfoBase); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // ParameterizedTestCaseInfo accumulates tests obtained from TEST_P // macro invocations for a particular test case and generators // obtained from INSTANTIATE_TEST_CASE_P macro invocations for that // test case. It registers tests with all values generated by all // generators when asked. template class ParameterizedTestCaseInfo : public ParameterizedTestCaseInfoBase { public: // ParamType and GeneratorCreationFunc are private types but are required // for declarations of public methods AddTestPattern() and // AddTestCaseInstantiation(). typedef typename TestCase::ParamType ParamType; // A function that returns an instance of appropriate generator type. typedef ParamGenerator(GeneratorCreationFunc)(); explicit ParameterizedTestCaseInfo(const char* name) : test_case_name_(name) {} // Test case base name for display purposes. virtual const string& GetTestCaseName() const { return test_case_name_; } // Test case id to verify identity. virtual TypeId GetTestCaseTypeId() const { return GetTypeId(); } // TEST_P macro uses AddTestPattern() to record information // about a single test in a LocalTestInfo structure. // test_case_name is the base name of the test case (without invocation // prefix). test_base_name is the name of an individual test without // parameter index. 
For the test SequenceA/FooTest.DoBar/1 FooTest is // test case base name and DoBar is test base name. void AddTestPattern(const char* test_case_name, const char* test_base_name, TestMetaFactoryBase* meta_factory) { tests_.push_back(linked_ptr(new TestInfo(test_case_name, test_base_name, meta_factory))); } // INSTANTIATE_TEST_CASE_P macro uses AddGenerator() to record information // about a generator. int AddTestCaseInstantiation(const string& instantiation_name, GeneratorCreationFunc* func, const char* /* file */, int /* line */) { instantiations_.push_back(::std::make_pair(instantiation_name, func)); return 0; // Return value used only to run this method in namespace scope. } // UnitTest class invokes this method to register tests in this test case // test cases right before running tests in RUN_ALL_TESTS macro. // This method should not be called more then once on any single // instance of a ParameterizedTestCaseInfoBase derived class. // UnitTest has a guard to prevent from calling this method more then once. virtual void RegisterTests() { for (typename TestInfoContainer::iterator test_it = tests_.begin(); test_it != tests_.end(); ++test_it) { linked_ptr test_info = *test_it; for (typename InstantiationContainer::iterator gen_it = instantiations_.begin(); gen_it != instantiations_.end(); ++gen_it) { const string& instantiation_name = gen_it->first; ParamGenerator generator((*gen_it->second)()); string test_case_name; if ( !instantiation_name.empty() ) test_case_name = instantiation_name + "/"; test_case_name += test_info->test_case_base_name; int i = 0; for (typename ParamGenerator::iterator param_it = generator.begin(); param_it != generator.end(); ++param_it, ++i) { Message test_name_stream; test_name_stream << test_info->test_base_name << "/" << i; MakeAndRegisterTestInfo( test_case_name.c_str(), test_name_stream.GetString().c_str(), NULL, // No type parameter. PrintToString(*param_it).c_str(), GetTestCaseTypeId(), TestCase::SetUpTestCase, TestCase::TearDownTestCase, test_info->test_meta_factory->CreateTestFactory(*param_it)); } // for param_it } // for gen_it } // for test_it } // RegisterTests private: // LocalTestInfo structure keeps information about a single test registered // with TEST_P macro. struct TestInfo { TestInfo(const char* a_test_case_base_name, const char* a_test_base_name, TestMetaFactoryBase* a_test_meta_factory) : test_case_base_name(a_test_case_base_name), test_base_name(a_test_base_name), test_meta_factory(a_test_meta_factory) {} const string test_case_base_name; const string test_base_name; const scoped_ptr > test_meta_factory; }; typedef ::std::vector > TestInfoContainer; // Keeps pairs of // received from INSTANTIATE_TEST_CASE_P macros. typedef ::std::vector > InstantiationContainer; const string test_case_name_; TestInfoContainer tests_; InstantiationContainer instantiations_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfo); }; // class ParameterizedTestCaseInfo // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // ParameterizedTestCaseRegistry contains a map of ParameterizedTestCaseInfoBase // classes accessed by test case names. TEST_P and INSTANTIATE_TEST_CASE_P // macros use it to locate their corresponding ParameterizedTestCaseInfo // descriptors. 
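// For reference, the user-level macros that ultimately feed this registry
// look roughly like the following (illustrative sketch; FooTest is made up):
//
//   class FooTest : public ::testing::TestWithParam<int> {};
//
//   TEST_P(FooTest, IsNonNegative) {
//     EXPECT_LE(0, GetParam());
//   }
//
//   INSTANTIATE_TEST_CASE_P(SmallValues, FooTest, ::testing::Values(1, 2, 3));
//
// TEST_P records the test pattern, INSTANTIATE_TEST_CASE_P records the
// generator, and RegisterTests() later emits tests named e.g.
// SmallValues/FooTest.IsNonNegative/0 through .../2.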
class ParameterizedTestCaseRegistry { public: ParameterizedTestCaseRegistry() {} ~ParameterizedTestCaseRegistry() { for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { delete *it; } } // Looks up or creates and returns a structure containing information about // tests and instantiations of a particular test case. template ParameterizedTestCaseInfo* GetTestCasePatternHolder( const char* test_case_name, const char* file, int line) { ParameterizedTestCaseInfo* typed_test_info = NULL; for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { if ((*it)->GetTestCaseName() == test_case_name) { if ((*it)->GetTestCaseTypeId() != GetTypeId()) { // Complain about incorrect usage of Google Test facilities // and terminate the program since we cannot guaranty correct // test case setup and tear-down in this case. ReportInvalidTestCaseType(test_case_name, file, line); posix::Abort(); } else { // At this point we are sure that the object we found is of the same // type we are looking for, so we downcast it to that type // without further checks. typed_test_info = CheckedDowncastToActualType< ParameterizedTestCaseInfo >(*it); } break; } } if (typed_test_info == NULL) { typed_test_info = new ParameterizedTestCaseInfo(test_case_name); test_case_infos_.push_back(typed_test_info); } return typed_test_info; } void RegisterTests() { for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { (*it)->RegisterTests(); } } private: typedef ::std::vector TestCaseInfoContainer; TestCaseInfoContainer test_case_infos_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseRegistry); }; } // namespace internal } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ // This file was GENERATED by command: // pump.py gtest-param-util-generated.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
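// Illustrative note (not part of the generated code): the ValueArrayN helpers
// defined below are what let a call such as
//
//   INSTANTIATE_TEST_CASE_P(Widths, DoubleTest, ::testing::Values(1, 2.5f, 3.0));
//
// work for a hypothetical fixture DoubleTest whose ParamType is double:
// Values(1, 2.5f, 3.0) returns a ValueArray3<int, float, double>, and its
// templated conversion operator static_casts each stored value to double when
// a ParamGenerator<double> is requested.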
// // Author: vladl@google.com (Vlad Losev) // Type and function utilities for implementing parameterized tests. // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently Google Test supports at most 50 arguments in Values, // and at most 10 arguments in Combine. Please contact // googletestframework@googlegroups.com if you need more. // Please note that the number of arguments to Combine is limited // by the maximum arity of the implementation of tr1::tuple which is // currently set at 10. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #if GTEST_HAS_PARAM_TEST namespace testing { // Forward declarations of ValuesIn(), which is implemented in // include/gtest/gtest-param-test.h. template internal::ParamGenerator< typename ::testing::internal::IteratorTraits::value_type> ValuesIn(ForwardIterator begin, ForwardIterator end); template internal::ParamGenerator ValuesIn(const T (&array)[N]); template internal::ParamGenerator ValuesIn( const Container& container); namespace internal { // Used in the Values() function to provide polymorphic capabilities. template class ValueArray1 { public: explicit ValueArray1(T1 v1) : v1_(v1) {} template operator ParamGenerator() const { return ValuesIn(&v1_, &v1_ + 1); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray1& other); const T1 v1_; }; template class ValueArray2 { public: ValueArray2(T1 v1, T2 v2) : v1_(v1), v2_(v2) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray2& other); const T1 v1_; const T2 v2_; }; template class ValueArray3 { public: ValueArray3(T1 v1, T2 v2, T3 v3) : v1_(v1), v2_(v2), v3_(v3) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray3& other); const T1 v1_; const T2 v2_; const T3 v3_; }; template class ValueArray4 { public: ValueArray4(T1 v1, T2 v2, T3 v3, T4 v4) : v1_(v1), v2_(v2), v3_(v3), v4_(v4) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray4& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; }; template class ValueArray5 { public: ValueArray5(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray5& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; }; template class ValueArray6 { public: ValueArray6(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray6& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; }; template class ValueArray7 { public: ValueArray7(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray7& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; }; template class ValueArray8 { public: ValueArray8(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray8& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; }; template class ValueArray9 { public: ValueArray9(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray9& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; }; template class ValueArray10 { public: ValueArray10(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray10& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; }; template class ValueArray11 { public: ValueArray11(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray11& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; }; template class ValueArray12 { public: ValueArray12(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray12& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; }; template class ValueArray13 { public: ValueArray13(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray13& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; }; template class ValueArray14 { public: ValueArray14(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray14& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; }; template class ValueArray15 { public: ValueArray15(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray15& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; }; template class ValueArray16 { public: ValueArray16(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray16& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; }; template class ValueArray17 { public: ValueArray17(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray17& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; }; template class ValueArray18 { public: ValueArray18(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray18& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; }; template class ValueArray19 { public: ValueArray19(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray19& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; }; template class ValueArray20 { public: ValueArray20(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray20& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; }; template class ValueArray21 { public: ValueArray21(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray21& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; }; template class ValueArray22 { public: ValueArray22(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray22& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; }; template class ValueArray23 { public: ValueArray23(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray23& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; }; template class ValueArray24 { public: ValueArray24(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray24& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; }; template class ValueArray25 { public: ValueArray25(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray25& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; }; template class ValueArray26 { public: ValueArray26(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray26& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; }; template class ValueArray27 { public: ValueArray27(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray27& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; }; template class ValueArray28 { public: ValueArray28(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray28& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; }; template class ValueArray29 { public: ValueArray29(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray29& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; }; template class ValueArray30 { public: ValueArray30(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray30& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; }; template class ValueArray31 { public: ValueArray31(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray31& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; }; template class ValueArray32 { public: ValueArray32(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray32& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; }; template class ValueArray33 { public: ValueArray33(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray33& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; }; template class ValueArray34 { public: ValueArray34(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray34& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; }; template class ValueArray35 { public: ValueArray35(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray35& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; }; template class ValueArray36 { public: ValueArray36(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray36& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; }; template class ValueArray37 { public: ValueArray37(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray37& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; }; template class ValueArray38 { public: ValueArray38(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray38& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; }; template class ValueArray39 { public: ValueArray39(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray39& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; }; template class ValueArray40 { public: ValueArray40(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray40& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; }; template class ValueArray41 { public: ValueArray41(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray41& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; }; template class ValueArray42 { public: ValueArray42(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray42& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; }; template class ValueArray43 { public: ValueArray43(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray43& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; }; template class ValueArray44 { public: ValueArray44(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray44& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; }; template class ValueArray45 { public: ValueArray45(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray45& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; }; template class ValueArray46 { public: ValueArray46(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray46& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; }; template class ValueArray47 { public: ValueArray47(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray47& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; }; template class ValueArray48 { public: ValueArray48(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray48& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; const T48 v48_; }; template class ValueArray49 { public: ValueArray49(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_), static_cast(v49_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray49& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; const T48 v48_; const T49 v49_; }; template class ValueArray50 { public: ValueArray50(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49, T50 v50) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49), v50_(v50) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_), static_cast(v49_), static_cast(v50_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray50& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; const T48 v48_; const T49 v49_; const T50 v50_; }; # if GTEST_HAS_COMBINE // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Generates values from the Cartesian product of values produced // by the argument generators. // template class CartesianProductGenerator2 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator2(const ParamGenerator& g1, const ParamGenerator& g2) : g1_(g1), g2_(g2) {} virtual ~CartesianProductGenerator2() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current2_; if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; ParamType current_value_; }; // class CartesianProductGenerator2::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator2& other); const ParamGenerator g1_; const ParamGenerator g2_; }; // class CartesianProductGenerator2 template class CartesianProductGenerator3 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator3(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3) : g1_(g1), g2_(g2), g3_(g3) {} virtual ~CartesianProductGenerator3() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current3_; if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." 
<< std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; ParamType current_value_; }; // class CartesianProductGenerator3::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator3& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; }; // class CartesianProductGenerator3 template class CartesianProductGenerator4 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator4(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4) : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {} virtual ~CartesianProductGenerator4() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current4_; if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; ParamType current_value_; }; // class CartesianProductGenerator4::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator4& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; }; // class CartesianProductGenerator4 template class CartesianProductGenerator5 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator5(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {} virtual ~CartesianProductGenerator5() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current5_; if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; ParamType current_value_; }; // class CartesianProductGenerator5::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator5& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; }; // class CartesianProductGenerator5 template class CartesianProductGenerator6 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator6(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {} virtual ~CartesianProductGenerator6() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current6_; if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; ParamType current_value_; }; // class CartesianProductGenerator6::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator6& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; }; // class CartesianProductGenerator6 template class CartesianProductGenerator7 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator7(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {} virtual ~CartesianProductGenerator7() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current7_; if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. 
That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; ParamType current_value_; }; // class CartesianProductGenerator7::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator7& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; }; // class CartesianProductGenerator7 template class CartesianProductGenerator8 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator8(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8) {} virtual ~CartesianProductGenerator8() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current8_; if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. 
GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; ParamType current_value_; }; // class CartesianProductGenerator8::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator8& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; }; // class CartesianProductGenerator8 template class CartesianProductGenerator9 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator9(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8, const ParamGenerator& g9) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9) {} virtual ~CartesianProductGenerator9() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end(), g9_, g9_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8, const ParamGenerator& g9, const typename ParamGenerator::iterator& current9) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8), begin9_(g9.begin()), end9_(g9.end()), current9_(current9) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. 
virtual void Advance() { assert(!AtEnd()); ++current9_; if (current9_ == end9_) { current9_ = begin9_; ++current8_; } if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_ && current9_ == typed_other->current9_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_), begin9_(other.begin9_), end9_(other.end9_), current9_(other.current9_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_, *current9_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_ || current9_ == end9_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. 
const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; const typename ParamGenerator::iterator begin9_; const typename ParamGenerator::iterator end9_; typename ParamGenerator::iterator current9_; ParamType current_value_; }; // class CartesianProductGenerator9::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator9& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; const ParamGenerator g9_; }; // class CartesianProductGenerator9 template class CartesianProductGenerator10 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator10(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8, const ParamGenerator& g9, const ParamGenerator& g10) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9), g10_(g10) {} virtual ~CartesianProductGenerator10() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin(), g10_, g10_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end(), g9_, g9_.end(), g10_, g10_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8, const ParamGenerator& g9, const 
typename ParamGenerator::iterator& current9, const ParamGenerator& g10, const typename ParamGenerator::iterator& current10) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8), begin9_(g9.begin()), end9_(g9.end()), current9_(current9), begin10_(g10.begin()), end10_(g10.end()), current10_(current10) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current10_; if (current10_ == end10_) { current10_ = begin10_; ++current9_; } if (current9_ == end9_) { current9_ = begin9_; ++current8_; } if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_ && current9_ == typed_other->current9_ && current10_ == typed_other->current10_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_), begin9_(other.begin9_), end9_(other.end9_), current9_(other.current9_), begin10_(other.begin10_), end10_(other.end10_), current10_(other.current10_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_, *current9_, *current10_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_ || current9_ == end9_ || current10_ == end10_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. 
const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; const typename ParamGenerator::iterator begin9_; const typename ParamGenerator::iterator end9_; typename ParamGenerator::iterator current9_; const typename ParamGenerator::iterator begin10_; const typename ParamGenerator::iterator end10_; typename ParamGenerator::iterator current10_; ParamType current_value_; }; // class CartesianProductGenerator10::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator10& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; const ParamGenerator g9_; const ParamGenerator g10_; }; // class CartesianProductGenerator10 // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Helper classes providing Combine() with polymorphic features. They allow // casting CartesianProductGeneratorN to ParamGenerator if T is // convertible to U. // template class CartesianProductHolder2 { public: CartesianProductHolder2(const Generator1& g1, const Generator2& g2) : g1_(g1), g2_(g2) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator2( static_cast >(g1_), static_cast >(g2_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder2& other); const Generator1 g1_; const Generator2 g2_; }; // class CartesianProductHolder2 template class CartesianProductHolder3 { public: CartesianProductHolder3(const Generator1& g1, const Generator2& g2, const Generator3& g3) : g1_(g1), g2_(g2), g3_(g3) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator3( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder3& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; }; // class CartesianProductHolder3 template class CartesianProductHolder4 { public: CartesianProductHolder4(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4) : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator4( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder4& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; }; // class CartesianProductHolder4 template class CartesianProductHolder5 { public: CartesianProductHolder5(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator5( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder5& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; }; // class CartesianProductHolder5 template class CartesianProductHolder6 { public: CartesianProductHolder6(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator6( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder6& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; }; // class CartesianProductHolder6 template class CartesianProductHolder7 { public: CartesianProductHolder7(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator7( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder7& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; }; // class CartesianProductHolder7 template class CartesianProductHolder8 { public: CartesianProductHolder8(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator8( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder8& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; }; // class CartesianProductHolder8 template class CartesianProductHolder9 { public: CartesianProductHolder9(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator9( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_), static_cast >(g9_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder9& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; const Generator9 g9_; }; // class CartesianProductHolder9 template class CartesianProductHolder10 { public: CartesianProductHolder10(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9, const Generator10& g10) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9), g10_(g10) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator10( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_), static_cast >(g9_), static_cast >(g10_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder10& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; const Generator9 g9_; const Generator10 g10_; }; // class CartesianProductHolder10 # endif // GTEST_HAS_COMBINE } // namespace internal } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ #if GTEST_HAS_PARAM_TEST namespace testing { // Functions producing parameter generators. // // Google Test uses these generators to produce parameters for value- // parameterized tests. When a parameterized test case is instantiated // with a particular generator, Google Test creates and runs tests // for each element in the sequence produced by the generator. // // In the following sample, tests from test case FooTest are instantiated // each three times with parameter values 3, 5, and 8: // // class FooTest : public TestWithParam { ... }; // // TEST_P(FooTest, TestThis) { // } // TEST_P(FooTest, TestThat) { // } // INSTANTIATE_TEST_CASE_P(TestSequence, FooTest, Values(3, 5, 8)); // // Range() returns generators providing sequences of values in a range. // // Synopsis: // Range(start, end) // - returns a generator producing a sequence of values {start, start+1, // start+2, ..., }. // Range(start, end, step) // - returns a generator producing a sequence of values {start, start+step, // start+step+step, ..., }. // Notes: // * The generated sequences never include end. For example, Range(1, 5) // returns a generator producing a sequence {1, 2, 3, 4}. Range(1, 9, 2) // returns a generator producing {1, 3, 5, 7}. // * start and end must have the same type. That type may be any integral or // floating-point type or a user defined type satisfying these conditions: // * It must be assignable (have operator=() defined). // * It must have operator+() (operator+(int-compatible type) for // two-operand version). // * It must have operator<() defined. // Elements in the resulting sequences will also have that type. // * Condition start < end must be satisfied in order for resulting sequences // to contain any elements. // template internal::ParamGenerator Range(T start, T end, IncrementT step) { return internal::ParamGenerator( new internal::RangeGenerator(start, end, step)); } template internal::ParamGenerator Range(T start, T end) { return Range(start, end, 1); } // ValuesIn() function allows generation of tests with parameters coming from // a container. // // Synopsis: // ValuesIn(const T (&array)[N]) // - returns a generator producing sequences with elements from // a C-style array. // ValuesIn(const Container& container) // - returns a generator producing sequences with elements from // an STL-style container. // ValuesIn(Iterator begin, Iterator end) // - returns a generator producing sequences with elements from // a range [begin, end) defined by a pair of STL-style iterators. These // iterators can also be plain C pointers. // // Please note that ValuesIn copies the values from the containers // passed in and keeps them to generate tests in RUN_ALL_TESTS(). 
//
// Examples:
//
// This instantiates tests from test case StringTest
// each with C-string values of "foo", "bar", and "baz":
//
// const char* strings[] = {"foo", "bar", "baz"};
// INSTANTIATE_TEST_CASE_P(StringSequence, StringTest, ValuesIn(strings));
//
// This instantiates tests from test case StlStringTest
// each with STL strings with values "a" and "b":
//
// ::std::vector< ::std::string> GetParameterStrings() {
//   ::std::vector< ::std::string> v;
//   v.push_back("a");
//   v.push_back("b");
//   return v;
// }
//
// INSTANTIATE_TEST_CASE_P(CharSequence,
//                         StlStringTest,
//                         ValuesIn(GetParameterStrings()));
//
//
// This will also instantiate tests from CharTest
// each with parameter values 'a' and 'b':
//
// ::std::list<char> GetParameterChars() {
//   ::std::list<char> list;
//   list.push_back('a');
//   list.push_back('b');
//   return list;
// }
// ::std::list<char> l = GetParameterChars();
// INSTANTIATE_TEST_CASE_P(CharSequence2,
//                         CharTest,
//                         ValuesIn(l.begin(), l.end()));
//
template <typename ForwardIterator>
internal::ParamGenerator<
  typename ::testing::internal::IteratorTraits<ForwardIterator>::value_type>
ValuesIn(ForwardIterator begin, ForwardIterator end) {
  typedef typename ::testing::internal::IteratorTraits<ForwardIterator>
      ::value_type ParamType;
  return internal::ParamGenerator<ParamType>(
      new internal::ValuesInIteratorRangeGenerator<ParamType>(begin, end));
}

template <typename T, size_t N>
internal::ParamGenerator<T> ValuesIn(const T (&array)[N]) {
  return ValuesIn(array, array + N);
}

template <class Container>
internal::ParamGenerator<typename Container::value_type> ValuesIn(
    const Container& container) {
  return ValuesIn(container.begin(), container.end());
}

// Values() allows generating tests from explicitly specified list of
// parameters.
//
// Synopsis:
// Values(T v1, T v2, ..., T vN)
//   - returns a generator producing sequences with elements v1, v2, ..., vN.
//
// For example, this instantiates tests from test case BarTest each
// with values "one", "two", and "three":
//
// INSTANTIATE_TEST_CASE_P(NumSequence, BarTest, Values("one", "two", "three"));
//
// This instantiates tests from test case BazTest each with values 1, 2, 3.5.
// The exact type of values will depend on the type of parameter in BazTest.
//
// INSTANTIATE_TEST_CASE_P(FloatingNumbers, BazTest, Values(1, 2, 3.5));
//
// Currently, Values() supports from 1 to 50 parameters.
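//
// As an additional illustrative sketch (not part of the upstream Google Test
// documentation), this is how Values() or ValuesIn() typically feeds
// INSTANTIATE_TEST_CASE_P; the fixture name EvenNumberTest is hypothetical:
//
//   class EvenNumberTest : public ::testing::TestWithParam<int> {};
//
//   TEST_P(EvenNumberTest, IsDivisibleByTwo) {
//     // GetParam() returns the current element of the generated sequence.
//     EXPECT_EQ(0, GetParam() % 2);
//   }
//
//   // Values() lists the parameters explicitly; ValuesIn() could instead
//   // draw them from an array or an STL-style container.
//   INSTANTIATE_TEST_CASE_P(SmallEvens, EvenNumberTest,
//                           ::testing::Values(2, 4, 6));
//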
// template internal::ValueArray1 Values(T1 v1) { return internal::ValueArray1(v1); } template internal::ValueArray2 Values(T1 v1, T2 v2) { return internal::ValueArray2(v1, v2); } template internal::ValueArray3 Values(T1 v1, T2 v2, T3 v3) { return internal::ValueArray3(v1, v2, v3); } template internal::ValueArray4 Values(T1 v1, T2 v2, T3 v3, T4 v4) { return internal::ValueArray4(v1, v2, v3, v4); } template internal::ValueArray5 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) { return internal::ValueArray5(v1, v2, v3, v4, v5); } template internal::ValueArray6 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6) { return internal::ValueArray6(v1, v2, v3, v4, v5, v6); } template internal::ValueArray7 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7) { return internal::ValueArray7(v1, v2, v3, v4, v5, v6, v7); } template internal::ValueArray8 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8) { return internal::ValueArray8(v1, v2, v3, v4, v5, v6, v7, v8); } template internal::ValueArray9 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9) { return internal::ValueArray9(v1, v2, v3, v4, v5, v6, v7, v8, v9); } template internal::ValueArray10 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10) { return internal::ValueArray10(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10); } template internal::ValueArray11 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11) { return internal::ValueArray11(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11); } template internal::ValueArray12 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12) { return internal::ValueArray12(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12); } template internal::ValueArray13 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13) { return internal::ValueArray13(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13); } template internal::ValueArray14 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14) { return internal::ValueArray14(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14); } template internal::ValueArray15 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) { return internal::ValueArray15(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15); } template internal::ValueArray16 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16) { return internal::ValueArray16(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16); } template internal::ValueArray17 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17) { return internal::ValueArray17(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17); } template internal::ValueArray18 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18) { return internal::ValueArray18(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18); } template internal::ValueArray19 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19) { return 
internal::ValueArray19(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19); } template internal::ValueArray20 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20) { return internal::ValueArray20(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20); } template internal::ValueArray21 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21) { return internal::ValueArray21(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21); } template internal::ValueArray22 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22) { return internal::ValueArray22(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22); } template internal::ValueArray23 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23) { return internal::ValueArray23(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23); } template internal::ValueArray24 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24) { return internal::ValueArray24(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24); } template internal::ValueArray25 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25) { return internal::ValueArray25(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25); } template internal::ValueArray26 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26) { return internal::ValueArray26(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26); } template internal::ValueArray27 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27) { return internal::ValueArray27(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27); } template internal::ValueArray28 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28) { return internal::ValueArray28(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, 
v28); } template internal::ValueArray29 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29) { return internal::ValueArray29(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29); } template internal::ValueArray30 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) { return internal::ValueArray30(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30); } template internal::ValueArray31 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) { return internal::ValueArray31(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31); } template internal::ValueArray32 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32) { return internal::ValueArray32(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32); } template internal::ValueArray33 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33) { return internal::ValueArray33(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33); } template internal::ValueArray34 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34) { return internal::ValueArray34(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34); } template internal::ValueArray35 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35) { return internal::ValueArray35(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35); } template internal::ValueArray36 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, 
T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36) { return internal::ValueArray36(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36); } template internal::ValueArray37 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37) { return internal::ValueArray37(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37); } template internal::ValueArray38 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38) { return internal::ValueArray38(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38); } template internal::ValueArray39 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39) { return internal::ValueArray39(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39); } template internal::ValueArray40 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) { return internal::ValueArray40(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40); } template internal::ValueArray41 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41) { return internal::ValueArray41(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41); } template internal::ValueArray42 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, 
T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42) { return internal::ValueArray42(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42); } template internal::ValueArray43 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43) { return internal::ValueArray43(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43); } template internal::ValueArray44 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44) { return internal::ValueArray44(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44); } template internal::ValueArray45 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45) { return internal::ValueArray45(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45); } template internal::ValueArray46 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) { return internal::ValueArray46(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46); } template internal::ValueArray47 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 
v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) { return internal::ValueArray47(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47); } template internal::ValueArray48 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48) { return internal::ValueArray48(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48); } template internal::ValueArray49 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49) { return internal::ValueArray49(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48, v49); } template internal::ValueArray50 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49, T50 v50) { return internal::ValueArray50(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48, v49, v50); } // Bool() allows generating tests with parameters in a set of (false, true). // // Synopsis: // Bool() // - returns a generator producing sequences with elements {false, true}. // // It is useful when testing code that depends on Boolean flags. Combinations // of multiple flags can be tested when several Bool()'s are combined using // Combine() function. // // In the following example all tests in the test case FlagDependentTest // will be instantiated twice with parameters false and true. // // class FlagDependentTest : public testing::TestWithParam { // virtual void SetUp() { // external_flag = GetParam(); // } // } // INSTANTIATE_TEST_CASE_P(BoolSequence, FlagDependentTest, Bool()); // inline internal::ParamGenerator Bool() { return Values(false, true); } # if GTEST_HAS_COMBINE // Combine() allows the user to combine two or more sequences to produce // values of a Cartesian product of those sequences' elements. 
// // Synopsis: // Combine(gen1, gen2, ..., genN) // - returns a generator producing sequences with elements coming from // the Cartesian product of elements from the sequences generated by // gen1, gen2, ..., genN. The sequence elements will have a type of // tuple where T1, T2, ..., TN are the types // of elements from sequences produces by gen1, gen2, ..., genN. // // Combine can have up to 10 arguments. This number is currently limited // by the maximum number of elements in the tuple implementation used by Google // Test. // // Example: // // This will instantiate tests in test case AnimalTest each one with // the parameter values tuple("cat", BLACK), tuple("cat", WHITE), // tuple("dog", BLACK), and tuple("dog", WHITE): // // enum Color { BLACK, GRAY, WHITE }; // class AnimalTest // : public testing::TestWithParam > {...}; // // TEST_P(AnimalTest, AnimalLooksNice) {...} // // INSTANTIATE_TEST_CASE_P(AnimalVariations, AnimalTest, // Combine(Values("cat", "dog"), // Values(BLACK, WHITE))); // // This will instantiate tests in FlagDependentTest with all variations of two // Boolean flags: // // class FlagDependentTest // : public testing::TestWithParam > { // virtual void SetUp() { // // Assigns external_flag_1 and external_flag_2 values from the tuple. // tie(external_flag_1, external_flag_2) = GetParam(); // } // }; // // TEST_P(FlagDependentTest, TestFeature1) { // // Test your code using external_flag_1 and external_flag_2 here. // } // INSTANTIATE_TEST_CASE_P(TwoBoolSequence, FlagDependentTest, // Combine(Bool(), Bool())); // template internal::CartesianProductHolder2 Combine( const Generator1& g1, const Generator2& g2) { return internal::CartesianProductHolder2( g1, g2); } template internal::CartesianProductHolder3 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3) { return internal::CartesianProductHolder3( g1, g2, g3); } template internal::CartesianProductHolder4 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4) { return internal::CartesianProductHolder4( g1, g2, g3, g4); } template internal::CartesianProductHolder5 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5) { return internal::CartesianProductHolder5( g1, g2, g3, g4, g5); } template internal::CartesianProductHolder6 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6) { return internal::CartesianProductHolder6( g1, g2, g3, g4, g5, g6); } template internal::CartesianProductHolder7 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7) { return internal::CartesianProductHolder7( g1, g2, g3, g4, g5, g6, g7); } template internal::CartesianProductHolder8 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8) { return internal::CartesianProductHolder8( g1, g2, g3, g4, g5, g6, g7, g8); } template internal::CartesianProductHolder9 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9) { return internal::CartesianProductHolder9( g1, g2, g3, g4, g5, g6, g7, g8, g9); } template internal::CartesianProductHolder10 Combine( const 
Generator1& g1, const Generator2& g2, const Generator3& g3,
    const Generator4& g4, const Generator5& g5, const Generator6& g6,
    const Generator7& g7, const Generator8& g8, const Generator9& g9,
    const Generator10& g10) {
  return internal::CartesianProductHolder10<Generator1, Generator2, Generator3,
      Generator4, Generator5, Generator6, Generator7, Generator8, Generator9,
      Generator10>(
      g1, g2, g3, g4, g5, g6, g7, g8, g9, g10);
}

# endif // GTEST_HAS_COMBINE

# define TEST_P(test_case_name, test_name) \
  class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \
      : public test_case_name { \
   public: \
    GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {} \
    virtual void TestBody(); \
   private: \
    static int AddToRegistry() { \
      ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \
          GetTestCasePatternHolder<test_case_name>(\
              #test_case_name, __FILE__, __LINE__)->AddTestPattern(\
                  #test_case_name, \
                  #test_name, \
                  new ::testing::internal::TestMetaFactory< \
                      GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>()); \
      return 0; \
    } \
    static int gtest_registering_dummy_; \
    GTEST_DISALLOW_COPY_AND_ASSIGN_(\
        GTEST_TEST_CLASS_NAME_(test_case_name, test_name)); \
  }; \
  int GTEST_TEST_CLASS_NAME_(test_case_name, \
                             test_name)::gtest_registering_dummy_ = \
      GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::AddToRegistry(); \
  void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody()

# define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator) \
  ::testing::internal::ParamGenerator<test_case_name::ParamType> \
      gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } \
  int gtest_##prefix##test_case_name##_dummy_ = \
      ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \
          GetTestCasePatternHolder<test_case_name>(\
              #test_case_name, __FILE__, __LINE__)->AddTestCaseInstantiation(\
                  #prefix, \
                  &gtest_##prefix##test_case_name##_EvalGenerator_, \
                  __FILE__, __LINE__)

} // namespace testing

#endif // GTEST_HAS_PARAM_TEST

#endif // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_
// Copyright 2006, Google Inc.
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
//
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Google Inc. nor the names of its
// contributors may be used to endorse or promote products derived from
// this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Author: wan@google.com (Zhanyong Wan)
//
// Google C++ Testing Framework definitions useful in production code.
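//
// An additional illustrative sketch (not part of the upstream Google Test
// sources) of the Combine() and Bool() generators declared above; the
// fixture name CacheConfigTest is hypothetical:
//
//   class CacheConfigTest
//       : public ::testing::TestWithParam< ::std::tr1::tuple<bool, int> > {};
//
//   TEST_P(CacheConfigTest, HandlesConfiguration) {
//     bool write_through = ::std::tr1::get<0>(GetParam());
//     int num_ways = ::std::tr1::get<1>(GetParam());
//     // Exercise the code under test with this flag/way-count combination.
//   }
//
//   // Instantiates the four combinations (false,1), (false,4), (true,1)
//   // and (true,4).
//   INSTANTIATE_TEST_CASE_P(AllCacheConfigs, CacheConfigTest,
//                           ::testing::Combine(::testing::Bool(),
//                                              ::testing::Values(1, 4)));
//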
#ifndef GTEST_INCLUDE_GTEST_GTEST_PROD_H_ #define GTEST_INCLUDE_GTEST_GTEST_PROD_H_ // When you need to test the private or protected members of a class, // use the FRIEND_TEST macro to declare your tests as friends of the // class. For example: // // class MyClass { // private: // void MyMethod(); // FRIEND_TEST(MyClassTest, MyMethod); // }; // // class MyClassTest : public testing::Test { // // ... // }; // // TEST_F(MyClassTest, MyMethod) { // // Can call MyClass::MyMethod() here. // } #define FRIEND_TEST(test_case_name, test_name)\ friend class test_case_name##_##test_name##_Test #endif // GTEST_INCLUDE_GTEST_GTEST_PROD_H_ // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // #ifndef GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ #define GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ #include #include namespace testing { // A copyable object representing the result of a test part (i.e. an // assertion or an explicit FAIL(), ADD_FAILURE(), or SUCCESS()). // // Don't inherit from TestPartResult as its destructor is not virtual. class GTEST_API_ TestPartResult { public: // The possible outcomes of a test part (i.e. an assertion or an // explicit SUCCEED(), FAIL(), or ADD_FAILURE()). enum Type { kSuccess, // Succeeded. kNonFatalFailure, // Failed but the test can continue. kFatalFailure // Failed and the test should be terminated. }; // C'tor. TestPartResult does NOT have a default constructor. // Always use this constructor (with parameters) to create a // TestPartResult object. TestPartResult(Type a_type, const char* a_file_name, int a_line_number, const char* a_message) : type_(a_type), file_name_(a_file_name == NULL ? "" : a_file_name), line_number_(a_line_number), summary_(ExtractSummary(a_message)), message_(a_message) { } // Gets the outcome of the test part. Type type() const { return type_; } // Gets the name of the source file where the test part took place, or // NULL if it's unknown. const char* file_name() const { return file_name_.empty() ? 
NULL : file_name_.c_str(); } // Gets the line in the source file where the test part took place, // or -1 if it's unknown. int line_number() const { return line_number_; } // Gets the summary of the failure message. const char* summary() const { return summary_.c_str(); } // Gets the message associated with the test part. const char* message() const { return message_.c_str(); } // Returns true iff the test part passed. bool passed() const { return type_ == kSuccess; } // Returns true iff the test part failed. bool failed() const { return type_ != kSuccess; } // Returns true iff the test part non-fatally failed. bool nonfatally_failed() const { return type_ == kNonFatalFailure; } // Returns true iff the test part fatally failed. bool fatally_failed() const { return type_ == kFatalFailure; } private: Type type_; // Gets the summary of the failure message by omitting the stack // trace in it. static std::string ExtractSummary(const char* message); // The name of the source file where the test part took place, or // "" if the source file is unknown. std::string file_name_; // The line in the source file where the test part took place, or -1 // if the line number is unknown. int line_number_; std::string summary_; // The test failure summary. std::string message_; // The test failure message. }; // Prints a TestPartResult object. std::ostream& operator<<(std::ostream& os, const TestPartResult& result); // An array of TestPartResult objects. // // Don't inherit from TestPartResultArray as its destructor is not // virtual. class GTEST_API_ TestPartResultArray { public: TestPartResultArray() {} // Appends the given TestPartResult to the array. void Append(const TestPartResult& result); // Returns the TestPartResult at the given index (0-based). const TestPartResult& GetTestPartResult(int index) const; // Returns the number of TestPartResult objects in the array. int size() const; private: std::vector array_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestPartResultArray); }; // This interface knows how to report a test part result. class TestPartResultReporterInterface { public: virtual ~TestPartResultReporterInterface() {} virtual void ReportTestPartResult(const TestPartResult& result) = 0; }; namespace internal { // This helper class is used by {ASSERT|EXPECT}_NO_FATAL_FAILURE to check if a // statement generates new fatal failures. To do so it registers itself as the // current test part result reporter. Besides checking if fatal failures were // reported, it only delegates the reporting to the former result reporter. // The original result reporter is restored in the destructor. // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. class GTEST_API_ HasNewFatalFailureHelper : public TestPartResultReporterInterface { public: HasNewFatalFailureHelper(); virtual ~HasNewFatalFailureHelper(); virtual void ReportTestPartResult(const TestPartResult& result); bool has_new_fatal_failure() const { return has_new_fatal_failure_; } private: bool has_new_fatal_failure_; TestPartResultReporterInterface* original_reporter_; GTEST_DISALLOW_COPY_AND_ASSIGN_(HasNewFatalFailureHelper); }; } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ // Copyright 2008 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ // This header implements typed tests and type-parameterized tests. // Typed (aka type-driven) tests repeat the same test for types in a // list. You must know which types you want to test with when writing // typed tests. Here's how you do it: #if 0 // First, define a fixture class template. It should be parameterized // by a type. Remember to derive it from testing::Test. template class FooTest : public testing::Test { public: ... typedef std::list List; static T shared_; T value_; }; // Next, associate a list of types with the test case, which will be // repeated for each type in the list. The typedef is necessary for // the macro to parse correctly. typedef testing::Types MyTypes; TYPED_TEST_CASE(FooTest, MyTypes); // If the type list contains only one type, you can write that type // directly without Types<...>: // TYPED_TEST_CASE(FooTest, int); // Then, use TYPED_TEST() instead of TEST_F() to define as many typed // tests for this test case as you want. TYPED_TEST(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. // Since we are inside a derived class template, C++ requires use to // visit the members of FooTest via 'this'. TypeParam n = this->value_; // To visit static members of the fixture, add the TestFixture:: // prefix. n += TestFixture::shared_; // To refer to typedefs in the fixture, add the "typename // TestFixture::" prefix. typename TestFixture::List values; values.push_back(n); ... } TYPED_TEST(FooTest, HasPropertyA) { ... } #endif // 0 // Type-parameterized tests are abstract test patterns parameterized // by a type. Compared with typed tests, type-parameterized tests // allow you to define the test pattern without knowing what the type // parameters are. The defined pattern can be instantiated with // different types any number of times, in any number of translation // units. 
// // If you are designing an interface or concept, you can define a // suite of type-parameterized tests to verify properties that any // valid implementation of the interface/concept should have. Then, // each implementation can easily instantiate the test suite to verify // that it conforms to the requirements, without having to write // similar tests repeatedly. Here's an example: #if 0 // First, define a fixture class template. It should be parameterized // by a type. Remember to derive it from testing::Test. template class FooTest : public testing::Test { ... }; // Next, declare that you will define a type-parameterized test case // (the _P suffix is for "parameterized" or "pattern", whichever you // prefer): TYPED_TEST_CASE_P(FooTest); // Then, use TYPED_TEST_P() to define as many type-parameterized tests // for this type-parameterized test case as you want. TYPED_TEST_P(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. TypeParam n = 0; ... } TYPED_TEST_P(FooTest, HasPropertyA) { ... } // Now the tricky part: you need to register all test patterns before // you can instantiate them. The first argument of the macro is the // test case name; the rest are the names of the tests in this test // case. REGISTER_TYPED_TEST_CASE_P(FooTest, DoesBlah, HasPropertyA); // Finally, you are free to instantiate the pattern with the types you // want. If you put the above code in a header file, you can #include // it in multiple C++ source files and instantiate it multiple times. // // To distinguish different instances of the pattern, the first // argument to the INSTANTIATE_* macro is a prefix that will be added // to the actual test case name. Remember to pick unique prefixes for // different instances. typedef testing::Types MyTypes; INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes); // If the type list contains only one type, you can write that type // directly without Types<...>: // INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int); #endif // 0 // Implements typed tests. #if GTEST_HAS_TYPED_TEST // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the name of the typedef for the type parameters of the // given test case. # define GTEST_TYPE_PARAMS_(TestCaseName) gtest_type_params_##TestCaseName##_ // The 'Types' template argument below must have spaces around it // since some compilers may choke on '>>' when passing a template // instance (e.g. Types) # define TYPED_TEST_CASE(CaseName, Types) \ typedef ::testing::internal::TypeList< Types >::type \ GTEST_TYPE_PARAMS_(CaseName) # define TYPED_TEST(CaseName, TestName) \ template \ class GTEST_TEST_CLASS_NAME_(CaseName, TestName) \ : public CaseName { \ private: \ typedef CaseName TestFixture; \ typedef gtest_TypeParam_ TypeParam; \ virtual void TestBody(); \ }; \ bool gtest_##CaseName##_##TestName##_registered_ GTEST_ATTRIBUTE_UNUSED_ = \ ::testing::internal::TypeParameterizedTest< \ CaseName, \ ::testing::internal::TemplateSel< \ GTEST_TEST_CLASS_NAME_(CaseName, TestName)>, \ GTEST_TYPE_PARAMS_(CaseName)>::Register(\ "", #CaseName, #TestName, 0); \ template \ void GTEST_TEST_CLASS_NAME_(CaseName, TestName)::TestBody() #endif // GTEST_HAS_TYPED_TEST // Implements type-parameterized tests. #if GTEST_HAS_TYPED_TEST_P // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the namespace name that the type-parameterized tests for // the given type-parameterized test case are defined in. The exact // name of the namespace is subject to change without notice. 
# define GTEST_CASE_NAMESPACE_(TestCaseName) \ gtest_case_##TestCaseName##_ // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the name of the variable used to remember the names of // the defined tests in the given test case. # define GTEST_TYPED_TEST_CASE_P_STATE_(TestCaseName) \ gtest_typed_test_case_p_state_##TestCaseName##_ // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE DIRECTLY. // // Expands to the name of the variable used to remember the names of // the registered tests in the given test case. # define GTEST_REGISTERED_TEST_NAMES_(TestCaseName) \ gtest_registered_test_names_##TestCaseName##_ // The variables defined in the type-parameterized test macros are // static as typically these macros are used in a .h file that can be // #included in multiple translation units linked together. # define TYPED_TEST_CASE_P(CaseName) \ static ::testing::internal::TypedTestCasePState \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName) # define TYPED_TEST_P(CaseName, TestName) \ namespace GTEST_CASE_NAMESPACE_(CaseName) { \ template \ class TestName : public CaseName { \ private: \ typedef CaseName TestFixture; \ typedef gtest_TypeParam_ TypeParam; \ virtual void TestBody(); \ }; \ static bool gtest_##TestName##_defined_ GTEST_ATTRIBUTE_UNUSED_ = \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).AddTestName(\ __FILE__, __LINE__, #CaseName, #TestName); \ } \ template \ void GTEST_CASE_NAMESPACE_(CaseName)::TestName::TestBody() # define REGISTER_TYPED_TEST_CASE_P(CaseName, ...) \ namespace GTEST_CASE_NAMESPACE_(CaseName) { \ typedef ::testing::internal::Templates<__VA_ARGS__>::type gtest_AllTests_; \ } \ static const char* const GTEST_REGISTERED_TEST_NAMES_(CaseName) = \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).VerifyRegisteredTestNames(\ __FILE__, __LINE__, #__VA_ARGS__) // The 'Types' template argument below must have spaces around it // since some compilers may choke on '>>' when passing a template // instance (e.g. Types) # define INSTANTIATE_TYPED_TEST_CASE_P(Prefix, CaseName, Types) \ bool gtest_##Prefix##_##CaseName GTEST_ATTRIBUTE_UNUSED_ = \ ::testing::internal::TypeParameterizedTestCase::type>::Register(\ #Prefix, #CaseName, GTEST_REGISTERED_TEST_NAMES_(CaseName)) #endif // GTEST_HAS_TYPED_TEST_P #endif // GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ // Depending on the platform, different string classes are available. // On Linux, in addition to ::std::string, Google also makes use of // class ::string, which has the same interface as ::std::string, but // has a different implementation. // // The user can define GTEST_HAS_GLOBAL_STRING to 1 to indicate that // ::string is available AND is a distinct type to ::std::string, or // define it to 0 to indicate otherwise. // // If the user's ::std::string and ::string are the same class due to // aliasing, he should define GTEST_HAS_GLOBAL_STRING to 0. // // If the user doesn't define GTEST_HAS_GLOBAL_STRING, it is defined // heuristically. namespace testing { // Declares the flags. // This flag temporary enables the disabled tests. GTEST_DECLARE_bool_(also_run_disabled_tests); // This flag brings the debugger on an assertion failure. GTEST_DECLARE_bool_(break_on_failure); // This flag controls whether Google Test catches all test-thrown exceptions // and logs them as failures. GTEST_DECLARE_bool_(catch_exceptions); // This flag enables using colors in terminal output. Available values are // "yes" to enable colors, "no" (disable colors), or "auto" (the default) // to let Google Test decide. 
GTEST_DECLARE_string_(color);
// This flag sets up the filter to select by name using a glob pattern
// the tests to run. If the filter is not given, all tests are executed.
GTEST_DECLARE_string_(filter);
// This flag causes Google Test to list tests. None of the tests listed
// are actually run if the flag is provided.
GTEST_DECLARE_bool_(list_tests);
// This flag controls whether Google Test emits a detailed XML report to a file
// in addition to its normal textual output.
GTEST_DECLARE_string_(output);
// This flag controls whether Google Test prints the elapsed time for each
// test.
GTEST_DECLARE_bool_(print_time);
// This flag specifies the random number seed.
GTEST_DECLARE_int32_(random_seed);
// This flag sets how many times the tests are repeated. The default value
// is 1. If the value is -1, the tests are repeated forever.
GTEST_DECLARE_int32_(repeat);
// This flag controls whether Google Test includes Google Test internal
// stack frames in failure stack traces.
GTEST_DECLARE_bool_(show_internal_stack_frames);
// When this flag is specified, tests' order is randomized on every iteration.
GTEST_DECLARE_bool_(shuffle);
// This flag specifies the maximum number of stack frames to be
// printed in a failure message.
GTEST_DECLARE_int32_(stack_trace_depth);
// When this flag is specified, a failed assertion will throw an
// exception if exceptions are enabled, or exit the program with a
// non-zero code otherwise.
GTEST_DECLARE_bool_(throw_on_failure);
// When this flag is set with a "host:port" string, on supported
// platforms test results are streamed to the specified port on
// the specified host machine.
GTEST_DECLARE_string_(stream_result_to);
// The upper limit for valid stack trace depths.
const int kMaxStackTraceDepth = 100;
namespace internal {
class AssertHelper;
class DefaultGlobalTestPartResultReporter;
class ExecDeathTest;
class NoExecDeathTest;
class FinalSuccessChecker;
class GTestFlagSaver;
class StreamingListenerTest;
class TestResultAccessor;
class TestEventListenersAccessor;
class TestEventRepeater;
class UnitTestRecordPropertyTestHelper;
class WindowsDeathTest;
class UnitTestImpl* GetUnitTestImpl();
void ReportFailureInUnknownLocation(TestPartResult::Type result_type,
                                    const std::string& message);
} // namespace internal
// The friend relationship of some of these classes is cyclic.
// If we don't forward declare them the compiler might confuse the classes
// in friendship clauses with same named classes on the scope.
class Test;
class TestCase;
class TestInfo;
class UnitTest;
// A class for indicating whether an assertion was successful. When
// the assertion wasn't successful, the AssertionResult object
// remembers a non-empty message that describes how it failed.
//
// To create an instance of this class, use one of the factory functions
// (AssertionSuccess() and AssertionFailure()).
//
// This class is useful for two purposes:
//   1. Defining predicate functions to be used with Boolean test assertions
//      EXPECT_TRUE/EXPECT_FALSE and their ASSERT_ counterparts
//   2. Defining predicate-format functions to be
//      used with predicate assertions (ASSERT_PRED_FORMAT*, etc).
// // For example, if you define IsEven predicate: // // testing::AssertionResult IsEven(int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess(); // else // return testing::AssertionFailure() << n << " is odd"; // } // // Then the failed expectation EXPECT_TRUE(IsEven(Fib(5))) // will print the message // // Value of: IsEven(Fib(5)) // Actual: false (5 is odd) // Expected: true // // instead of a more opaque // // Value of: IsEven(Fib(5)) // Actual: false // Expected: true // // in case IsEven is a simple Boolean predicate. // // If you expect your predicate to be reused and want to support informative // messages in EXPECT_FALSE and ASSERT_FALSE (negative assertions show up // about half as often as positive ones in our tests), supply messages for // both success and failure cases: // // testing::AssertionResult IsEven(int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess() << n << " is even"; // else // return testing::AssertionFailure() << n << " is odd"; // } // // Then a statement EXPECT_FALSE(IsEven(Fib(6))) will print // // Value of: IsEven(Fib(6)) // Actual: true (8 is even) // Expected: false // // NB: Predicates that support negative Boolean assertions have reduced // performance in positive ones so be careful not to use them in tests // that have lots (tens of thousands) of positive Boolean assertions. // // To use this class with EXPECT_PRED_FORMAT assertions such as: // // // Verifies that Foo() returns an even number. // EXPECT_PRED_FORMAT1(IsEven, Foo()); // // you need to define: // // testing::AssertionResult IsEven(const char* expr, int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess(); // else // return testing::AssertionFailure() // << "Expected: " << expr << " is even\n Actual: it's " << n; // } // // If Foo() returns 5, you will see the following message: // // Expected: Foo() is even // Actual: it's 5 // class GTEST_API_ AssertionResult { public: // Copy constructor. // Used in EXPECT_TRUE/FALSE(assertion_result). AssertionResult(const AssertionResult& other); // Used in the EXPECT_TRUE/FALSE(bool_expression). explicit AssertionResult(bool success) : success_(success) {} // Returns true iff the assertion succeeded. operator bool() const { return success_; } // NOLINT // Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE. AssertionResult operator!() const; // Returns the text streamed into this AssertionResult. Test assertions // use it when they fail (i.e., the predicate's outcome doesn't match the // assertion's expectation). When nothing has been streamed into the // object, returns an empty string. const char* message() const { return message_.get() != NULL ? message_->c_str() : ""; } // TODO(vladl@google.com): Remove this after making sure no clients use it. // Deprecated; please use message() instead. const char* failure_message() const { return message(); } // Streams a custom failure message into this object. template AssertionResult& operator<<(const T& value) { AppendMessage(Message() << value); return *this; } // Allows streaming basic output manipulators such as endl or flush into // this object. AssertionResult& operator<<( ::std::ostream& (*basic_manipulator)(::std::ostream& stream)) { AppendMessage(Message() << basic_manipulator); return *this; } private: // Appends the contents of message to message_. 
void AppendMessage(const Message& a_message) { if (message_.get() == NULL) message_.reset(new ::std::string); message_->append(a_message.GetString().c_str()); } // Stores result of the assertion predicate. bool success_; // Stores the message describing the condition in case the expectation // construct is not satisfied with the predicate's outcome. // Referenced via a pointer to avoid taking too much stack frame space // with test assertions. internal::scoped_ptr< ::std::string> message_; GTEST_DISALLOW_ASSIGN_(AssertionResult); }; // Makes a successful assertion result. GTEST_API_ AssertionResult AssertionSuccess(); // Makes a failed assertion result. GTEST_API_ AssertionResult AssertionFailure(); // Makes a failed assertion result with the given failure message. // Deprecated; use AssertionFailure() << msg. GTEST_API_ AssertionResult AssertionFailure(const Message& msg); // The abstract class that all tests inherit from. // // In Google Test, a unit test program contains one or many TestCases, and // each TestCase contains one or many Tests. // // When you define a test using the TEST macro, you don't need to // explicitly derive from Test - the TEST macro automatically does // this for you. // // The only time you derive from Test is when defining a test fixture // to be used a TEST_F. For example: // // class FooTest : public testing::Test { // protected: // virtual void SetUp() { ... } // virtual void TearDown() { ... } // ... // }; // // TEST_F(FooTest, Bar) { ... } // TEST_F(FooTest, Baz) { ... } // // Test is not copyable. class GTEST_API_ Test { public: friend class TestInfo; // Defines types for pointers to functions that set up and tear down // a test case. typedef internal::SetUpTestCaseFunc SetUpTestCaseFunc; typedef internal::TearDownTestCaseFunc TearDownTestCaseFunc; // The d'tor is virtual as we intend to inherit from Test. virtual ~Test(); // Sets up the stuff shared by all tests in this test case. // // Google Test will call Foo::SetUpTestCase() before running the first // test in test case Foo. Hence a sub-class can define its own // SetUpTestCase() method to shadow the one defined in the super // class. static void SetUpTestCase() {} // Tears down the stuff shared by all tests in this test case. // // Google Test will call Foo::TearDownTestCase() after running the last // test in test case Foo. Hence a sub-class can define its own // TearDownTestCase() method to shadow the one defined in the super // class. static void TearDownTestCase() {} // Returns true iff the current test has a fatal failure. static bool HasFatalFailure(); // Returns true iff the current test has a non-fatal failure. static bool HasNonfatalFailure(); // Returns true iff the current test has a (either fatal or // non-fatal) failure. static bool HasFailure() { return HasFatalFailure() || HasNonfatalFailure(); } // Logs a property for the current test, test case, or for the entire // invocation of the test program when used outside of the context of a // test case. Only the last value for a given key is remembered. These // are public static so they can be called from utility functions that are // not members of the test fixture. Calls to RecordProperty made during // lifespan of the test (from the moment its constructor starts to the // moment its destructor finishes) will be output in XML as attributes of // the element. Properties recorded from fixture's // SetUpTestCase or TearDownTestCase are logged as attributes of the // corresponding element. 
Calls to RecordProperty made in the // global context (before or after invocation of RUN_ALL_TESTS and from // SetUp/TearDown method of Environment objects registered with Google // Test) will be output as attributes of the element. static void RecordProperty(const std::string& key, const std::string& value); static void RecordProperty(const std::string& key, int value); protected: // Creates a Test object. Test(); // Sets up the test fixture. virtual void SetUp(); // Tears down the test fixture. virtual void TearDown(); private: // Returns true iff the current test has the same fixture class as // the first test in the current test case. static bool HasSameFixtureClass(); // Runs the test after the test fixture has been set up. // // A sub-class must implement this to define the test logic. // // DO NOT OVERRIDE THIS FUNCTION DIRECTLY IN A USER PROGRAM. // Instead, use the TEST or TEST_F macro. virtual void TestBody() = 0; // Sets up, executes, and tears down the test. void Run(); // Deletes self. We deliberately pick an unusual name for this // internal method to avoid clashing with names used in user TESTs. void DeleteSelf_() { delete this; } // Uses a GTestFlagSaver to save and restore all Google Test flags. const internal::GTestFlagSaver* const gtest_flag_saver_; // Often a user mis-spells SetUp() as Setup() and spends a long time // wondering why it is never called by Google Test. The declaration of // the following method is solely for catching such an error at // compile time: // // - The return type is deliberately chosen to be not void, so it // will be a conflict if a user declares void Setup() in his test // fixture. // // - This method is private, so it will be another compiler error // if a user calls it from his test fixture. // // DO NOT OVERRIDE THIS FUNCTION. // // If you see an error about overriding the following function or // about it being private, you have mis-spelled SetUp() as Setup(). struct Setup_should_be_spelled_SetUp {}; virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; } // We disallow copying Tests. GTEST_DISALLOW_COPY_AND_ASSIGN_(Test); }; typedef internal::TimeInMillis TimeInMillis; // A copyable object representing a user specified test property which can be // output as a key/value string pair. // // Don't inherit from TestProperty as its destructor is not virtual. class TestProperty { public: // C'tor. TestProperty does NOT have a default constructor. // Always use this constructor (with parameters) to create a // TestProperty object. TestProperty(const std::string& a_key, const std::string& a_value) : key_(a_key), value_(a_value) { } // Gets the user supplied key. const char* key() const { return key_.c_str(); } // Gets the user supplied value. const char* value() const { return value_.c_str(); } // Sets a new value, overriding the one supplied in the constructor. void SetValue(const std::string& new_value) { value_ = new_value; } private: // The key supplied by the user. std::string key_; // The value supplied by the user. std::string value_; }; // The result of a single Test. This includes a list of // TestPartResults, a list of TestProperties, a count of how many // death tests there are in the Test, and how much time it took to run // the Test. // // TestResult is not copyable. class GTEST_API_ TestResult { public: // Creates an empty TestResult. TestResult(); // D'tor. Do not inherit from TestResult. ~TestResult(); // Gets the number of all test parts. 
This is the sum of the number // of successful test parts and the number of failed test parts. int total_part_count() const; // Returns the number of the test properties. int test_property_count() const; // Returns true iff the test passed (i.e. no test part failed). bool Passed() const { return !Failed(); } // Returns true iff the test failed. bool Failed() const; // Returns true iff the test fatally failed. bool HasFatalFailure() const; // Returns true iff the test has a non-fatal failure. bool HasNonfatalFailure() const; // Returns the elapsed time, in milliseconds. TimeInMillis elapsed_time() const { return elapsed_time_; } // Returns the i-th test part result among all the results. i can range // from 0 to test_property_count() - 1. If i is not in that range, aborts // the program. const TestPartResult& GetTestPartResult(int i) const; // Returns the i-th test property. i can range from 0 to // test_property_count() - 1. If i is not in that range, aborts the // program. const TestProperty& GetTestProperty(int i) const; private: friend class TestInfo; friend class TestCase; friend class UnitTest; friend class internal::DefaultGlobalTestPartResultReporter; friend class internal::ExecDeathTest; friend class internal::TestResultAccessor; friend class internal::UnitTestImpl; friend class internal::WindowsDeathTest; // Gets the vector of TestPartResults. const std::vector& test_part_results() const { return test_part_results_; } // Gets the vector of TestProperties. const std::vector& test_properties() const { return test_properties_; } // Sets the elapsed time. void set_elapsed_time(TimeInMillis elapsed) { elapsed_time_ = elapsed; } // Adds a test property to the list. The property is validated and may add // a non-fatal failure if invalid (e.g., if it conflicts with reserved // key names). If a property is already recorded for the same key, the // value will be updated, rather than storing multiple values for the same // key. xml_element specifies the element for which the property is being // recorded and is used for validation. void RecordProperty(const std::string& xml_element, const TestProperty& test_property); // Adds a failure if the key is a reserved attribute of Google Test // testcase tags. Returns true if the property is valid. // TODO(russr): Validate attribute names are legal and human readable. static bool ValidateTestProperty(const std::string& xml_element, const TestProperty& test_property); // Adds a test part result to the list. void AddTestPartResult(const TestPartResult& test_part_result); // Returns the death test count. int death_test_count() const { return death_test_count_; } // Increments the death test count, returning the new count. int increment_death_test_count() { return ++death_test_count_; } // Clears the test part results. void ClearTestPartResults(); // Clears the object. void Clear(); // Protects mutable state of the property vector and of owned // properties, whose values may be updated. internal::Mutex test_properites_mutex_; // The vector of TestPartResults std::vector test_part_results_; // The vector of TestProperties std::vector test_properties_; // Running count of death tests. int death_test_count_; // The elapsed time, in milliseconds. TimeInMillis elapsed_time_; // We disallow copying TestResult. 
GTEST_DISALLOW_COPY_AND_ASSIGN_(TestResult); }; // class TestResult // A TestInfo object stores the following information about a test: // // Test case name // Test name // Whether the test should be run // A function pointer that creates the test object when invoked // Test result // // The constructor of TestInfo registers itself with the UnitTest // singleton such that the RUN_ALL_TESTS() macro knows which tests to // run. class GTEST_API_ TestInfo { public: // Destructs a TestInfo object. This function is not virtual, so // don't inherit from TestInfo. ~TestInfo(); // Returns the test case name. const char* test_case_name() const { return test_case_name_.c_str(); } // Returns the test name. const char* name() const { return name_.c_str(); } // Returns the name of the parameter type, or NULL if this is not a typed // or a type-parameterized test. const char* type_param() const { if (type_param_.get() != NULL) return type_param_->c_str(); return NULL; } // Returns the text representation of the value parameter, or NULL if this // is not a value-parameterized test. const char* value_param() const { if (value_param_.get() != NULL) return value_param_->c_str(); return NULL; } // Returns true if this test should run, that is if the test is not // disabled (or it is disabled but the also_run_disabled_tests flag has // been specified) and its full name matches the user-specified filter. // // Google Test allows the user to filter the tests by their full names. // The full name of a test Bar in test case Foo is defined as // "Foo.Bar". Only the tests that match the filter will run. // // A filter is a colon-separated list of glob (not regex) patterns, // optionally followed by a '-' and a colon-separated list of // negative patterns (tests to exclude). A test is run if it // matches one of the positive patterns and does not match any of // the negative patterns. // // For example, *A*:Foo.* is a filter that matches any string that // contains the character 'A' or starts with "Foo.". bool should_run() const { return should_run_; } // Returns true iff this test will appear in the XML report. bool is_reportable() const { // For now, the XML report includes all tests matching the filter. // In the future, we may trim tests that are excluded because of // sharding. return matches_filter_; } // Returns the result of the test. const TestResult* result() const { return &result_; } private: #if GTEST_HAS_DEATH_TEST friend class internal::DefaultDeathTestFactory; #endif // GTEST_HAS_DEATH_TEST friend class Test; friend class TestCase; friend class internal::UnitTestImpl; friend class internal::StreamingListenerTest; friend TestInfo* internal::MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, internal::TypeId fixture_class_id, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc, internal::TestFactoryBase* factory); // Constructs a TestInfo object. The newly constructed instance assumes // ownership of the factory object. TestInfo(const std::string& test_case_name, const std::string& name, const char* a_type_param, // NULL if not a type-parameterized test const char* a_value_param, // NULL if not a value-parameterized test internal::TypeId fixture_class_id, internal::TestFactoryBase* factory); // Increments the number of death tests encountered in this test so // far. 
int increment_death_test_count() { return result_.increment_death_test_count(); } // Creates the test object, runs it, records its result, and then // deletes it. void Run(); static void ClearTestResult(TestInfo* test_info) { test_info->result_.Clear(); } // These fields are immutable properties of the test. const std::string test_case_name_; // Test case name const std::string name_; // Test name // Name of the parameter type, or NULL if this is not a typed or a // type-parameterized test. const internal::scoped_ptr type_param_; // Text representation of the value parameter, or NULL if this is not a // value-parameterized test. const internal::scoped_ptr value_param_; const internal::TypeId fixture_class_id_; // ID of the test fixture class bool should_run_; // True iff this test should run bool is_disabled_; // True iff this test is disabled bool matches_filter_; // True if this test matches the // user-specified filter. internal::TestFactoryBase* const factory_; // The factory that creates // the test object // This field is mutable and needs to be reset before running the // test for the second time. TestResult result_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestInfo); }; // A test case, which consists of a vector of TestInfos. // // TestCase is not copyable. class GTEST_API_ TestCase { public: // Creates a TestCase with the given name. // // TestCase does NOT have a default constructor. Always use this // constructor to create a TestCase object. // // Arguments: // // name: name of the test case // a_type_param: the name of the test's type parameter, or NULL if // this is not a type-parameterized test. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase(const char* name, const char* a_type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc); // Destructor of TestCase. virtual ~TestCase(); // Gets the name of the TestCase. const char* name() const { return name_.c_str(); } // Returns the name of the parameter type, or NULL if this is not a // type-parameterized test case. const char* type_param() const { if (type_param_.get() != NULL) return type_param_->c_str(); return NULL; } // Returns true if any test in this test case should run. bool should_run() const { return should_run_; } // Gets the number of successful tests in this test case. int successful_test_count() const; // Gets the number of failed tests in this test case. int failed_test_count() const; // Gets the number of disabled tests that will be reported in the XML report. int reportable_disabled_test_count() const; // Gets the number of disabled tests in this test case. int disabled_test_count() const; // Gets the number of tests to be printed in the XML report. int reportable_test_count() const; // Get the number of tests in this test case that should run. int test_to_run_count() const; // Gets the number of all tests in this test case. int total_test_count() const; // Returns true iff the test case passed. bool Passed() const { return !Failed(); } // Returns true iff the test case failed. bool Failed() const { return failed_test_count() > 0; } // Returns the elapsed time, in milliseconds. TimeInMillis elapsed_time() const { return elapsed_time_; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. 
const TestInfo* GetTestInfo(int i) const; // Returns the TestResult that holds test properties recorded during // execution of SetUpTestCase and TearDownTestCase. const TestResult& ad_hoc_test_result() const { return ad_hoc_test_result_; } private: friend class Test; friend class internal::UnitTestImpl; // Gets the (mutable) vector of TestInfos in this TestCase. std::vector& test_info_list() { return test_info_list_; } // Gets the (immutable) vector of TestInfos in this TestCase. const std::vector& test_info_list() const { return test_info_list_; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. TestInfo* GetMutableTestInfo(int i); // Sets the should_run member. void set_should_run(bool should) { should_run_ = should; } // Adds a TestInfo to this test case. Will delete the TestInfo upon // destruction of the TestCase object. void AddTestInfo(TestInfo * test_info); // Clears the results of all tests in this test case. void ClearResult(); // Clears the results of all tests in the given test case. static void ClearTestCaseResult(TestCase* test_case) { test_case->ClearResult(); } // Runs every test in this TestCase. void Run(); // Runs SetUpTestCase() for this TestCase. This wrapper is needed // for catching exceptions thrown from SetUpTestCase(). void RunSetUpTestCase() { (*set_up_tc_)(); } // Runs TearDownTestCase() for this TestCase. This wrapper is // needed for catching exceptions thrown from TearDownTestCase(). void RunTearDownTestCase() { (*tear_down_tc_)(); } // Returns true iff test passed. static bool TestPassed(const TestInfo* test_info) { return test_info->should_run() && test_info->result()->Passed(); } // Returns true iff test failed. static bool TestFailed(const TestInfo* test_info) { return test_info->should_run() && test_info->result()->Failed(); } // Returns true iff the test is disabled and will be reported in the XML // report. static bool TestReportableDisabled(const TestInfo* test_info) { return test_info->is_reportable() && test_info->is_disabled_; } // Returns true iff test is disabled. static bool TestDisabled(const TestInfo* test_info) { return test_info->is_disabled_; } // Returns true iff this test will appear in the XML report. static bool TestReportable(const TestInfo* test_info) { return test_info->is_reportable(); } // Returns true if the given test should run. static bool ShouldRunTest(const TestInfo* test_info) { return test_info->should_run(); } // Shuffles the tests in this test case. void ShuffleTests(internal::Random* random); // Restores the test order to before the first shuffle. void UnshuffleTests(); // Name of the test case. std::string name_; // Name of the parameter type, or NULL if this is not a typed or a // type-parameterized test. const internal::scoped_ptr type_param_; // The vector of TestInfos in their original order. It owns the // elements in the vector. std::vector test_info_list_; // Provides a level of indirection for the test list to allow easy // shuffling and restoring the test order. The i-th element in this // vector is the index of the i-th test in the shuffled test list. std::vector test_indices_; // Pointer to the function that sets up the test case. Test::SetUpTestCaseFunc set_up_tc_; // Pointer to the function that tears down the test case. Test::TearDownTestCaseFunc tear_down_tc_; // True iff any test in this test case should run. bool should_run_; // Elapsed time, in milliseconds. 
TimeInMillis elapsed_time_; // Holds test properties recorded during execution of SetUpTestCase and // TearDownTestCase. TestResult ad_hoc_test_result_; // We disallow copying TestCases. GTEST_DISALLOW_COPY_AND_ASSIGN_(TestCase); }; // An Environment object is capable of setting up and tearing down an // environment. The user should subclass this to define his own // environment(s). // // An Environment object does the set-up and tear-down in virtual // methods SetUp() and TearDown() instead of the constructor and the // destructor, as: // // 1. You cannot safely throw from a destructor. This is a problem // as in some cases Google Test is used where exceptions are enabled, and // we may want to implement ASSERT_* using exceptions where they are // available. // 2. You cannot use ASSERT_* directly in a constructor or // destructor. class Environment { public: // The d'tor is virtual as we need to subclass Environment. virtual ~Environment() {} // Override this to define how to set up the environment. virtual void SetUp() {} // Override this to define how to tear down the environment. virtual void TearDown() {} private: // If you see an error about overriding the following function or // about it being private, you have mis-spelled SetUp() as Setup(). struct Setup_should_be_spelled_SetUp {}; virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; } }; // The interface for tracing execution of tests. The methods are organized in // the order the corresponding events are fired. class TestEventListener { public: virtual ~TestEventListener() {} // Fired before any test activity starts. virtual void OnTestProgramStart(const UnitTest& unit_test) = 0; // Fired before each iteration of tests starts. There may be more than // one iteration if GTEST_FLAG(repeat) is set. iteration is the iteration // index, starting from 0. virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration) = 0; // Fired before environment set-up for each iteration of tests starts. virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test) = 0; // Fired after environment set-up for each iteration of tests ends. virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test) = 0; // Fired before the test case starts. virtual void OnTestCaseStart(const TestCase& test_case) = 0; // Fired before the test starts. virtual void OnTestStart(const TestInfo& test_info) = 0; // Fired after a failed assertion or a SUCCEED() invocation. virtual void OnTestPartResult(const TestPartResult& test_part_result) = 0; // Fired after the test ends. virtual void OnTestEnd(const TestInfo& test_info) = 0; // Fired after the test case ends. virtual void OnTestCaseEnd(const TestCase& test_case) = 0; // Fired before environment tear-down for each iteration of tests starts. virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test) = 0; // Fired after environment tear-down for each iteration of tests ends. virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test) = 0; // Fired after each iteration of tests finishes. virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration) = 0; // Fired after all test activities have ended. virtual void OnTestProgramEnd(const UnitTest& unit_test) = 0; }; // The convenience class for users who need to override just one or two // methods and are not concerned that a possible change to a signature of // the methods they override will not be caught during the build. For // comments about each method please see the definition of TestEventListener // above. 
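//
// As a usage sketch (MinimalLogger and its printf-based output are
// illustrative only, not part of Google Test), a listener can derive from
// the EmptyTestEventListener convenience class defined just below, override
// only the events it cares about, and be registered with the UnitTest
// singleton's listener list (assuming <stdio.h> is available):
//
//   class MinimalLogger : public ::testing::EmptyTestEventListener {
//     virtual void OnTestStart(const ::testing::TestInfo& test_info) {
//       printf("*** Test %s.%s starting.\n",
//              test_info.test_case_name(), test_info.name());
//     }
//     virtual void OnTestEnd(const ::testing::TestInfo& test_info) {
//       printf("*** Test %s.%s ending.\n",
//              test_info.test_case_name(), test_info.name());
//     }
//   };
//
//   int main(int argc, char** argv) {
//     ::testing::InitGoogleTest(&argc, argv);
//     ::testing::TestEventListeners& listeners =
//         ::testing::UnitTest::GetInstance()->listeners();
//     listeners.Append(new MinimalLogger);  // Google Test takes ownership.
//     return RUN_ALL_TESTS();
//   }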
class EmptyTestEventListener : public TestEventListener { public: virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationStart(const UnitTest& /*unit_test*/, int /*iteration*/) {} virtual void OnEnvironmentsSetUpStart(const UnitTest& /*unit_test*/) {} virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestCaseStart(const TestCase& /*test_case*/) {} virtual void OnTestStart(const TestInfo& /*test_info*/) {} virtual void OnTestPartResult(const TestPartResult& /*test_part_result*/) {} virtual void OnTestEnd(const TestInfo& /*test_info*/) {} virtual void OnTestCaseEnd(const TestCase& /*test_case*/) {} virtual void OnEnvironmentsTearDownStart(const UnitTest& /*unit_test*/) {} virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationEnd(const UnitTest& /*unit_test*/, int /*iteration*/) {} virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {} }; // TestEventListeners lets users add listeners to track events in Google Test. class GTEST_API_ TestEventListeners { public: TestEventListeners(); ~TestEventListeners(); // Appends an event listener to the end of the list. Google Test assumes // the ownership of the listener (i.e. it will delete the listener when // the test program finishes). void Append(TestEventListener* listener); // Removes the given event listener from the list and returns it. It then // becomes the caller's responsibility to delete the listener. Returns // NULL if the listener is not found in the list. TestEventListener* Release(TestEventListener* listener); // Returns the standard listener responsible for the default console // output. Can be removed from the listeners list to shut down default // console output. Note that removing this object from the listener list // with Release transfers its ownership to the caller and makes this // function return NULL the next time. TestEventListener* default_result_printer() const { return default_result_printer_; } // Returns the standard listener responsible for the default XML output // controlled by the --gtest_output=xml flag. Can be removed from the // listeners list by users who want to shut down the default XML output // controlled by this flag and substitute it with custom one. Note that // removing this object from the listener list with Release transfers its // ownership to the caller and makes this function return NULL the next // time. TestEventListener* default_xml_generator() const { return default_xml_generator_; } private: friend class TestCase; friend class TestInfo; friend class internal::DefaultGlobalTestPartResultReporter; friend class internal::NoExecDeathTest; friend class internal::TestEventListenersAccessor; friend class internal::UnitTestImpl; // Returns repeater that broadcasts the TestEventListener events to all // subscribers. TestEventListener* repeater(); // Sets the default_result_printer attribute to the provided listener. // The listener is also added to the listener list and previous // default_result_printer is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void SetDefaultResultPrinter(TestEventListener* listener); // Sets the default_xml_generator attribute to the provided listener. The // listener is also added to the listener list and previous // default_xml_generator is removed from it and deleted. 
The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void SetDefaultXmlGenerator(TestEventListener* listener); // Controls whether events will be forwarded by the repeater to the // listeners in the list. bool EventForwardingEnabled() const; void SuppressEventForwarding(); // The actual list of listeners. internal::TestEventRepeater* repeater_; // Listener responsible for the standard result output. TestEventListener* default_result_printer_; // Listener responsible for the creation of the XML output file. TestEventListener* default_xml_generator_; // We disallow copying TestEventListeners. GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventListeners); }; // A UnitTest consists of a vector of TestCases. // // This is a singleton class. The only instance of UnitTest is // created when UnitTest::GetInstance() is first called. This // instance is never deleted. // // UnitTest is not copyable. // // This class is thread-safe as long as the methods are called // according to their specification. class GTEST_API_ UnitTest { public: // Gets the singleton UnitTest object. The first time this method // is called, a UnitTest object is constructed and returned. // Consecutive calls will return the same object. static UnitTest* GetInstance(); // Runs all tests in this UnitTest object and prints the result. // Returns 0 if successful, or 1 otherwise. // // This method can only be called from the main thread. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. int Run() GTEST_MUST_USE_RESULT_; // Returns the working directory when the first TEST() or TEST_F() // was executed. The UnitTest object owns the string. const char* original_working_dir() const; // Returns the TestCase object for the test that's currently running, // or NULL if no test is running. const TestCase* current_test_case() const GTEST_LOCK_EXCLUDED_(mutex_); // Returns the TestInfo object for the test that's currently running, // or NULL if no test is running. const TestInfo* current_test_info() const GTEST_LOCK_EXCLUDED_(mutex_); // Returns the random seed used at the start of the current test run. int random_seed() const; #if GTEST_HAS_PARAM_TEST // Returns the ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. internal::ParameterizedTestCaseRegistry& parameterized_test_registry() GTEST_LOCK_EXCLUDED_(mutex_); #endif // GTEST_HAS_PARAM_TEST // Gets the number of successful test cases. int successful_test_case_count() const; // Gets the number of failed test cases. int failed_test_case_count() const; // Gets the number of all test cases. int total_test_case_count() const; // Gets the number of all test cases that contain at least one test // that should run. int test_case_to_run_count() const; // Gets the number of successful tests. int successful_test_count() const; // Gets the number of failed tests. int failed_test_count() const; // Gets the number of disabled tests that will be reported in the XML report. int reportable_disabled_test_count() const; // Gets the number of disabled tests. int disabled_test_count() const; // Gets the number of tests to be printed in the XML report. int reportable_test_count() const; // Gets the number of all tests. int total_test_count() const; // Gets the number of tests that should run. 
int test_to_run_count() const; // Gets the time of the test program start, in ms from the start of the // UNIX epoch. TimeInMillis start_timestamp() const; // Gets the elapsed time, in milliseconds. TimeInMillis elapsed_time() const; // Returns true iff the unit test passed (i.e. all test cases passed). bool Passed() const; // Returns true iff the unit test failed (i.e. some test case failed // or something outside of all tests failed). bool Failed() const; // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. const TestCase* GetTestCase(int i) const; // Returns the TestResult containing information on test failures and // properties logged outside of individual test cases. const TestResult& ad_hoc_test_result() const; // Returns the list of event listeners that can be used to track events // inside Google Test. TestEventListeners& listeners(); private: // Registers and returns a global test environment. When a test // program is run, all global test environments will be set-up in // the order they were registered. After all tests in the program // have finished, all global test environments will be torn-down in // the *reverse* order they were registered. // // The UnitTest object takes ownership of the given environment. // // This method can only be called from the main thread. Environment* AddEnvironment(Environment* env); // Adds a TestPartResult to the current TestResult object. All // Google Test assertion macros (e.g. ASSERT_TRUE, EXPECT_EQ, etc) // eventually call this to report their results. The user code // should use the assertion macros instead of calling this directly. void AddTestPartResult(TestPartResult::Type result_type, const char* file_name, int line_number, const std::string& message, const std::string& os_stack_trace) GTEST_LOCK_EXCLUDED_(mutex_); // Adds a TestProperty to the current TestResult object when invoked from // inside a test, to current TestCase's ad_hoc_test_result_ when invoked // from SetUpTestCase or TearDownTestCase, or to the global property set // when invoked elsewhere. If the result already contains a property with // the same key, the value will be updated. void RecordProperty(const std::string& key, const std::string& value); // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. TestCase* GetMutableTestCase(int i); // Accessors for the implementation object. internal::UnitTestImpl* impl() { return impl_; } const internal::UnitTestImpl* impl() const { return impl_; } // These classes and funcions are friends as they need to access private // members of UnitTest. friend class Test; friend class internal::AssertHelper; friend class internal::ScopedTrace; friend class internal::StreamingListenerTest; friend class internal::UnitTestRecordPropertyTestHelper; friend Environment* AddGlobalTestEnvironment(Environment* env); friend internal::UnitTestImpl* internal::GetUnitTestImpl(); friend void internal::ReportFailureInUnknownLocation( TestPartResult::Type result_type, const std::string& message); // Creates an empty UnitTest. UnitTest(); // D'tor virtual ~UnitTest(); // Pushes a trace defined by SCOPED_TRACE() on to the per-thread // Google Test trace stack. void PushGTestTrace(const internal::TraceInfo& trace) GTEST_LOCK_EXCLUDED_(mutex_); // Pops a trace from the per-thread Google Test trace stack. 
void PopGTestTrace() GTEST_LOCK_EXCLUDED_(mutex_); // Protects mutable state in *impl_. This is mutable as some const // methods need to lock it too. mutable internal::Mutex mutex_; // Opaque implementation object. This field is never changed once // the object is constructed. We don't mark it as const here, as // doing so will cause a warning in the constructor of UnitTest. // Mutable state in *impl_ is protected by mutex_. internal::UnitTestImpl* impl_; // We disallow copying UnitTest. GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTest); }; // A convenient wrapper for adding an environment for the test // program. // // You should call this before RUN_ALL_TESTS() is called, probably in // main(). If you use gtest_main, you need to call this before main() // starts for it to take effect. For example, you can define a global // variable like this: // // testing::Environment* const foo_env = // testing::AddGlobalTestEnvironment(new FooEnvironment); // // However, we strongly recommend you to write your own main() and // call AddGlobalTestEnvironment() there, as relying on initialization // of global variables makes the code harder to read and may cause // problems when you register multiple environments from different // translation units and the environments have dependencies among them // (remember that the compiler doesn't guarantee the order in which // global variables from different translation units are initialized). inline Environment* AddGlobalTestEnvironment(Environment* env) { return UnitTest::GetInstance()->AddEnvironment(env); } // Initializes Google Test. This must be called before calling // RUN_ALL_TESTS(). In particular, it parses a command line for the // flags that Google Test recognizes. Whenever a Google Test flag is // seen, it is removed from argv, and *argc is decremented. // // No value is returned. Instead, the Google Test flag variables are // updated. // // Calling the function for the second time has no user-visible effect. GTEST_API_ void InitGoogleTest(int* argc, char** argv); // This overloaded version can be used in Windows programs compiled in // UNICODE mode. GTEST_API_ void InitGoogleTest(int* argc, wchar_t** argv); namespace internal { // FormatForComparison::Format(value) formats a // value of type ToPrint that is an operand of a comparison assertion // (e.g. ASSERT_EQ). OtherOperand is the type of the other operand in // the comparison, and is used to help determine the best way to // format the value. In particular, when the value is a C string // (char pointer) and the other operand is an STL string object, we // want to format the C string as a string, since we know it is // compared by value with the string object. If the value is a char // pointer but the other operand is not an STL string object, we don't // know whether the pointer is supposed to point to a NUL-terminated // string, and thus want to print it as a pointer to be safe. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // The default case. template class FormatForComparison { public: static ::std::string Format(const ToPrint& value) { return ::testing::PrintToString(value); } }; // Array. template class FormatForComparison { public: static ::std::string Format(const ToPrint* value) { return FormatForComparison::Format(value); } }; // By default, print C string as pointers to be safe, as we don't know // whether they actually point to a NUL-terminated string. 
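//
// A minimal sketch of the recommended main() described above, combining
// InitGoogleTest() with AddGlobalTestEnvironment() (FooEnvironment is a
// hypothetical Environment subclass used purely for illustration):
//
//   class FooEnvironment : public ::testing::Environment {
//     virtual void SetUp() { /* acquire resources shared by all tests */ }
//     virtual void TearDown() { /* release them */ }
//   };
//
//   int main(int argc, char** argv) {
//     ::testing::InitGoogleTest(&argc, argv);
//     // Google Test takes ownership of the environment object.
//     ::testing::AddGlobalTestEnvironment(new FooEnvironment);
//     return RUN_ALL_TESTS();
//   }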
#define GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(CharType) \ template \ class FormatForComparison { \ public: \ static ::std::string Format(CharType* value) { \ return ::testing::PrintToString(static_cast(value)); \ } \ } GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(char); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(wchar_t); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const wchar_t); #undef GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_ // If a C string is compared with an STL string object, we know it's meant // to point to a NUL-terminated string, and thus can print it as a string. #define GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(CharType, OtherStringType) \ template <> \ class FormatForComparison { \ public: \ static ::std::string Format(CharType* value) { \ return ::testing::PrintToString(value); \ } \ } GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::std::string); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::std::string); #if GTEST_HAS_GLOBAL_STRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::string); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::string); #endif #if GTEST_HAS_GLOBAL_WSTRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::wstring); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const wchar_t, ::wstring); #endif #if GTEST_HAS_STD_WSTRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::std::wstring); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const wchar_t, ::std::wstring); #endif #undef GTEST_IMPL_FORMAT_C_STRING_AS_STRING_ // Formats a comparison assertion (e.g. ASSERT_EQ, EXPECT_LT, and etc) // operand to be used in a failure message. The type (but not value) // of the other operand may affect the format. This allows us to // print a char* as a raw pointer when it is compared against another // char* or void*, and print it as a C string when it is compared // against an std::string object, for example. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. template std::string FormatForComparisonFailureMessage( const T1& value, const T2& /* other_operand */) { return FormatForComparison::Format(value); } // The helper function for {ASSERT|EXPECT}_EQ. template AssertionResult CmpHelperEQ(const char* expected_expression, const char* actual_expression, const T1& expected, const T2& actual) { #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4389) // Temporarily disables warning on // signed/unsigned mismatch. #endif if (expected == actual) { return AssertionSuccess(); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif return EqFailure(expected_expression, actual_expression, FormatForComparisonFailureMessage(expected, actual), FormatForComparisonFailureMessage(actual, expected), false); } // With this overloaded version, we allow anonymous enums to be used // in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous enums // can be implicitly cast to BiggestInt. GTEST_API_ AssertionResult CmpHelperEQ(const char* expected_expression, const char* actual_expression, BiggestInt expected, BiggestInt actual); // The helper class for {ASSERT|EXPECT}_EQ. The template argument // lhs_is_null_literal is true iff the first argument to ASSERT_EQ() // is a null pointer literal. The following default implementation is // for lhs_is_null_literal being false. template class EqHelper { public: // This templatized version is for the general case. 
template static AssertionResult Compare(const char* expected_expression, const char* actual_expression, const T1& expected, const T2& actual) { return CmpHelperEQ(expected_expression, actual_expression, expected, actual); } // With this overloaded version, we allow anonymous enums to be used // in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous // enums can be implicitly cast to BiggestInt. // // Even though its body looks the same as the above version, we // cannot merge the two, as it will make anonymous enums unhappy. static AssertionResult Compare(const char* expected_expression, const char* actual_expression, BiggestInt expected, BiggestInt actual) { return CmpHelperEQ(expected_expression, actual_expression, expected, actual); } }; // This specialization is used when the first argument to ASSERT_EQ() // is a null pointer literal, like NULL, false, or 0. template <> class EqHelper { public: // We define two overloaded versions of Compare(). The first // version will be picked when the second argument to ASSERT_EQ() is // NOT a pointer, e.g. ASSERT_EQ(0, AnIntFunction()) or // EXPECT_EQ(false, a_bool). template static AssertionResult Compare( const char* expected_expression, const char* actual_expression, const T1& expected, const T2& actual, // The following line prevents this overload from being considered if T2 // is not a pointer type. We need this because ASSERT_EQ(NULL, my_ptr) // expands to Compare("", "", NULL, my_ptr), which requires a conversion // to match the Secret* in the other overload, which would otherwise make // this template match better. typename EnableIf::value>::type* = 0) { return CmpHelperEQ(expected_expression, actual_expression, expected, actual); } // This version will be picked when the second argument to ASSERT_EQ() is a // pointer, e.g. ASSERT_EQ(NULL, a_pointer). template static AssertionResult Compare( const char* expected_expression, const char* actual_expression, // We used to have a second template parameter instead of Secret*. That // template parameter would deduce to 'long', making this a better match // than the first overload even without the first overload's EnableIf. // Unfortunately, gcc with -Wconversion-null warns when "passing NULL to // non-pointer argument" (even a deduced integral argument), so the old // implementation caused warnings in user code. Secret* /* expected (NULL) */, T* actual) { // We already know that 'expected' is a null pointer. return CmpHelperEQ(expected_expression, actual_expression, static_cast(NULL), actual); } }; // A macro for implementing the helper functions needed to implement // ASSERT_?? and EXPECT_??. It is here just to avoid copy-and-paste // of similar code. // // For each templatized helper function, we also define an overloaded // version for BiggestInt in order to reduce code bloat and allow // anonymous enums to be used with {ASSERT|EXPECT}_?? when compiled // with gcc 4. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. 
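//
// For illustration, the anonymous-enum case that the BiggestInt overloads
// above are meant to support looks roughly like this in user code
// (ConfigTest and kDefaultTimeoutMs are made-up names):
//
//   enum { kDefaultTimeoutMs = 500 };  // anonymous enum type
//
//   TEST(ConfigTest, HasExpectedDefaultTimeout) {
//     // kDefaultTimeoutMs converts implicitly to BiggestInt, so the
//     // non-template overload handles the comparison.
//     EXPECT_EQ(500, kDefaultTimeoutMs);
//   }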
#define GTEST_IMPL_CMP_HELPER_(op_name, op)\ template \ AssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \ const T1& val1, const T2& val2) {\ if (val1 op val2) {\ return AssertionSuccess();\ } else {\ return AssertionFailure() \ << "Expected: (" << expr1 << ") " #op " (" << expr2\ << "), actual: " << FormatForComparisonFailureMessage(val1, val2)\ << " vs " << FormatForComparisonFailureMessage(val2, val1);\ }\ }\ GTEST_API_ AssertionResult CmpHelper##op_name(\ const char* expr1, const char* expr2, BiggestInt val1, BiggestInt val2) // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // Implements the helper function for {ASSERT|EXPECT}_NE GTEST_IMPL_CMP_HELPER_(NE, !=); // Implements the helper function for {ASSERT|EXPECT}_LE GTEST_IMPL_CMP_HELPER_(LE, <=); // Implements the helper function for {ASSERT|EXPECT}_LT GTEST_IMPL_CMP_HELPER_(LT, <); // Implements the helper function for {ASSERT|EXPECT}_GE GTEST_IMPL_CMP_HELPER_(GE, >=); // Implements the helper function for {ASSERT|EXPECT}_GT GTEST_IMPL_CMP_HELPER_(GT, >); #undef GTEST_IMPL_CMP_HELPER_ // The helper function for {ASSERT|EXPECT}_STREQ. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual); // The helper function for {ASSERT|EXPECT}_STRCASEEQ. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRCASEEQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual); // The helper function for {ASSERT|EXPECT}_STRNE. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2); // The helper function for {ASSERT|EXPECT}_STRCASENE. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRCASENE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2); // Helper function for *_STREQ on wide strings. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const wchar_t* expected, const wchar_t* actual); // Helper function for *_STRNE on wide strings. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const wchar_t* s1, const wchar_t* s2); } // namespace internal // IsSubstring() and IsNotSubstring() are intended to be used as the // first argument to {EXPECT,ASSERT}_PRED_FORMAT2(), not by // themselves. They check whether needle is a substring of haystack // (NULL is considered a substring of itself only), and return an // appropriate error message when they fail. // // The {needle,haystack}_expr arguments are the stringified // expressions that generated the two real arguments. 
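//
// For example (the string literals below are placeholders), they are meant
// to be used with the *_PRED_FORMAT2 macros defined later in this header:
//
//   TEST(GreetingTest, MentionsRecipient) {
//     const char* greeting = "hello, world";
//     EXPECT_PRED_FORMAT2(::testing::IsSubstring, "world", greeting);
//     EXPECT_PRED_FORMAT2(::testing::IsNotSubstring, "mars", greeting);
//   }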
GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack); GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack); GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack); #if GTEST_HAS_STD_WSTRING GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack); #endif // GTEST_HAS_STD_WSTRING namespace internal { // Helper template function for comparing floating-points. // // Template parameter: // // RawType: the raw floating-point type (either float or double) // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. template AssertionResult CmpHelperFloatingPointEQ(const char* expected_expression, const char* actual_expression, RawType expected, RawType actual) { const FloatingPoint lhs(expected), rhs(actual); if (lhs.AlmostEquals(rhs)) { return AssertionSuccess(); } ::std::stringstream expected_ss; expected_ss << std::setprecision(std::numeric_limits::digits10 + 2) << expected; ::std::stringstream actual_ss; actual_ss << std::setprecision(std::numeric_limits::digits10 + 2) << actual; return EqFailure(expected_expression, actual_expression, StringStreamToString(&expected_ss), StringStreamToString(&actual_ss), false); } // Helper function for implementing ASSERT_NEAR. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult DoubleNearPredFormat(const char* expr1, const char* expr2, const char* abs_error_expr, double val1, double val2, double abs_error); // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // A class that enables one to stream messages to assertion macros class GTEST_API_ AssertHelper { public: // Constructor. AssertHelper(TestPartResult::Type type, const char* file, int line, const char* message); ~AssertHelper(); // Message assignment is a semantic trick to enable assertion // streaming; see the GTEST_MESSAGE_ macro below. void operator=(const Message& message) const; private: // We put our data in a struct so that the size of the AssertHelper class can // be as small as possible. This is important because gcc is incapable of // re-using stack space even for temporary variables, so every EXPECT_EQ // reserves stack space for another AssertHelper. 
struct AssertHelperData { AssertHelperData(TestPartResult::Type t, const char* srcfile, int line_num, const char* msg) : type(t), file(srcfile), line(line_num), message(msg) { } TestPartResult::Type const type; const char* const file; int const line; std::string const message; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelperData); }; AssertHelperData* const data_; GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelper); }; } // namespace internal #if GTEST_HAS_PARAM_TEST // The pure interface class that all value-parameterized tests inherit from. // A value-parameterized class must inherit from both ::testing::Test and // ::testing::WithParamInterface. In most cases that just means inheriting // from ::testing::TestWithParam, but more complicated test hierarchies // may need to inherit from Test and WithParamInterface at different levels. // // This interface has support for accessing the test parameter value via // the GetParam() method. // // Use it with one of the parameter generator defining functions, like Range(), // Values(), ValuesIn(), Bool(), and Combine(). // // class FooTest : public ::testing::TestWithParam { // protected: // FooTest() { // // Can use GetParam() here. // } // virtual ~FooTest() { // // Can use GetParam() here. // } // virtual void SetUp() { // // Can use GetParam() here. // } // virtual void TearDown { // // Can use GetParam() here. // } // }; // TEST_P(FooTest, DoesBar) { // // Can use GetParam() method here. // Foo foo; // ASSERT_TRUE(foo.DoesBar(GetParam())); // } // INSTANTIATE_TEST_CASE_P(OneToTenRange, FooTest, ::testing::Range(1, 10)); template class WithParamInterface { public: typedef T ParamType; virtual ~WithParamInterface() {} // The current parameter value. Is also available in the test fixture's // constructor. This member function is non-static, even though it only // references static data, to reduce the opportunity for incorrect uses // like writing 'WithParamInterface::GetParam()' for a test that // uses a fixture whose parameter type is int. const ParamType& GetParam() const { GTEST_CHECK_(parameter_ != NULL) << "GetParam() can only be called inside a value-parameterized test " << "-- did you intend to write TEST_P instead of TEST_F?"; return *parameter_; } private: // Sets parameter value. The caller is responsible for making sure the value // remains alive and unchanged throughout the current test. static void SetParam(const ParamType* parameter) { parameter_ = parameter; } // Static value used for accessing parameter during a test lifetime. static const ParamType* parameter_; // TestClass must be a subclass of WithParamInterface and Test. template friend class internal::ParameterizedTestFactory; }; template const T* WithParamInterface::parameter_ = NULL; // Most value-parameterized classes can ignore the existence of // WithParamInterface, and can just inherit from ::testing::TestWithParam. template class TestWithParam : public Test, public WithParamInterface { }; #endif // GTEST_HAS_PARAM_TEST // Macros for indicating success/failure in test code. // ADD_FAILURE unconditionally adds a failure to the current test. // SUCCEED generates a success - it doesn't automatically make the // current test successful, as a test is only successful when it has // no failure. // // EXPECT_* verifies that a certain condition is satisfied. If not, // it behaves like ADD_FAILURE. In particular: // // EXPECT_TRUE verifies that a Boolean condition is true. // EXPECT_FALSE verifies that a Boolean condition is false. 
// // FAIL and ASSERT_* are similar to ADD_FAILURE and EXPECT_*, except // that they will also abort the current function on failure. People // usually want the fail-fast behavior of FAIL and ASSERT_*, but those // writing data-driven tests often find themselves using ADD_FAILURE // and EXPECT_* more. // Generates a nonfatal failure with a generic message. #define ADD_FAILURE() GTEST_NONFATAL_FAILURE_("Failed") // Generates a nonfatal failure at the given source file location with // a generic message. #define ADD_FAILURE_AT(file, line) \ GTEST_MESSAGE_AT_(file, line, "Failed", \ ::testing::TestPartResult::kNonFatalFailure) // Generates a fatal failure with a generic message. #define GTEST_FAIL() GTEST_FATAL_FAILURE_("Failed") // Define this macro to 1 to omit the definition of FAIL(), which is a // generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_FAIL # define FAIL() GTEST_FAIL() #endif // Generates a success with a generic message. #define GTEST_SUCCEED() GTEST_SUCCESS_("Succeeded") // Define this macro to 1 to omit the definition of SUCCEED(), which // is a generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_SUCCEED # define SUCCEED() GTEST_SUCCEED() #endif // Macros for testing exceptions. // // * {ASSERT|EXPECT}_THROW(statement, expected_exception): // Tests that the statement throws the expected exception. // * {ASSERT|EXPECT}_NO_THROW(statement): // Tests that the statement doesn't throw any exception. // * {ASSERT|EXPECT}_ANY_THROW(statement): // Tests that the statement throws an exception. #define EXPECT_THROW(statement, expected_exception) \ GTEST_TEST_THROW_(statement, expected_exception, GTEST_NONFATAL_FAILURE_) #define EXPECT_NO_THROW(statement) \ GTEST_TEST_NO_THROW_(statement, GTEST_NONFATAL_FAILURE_) #define EXPECT_ANY_THROW(statement) \ GTEST_TEST_ANY_THROW_(statement, GTEST_NONFATAL_FAILURE_) #define ASSERT_THROW(statement, expected_exception) \ GTEST_TEST_THROW_(statement, expected_exception, GTEST_FATAL_FAILURE_) #define ASSERT_NO_THROW(statement) \ GTEST_TEST_NO_THROW_(statement, GTEST_FATAL_FAILURE_) #define ASSERT_ANY_THROW(statement) \ GTEST_TEST_ANY_THROW_(statement, GTEST_FATAL_FAILURE_) // Boolean assertions. Condition can be either a Boolean expression or an // AssertionResult. For more information on how to use AssertionResult with // these macros see comments on that class. #define EXPECT_TRUE(condition) \ GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \ GTEST_NONFATAL_FAILURE_) #define EXPECT_FALSE(condition) \ GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \ GTEST_NONFATAL_FAILURE_) #define ASSERT_TRUE(condition) \ GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \ GTEST_FATAL_FAILURE_) #define ASSERT_FALSE(condition) \ GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \ GTEST_FATAL_FAILURE_) // Includes the auto-generated header that implements a family of // generic predicate assertion macros. // Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. 
// * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // This file is AUTOMATICALLY GENERATED on 10/31/2011 by command // 'gen_gtest_pred_impl.py 5'. DO NOT EDIT BY HAND! // // Implements a family of generic predicate assertion macros. #ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ #define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ // Makes sure this header is not included before gtest.h. #ifndef GTEST_INCLUDE_GTEST_GTEST_H_ # error Do not include gtest_pred_impl.h directly. Include gtest.h instead. #endif // GTEST_INCLUDE_GTEST_GTEST_H_ // This header implements a family of generic predicate assertion // macros: // // ASSERT_PRED_FORMAT1(pred_format, v1) // ASSERT_PRED_FORMAT2(pred_format, v1, v2) // ... // // where pred_format is a function or functor that takes n (in the // case of ASSERT_PRED_FORMATn) values and their source expression // text, and returns a testing::AssertionResult. See the definition // of ASSERT_EQ in gtest.h for an example. // // If you don't care about formatting, you can use the more // restrictive version: // // ASSERT_PRED1(pred, v1) // ASSERT_PRED2(pred, v1, v2) // ... // // where pred is an n-ary function or functor that returns bool, // and the values v1, v2, ..., must support the << operator for // streaming to std::ostream. // // We also define the EXPECT_* variations. // // For now we only support predicates whose arity is at most 5. // Please email googletestframework@googlegroups.com if you need // support for higher arities. // GTEST_ASSERT_ is the basic statement to which all of the assertions // in this file reduce. Don't use this in your code. #define GTEST_ASSERT_(expression, on_failure) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (const ::testing::AssertionResult gtest_ar = (expression)) \ ; \ else \ on_failure(gtest_ar.failure_message()) // Helper function for implementing {EXPECT|ASSERT}_PRED1. Don't use // this in your code. template AssertionResult AssertPred1Helper(const char* pred_text, const char* e1, Pred pred, const T1& v1) { if (pred(v1)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT1. // Don't use this in your code. #define GTEST_PRED_FORMAT1_(pred_format, v1, on_failure)\ GTEST_ASSERT_(pred_format(#v1, v1), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED1. Don't use // this in your code. 
#define GTEST_PRED1_(pred, v1, on_failure)\ GTEST_ASSERT_(::testing::AssertPred1Helper(#pred, \ #v1, \ pred, \ v1), on_failure) // Unary predicate assertion macros. #define EXPECT_PRED_FORMAT1(pred_format, v1) \ GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED1(pred, v1) \ GTEST_PRED1_(pred, v1, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT1(pred_format, v1) \ GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_FATAL_FAILURE_) #define ASSERT_PRED1(pred, v1) \ GTEST_PRED1_(pred, v1, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED2. Don't use // this in your code. template AssertionResult AssertPred2Helper(const char* pred_text, const char* e1, const char* e2, Pred pred, const T1& v1, const T2& v2) { if (pred(v1, v2)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT2. // Don't use this in your code. #define GTEST_PRED_FORMAT2_(pred_format, v1, v2, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, v1, v2), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED2. Don't use // this in your code. #define GTEST_PRED2_(pred, v1, v2, on_failure)\ GTEST_ASSERT_(::testing::AssertPred2Helper(#pred, \ #v1, \ #v2, \ pred, \ v1, \ v2), on_failure) // Binary predicate assertion macros. #define EXPECT_PRED_FORMAT2(pred_format, v1, v2) \ GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED2(pred, v1, v2) \ GTEST_PRED2_(pred, v1, v2, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT2(pred_format, v1, v2) \ GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_FATAL_FAILURE_) #define ASSERT_PRED2(pred, v1, v2) \ GTEST_PRED2_(pred, v1, v2, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED3. Don't use // this in your code. template AssertionResult AssertPred3Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, Pred pred, const T1& v1, const T2& v2, const T3& v3) { if (pred(v1, v2, v3)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT3. // Don't use this in your code. #define GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, v1, v2, v3), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED3. Don't use // this in your code. #define GTEST_PRED3_(pred, v1, v2, v3, on_failure)\ GTEST_ASSERT_(::testing::AssertPred3Helper(#pred, \ #v1, \ #v2, \ #v3, \ pred, \ v1, \ v2, \ v3), on_failure) // Ternary predicate assertion macros. #define EXPECT_PRED_FORMAT3(pred_format, v1, v2, v3) \ GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED3(pred, v1, v2, v3) \ GTEST_PRED3_(pred, v1, v2, v3, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT3(pred_format, v1, v2, v3) \ GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_FATAL_FAILURE_) #define ASSERT_PRED3(pred, v1, v2, v3) \ GTEST_PRED3_(pred, v1, v2, v3, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED4. Don't use // this in your code. 
template AssertionResult AssertPred4Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, const char* e4, Pred pred, const T1& v1, const T2& v2, const T3& v3, const T4& v4) { if (pred(v1, v2, v3, v4)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ", " << e4 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3 << "\n" << e4 << " evaluates to " << v4; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT4. // Don't use this in your code. #define GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, v1, v2, v3, v4), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED4. Don't use // this in your code. #define GTEST_PRED4_(pred, v1, v2, v3, v4, on_failure)\ GTEST_ASSERT_(::testing::AssertPred4Helper(#pred, \ #v1, \ #v2, \ #v3, \ #v4, \ pred, \ v1, \ v2, \ v3, \ v4), on_failure) // 4-ary predicate assertion macros. #define EXPECT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \ GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED4(pred, v1, v2, v3, v4) \ GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \ GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_FATAL_FAILURE_) #define ASSERT_PRED4(pred, v1, v2, v3, v4) \ GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED5. Don't use // this in your code. template AssertionResult AssertPred5Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, const char* e4, const char* e5, Pred pred, const T1& v1, const T2& v2, const T3& v3, const T4& v4, const T5& v5) { if (pred(v1, v2, v3, v4, v5)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ", " << e4 << ", " << e5 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3 << "\n" << e4 << " evaluates to " << v4 << "\n" << e5 << " evaluates to " << v5; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT5. // Don't use this in your code. #define GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, #v5, v1, v2, v3, v4, v5), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED5. Don't use // this in your code. #define GTEST_PRED5_(pred, v1, v2, v3, v4, v5, on_failure)\ GTEST_ASSERT_(::testing::AssertPred5Helper(#pred, \ #v1, \ #v2, \ #v3, \ #v4, \ #v5, \ pred, \ v1, \ v2, \ v3, \ v4, \ v5), on_failure) // 5-ary predicate assertion macros. #define EXPECT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \ GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED5(pred, v1, v2, v3, v4, v5) \ GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \ GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_) #define ASSERT_PRED5(pred, v1, v2, v3, v4, v5) \ GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_) #endif // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ // Macros for testing equalities and inequalities. 
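// --- Illustrative usage sketch (not part of the original Google Test
// header): how the boolean, exception, and predicate assertion macros
// above are typically combined inside a TEST.  IsEven, MutuallyPrime,
// ParseConfig, and AssertMutuallyPrime are hypothetical names; the block
// is wrapped in "#if 0" (the convention used elsewhere in this header
// set for long examples) so the bundled sources are unaffected.
#if 0
bool IsEven(int n) { return (n % 2) == 0; }   // hypothetical
bool MutuallyPrime(int m, int n);             // hypothetical
void ParseConfig(const char* path);           // hypothetical; may throw

// A predicate-format function returns testing::AssertionResult so the
// failure message can include the argument expressions and values.
::testing::AssertionResult AssertMutuallyPrime(const char* m_expr,
                                               const char* n_expr,
                                               int m, int n) {
  if (MutuallyPrime(m, n)) return ::testing::AssertionSuccess();
  return ::testing::AssertionFailure()
      << m_expr << " and " << n_expr << " (" << m << ", " << n
      << ") are not mutually prime";
}

TEST(AssertionSketchTest, BooleanExceptionAndPredicateMacros) {
  EXPECT_TRUE(IsEven(4));
  EXPECT_FALSE(IsEven(5));
  EXPECT_THROW(ParseConfig("missing.cfg"), std::runtime_error);
  EXPECT_NO_THROW(ParseConfig("good.cfg"));
  EXPECT_PRED1(IsEven, 6);                         // prints IsEven(6) on failure
  EXPECT_PRED2(MutuallyPrime, 8, 9);               // prints both argument values
  EXPECT_PRED_FORMAT2(AssertMutuallyPrime, 8, 9);  // uses the custom message above
}
#endif  // 0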
// // * {ASSERT|EXPECT}_EQ(expected, actual): Tests that expected == actual // * {ASSERT|EXPECT}_NE(v1, v2): Tests that v1 != v2 // * {ASSERT|EXPECT}_LT(v1, v2): Tests that v1 < v2 // * {ASSERT|EXPECT}_LE(v1, v2): Tests that v1 <= v2 // * {ASSERT|EXPECT}_GT(v1, v2): Tests that v1 > v2 // * {ASSERT|EXPECT}_GE(v1, v2): Tests that v1 >= v2 // // When they are not, Google Test prints both the tested expressions and // their actual values. The values must be compatible built-in types, // or you will get a compiler error. By "compatible" we mean that the // values can be compared by the respective operator. // // Note: // // 1. It is possible to make a user-defined type work with // {ASSERT|EXPECT}_??(), but that requires overloading the // comparison operators and is thus discouraged by the Google C++ // Usage Guide. Therefore, you are advised to use the // {ASSERT|EXPECT}_TRUE() macro to assert that two objects are // equal. // // 2. The {ASSERT|EXPECT}_??() macros do pointer comparisons on // pointers (in particular, C strings). Therefore, if you use it // with two C strings, you are testing how their locations in memory // are related, not how their content is related. To compare two C // strings by content, use {ASSERT|EXPECT}_STR*(). // // 3. {ASSERT|EXPECT}_EQ(expected, actual) is preferred to // {ASSERT|EXPECT}_TRUE(expected == actual), as the former tells you // what the actual value is when it fails, and similarly for the // other comparisons. // // 4. Do not depend on the order in which {ASSERT|EXPECT}_??() // evaluate their arguments, which is undefined. // // 5. These macros evaluate their arguments exactly once. // // Examples: // // EXPECT_NE(5, Foo()); // EXPECT_EQ(NULL, a_pointer); // ASSERT_LT(i, array_size); // ASSERT_GT(records.size(), 0) << "There is no record left."; #define EXPECT_EQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal:: \ EqHelper::Compare, \ expected, actual) #define EXPECT_NE(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNE, expected, actual) #define EXPECT_LE(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2) #define EXPECT_LT(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2) #define EXPECT_GE(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2) #define EXPECT_GT(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2) #define GTEST_ASSERT_EQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal:: \ EqHelper::Compare, \ expected, actual) #define GTEST_ASSERT_NE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNE, val1, val2) #define GTEST_ASSERT_LE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2) #define GTEST_ASSERT_LT(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2) #define GTEST_ASSERT_GE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2) #define GTEST_ASSERT_GT(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2) // Define macro GTEST_DONT_DEFINE_ASSERT_XY to 1 to omit the definition of // ASSERT_XY(), which clashes with some users' own code. 
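// --- Illustrative usage sketch (not part of the original Google Test
// header): the binary comparison macros documented above, with the
// expected value conventionally written first.  The test name is
// hypothetical; wrapped in "#if 0" so the bundled sources are unaffected.
#if 0
TEST(ComparisonSketchTest, BinaryComparisonMacros) {
  const int expected = 5;
  const int actual = 2 + 3;
  EXPECT_EQ(expected, actual);                    // nonfatal; prints both values on failure
  EXPECT_NE(4, actual);
  ASSERT_LT(actual, 10) << "value out of range";  // fatal: aborts this test on failure
  EXPECT_GE(actual, 5);
  // Remember: on two const char* operands these macros compare pointers,
  // not contents -- use the EXPECT_STR* macros below for C strings.
}
#endif  // 0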
#if !GTEST_DONT_DEFINE_ASSERT_EQ # define ASSERT_EQ(val1, val2) GTEST_ASSERT_EQ(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_NE # define ASSERT_NE(val1, val2) GTEST_ASSERT_NE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_LE # define ASSERT_LE(val1, val2) GTEST_ASSERT_LE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_LT # define ASSERT_LT(val1, val2) GTEST_ASSERT_LT(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_GE # define ASSERT_GE(val1, val2) GTEST_ASSERT_GE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_GT # define ASSERT_GT(val1, val2) GTEST_ASSERT_GT(val1, val2) #endif // C-string Comparisons. All tests treat NULL and any non-NULL string // as different. Two NULLs are equal. // // * {ASSERT|EXPECT}_STREQ(s1, s2): Tests that s1 == s2 // * {ASSERT|EXPECT}_STRNE(s1, s2): Tests that s1 != s2 // * {ASSERT|EXPECT}_STRCASEEQ(s1, s2): Tests that s1 == s2, ignoring case // * {ASSERT|EXPECT}_STRCASENE(s1, s2): Tests that s1 != s2, ignoring case // // For wide or narrow string objects, you can use the // {ASSERT|EXPECT}_??() macros. // // Don't depend on the order in which the arguments are evaluated, // which is undefined. // // These macros evaluate their arguments exactly once. #define EXPECT_STREQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual) #define EXPECT_STRNE(s1, s2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2) #define EXPECT_STRCASEEQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual) #define EXPECT_STRCASENE(s1, s2)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2) #define ASSERT_STREQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual) #define ASSERT_STRNE(s1, s2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2) #define ASSERT_STRCASEEQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual) #define ASSERT_STRCASENE(s1, s2)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2) // Macros for comparing floating-point numbers. // // * {ASSERT|EXPECT}_FLOAT_EQ(expected, actual): // Tests that two float values are almost equal. // * {ASSERT|EXPECT}_DOUBLE_EQ(expected, actual): // Tests that two double values are almost equal. // * {ASSERT|EXPECT}_NEAR(v1, v2, abs_error): // Tests that v1 and v2 are within the given distance to each other. // // Google Test uses ULP-based comparison to automatically pick a default // error bound that is appropriate for the operands. See the // FloatingPoint template class in gtest-internal.h if you are // interested in the implementation details. 
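// --- Illustrative usage sketch (not part of the original Google Test
// header): the C-string and floating-point comparison macros described
// above.  The test name is hypothetical; wrapped in "#if 0" so the
// bundled sources are unaffected.
#if 0
TEST(StringAndFloatSketchTest, ContentAndAlmostEqualComparisons) {
  const char* expected = "hello";
  char actual[] = "hello";
  EXPECT_STREQ(expected, actual);      // compares contents, not addresses
  EXPECT_STRCASEEQ("HELLO", actual);   // content comparison, ignoring case
  EXPECT_STRNE("world", actual);
  // EXPECT_EQ(expected, actual) would compare the two pointer values instead.

  EXPECT_FLOAT_EQ(2.0f, 1.0f + 1.0f);      // ULP-based "almost equal"
  EXPECT_DOUBLE_EQ(0.3, 0.1 + 0.2);        // tolerates rounding within a few ULPs
  EXPECT_NEAR(3.14159, 22.0 / 7.0, 0.01);  // explicit absolute error bound
}
#endif  // 0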
#define EXPECT_FLOAT_EQ(expected, actual)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define EXPECT_DOUBLE_EQ(expected, actual)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define ASSERT_FLOAT_EQ(expected, actual)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define ASSERT_DOUBLE_EQ(expected, actual)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define EXPECT_NEAR(val1, val2, abs_error)\ EXPECT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \ val1, val2, abs_error) #define ASSERT_NEAR(val1, val2, abs_error)\ ASSERT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \ val1, val2, abs_error) // These predicate format functions work on floating-point values, and // can be used in {ASSERT|EXPECT}_PRED_FORMAT2*(), e.g. // // EXPECT_PRED_FORMAT2(testing::DoubleLE, Foo(), 5.0); // Asserts that val1 is less than, or almost equal to, val2. Fails // otherwise. In particular, it fails if either val1 or val2 is NaN. GTEST_API_ AssertionResult FloatLE(const char* expr1, const char* expr2, float val1, float val2); GTEST_API_ AssertionResult DoubleLE(const char* expr1, const char* expr2, double val1, double val2); #if GTEST_OS_WINDOWS // Macros that test for HRESULT failure and success, these are only useful // on Windows, and rely on Windows SDK macros and APIs to compile. // // * {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED}(expr) // // When expr unexpectedly fails or succeeds, Google Test prints the // expected result and the actual result with both a human-readable // string representation of the error, if available, as well as the // hex result code. # define EXPECT_HRESULT_SUCCEEDED(expr) \ EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr)) # define ASSERT_HRESULT_SUCCEEDED(expr) \ ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr)) # define EXPECT_HRESULT_FAILED(expr) \ EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr)) # define ASSERT_HRESULT_FAILED(expr) \ ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr)) #endif // GTEST_OS_WINDOWS // Macros that execute statement and check that it doesn't generate new fatal // failures in the current thread. // // * {ASSERT|EXPECT}_NO_FATAL_FAILURE(statement); // // Examples: // // EXPECT_NO_FATAL_FAILURE(Process()); // ASSERT_NO_FATAL_FAILURE(Process()) << "Process() failed"; // #define ASSERT_NO_FATAL_FAILURE(statement) \ GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_FATAL_FAILURE_) #define EXPECT_NO_FATAL_FAILURE(statement) \ GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_NONFATAL_FAILURE_) // Causes a trace (including the source file path, the current line // number, and the given message) to be included in every test failure // message generated by code in the current scope. The effect is // undone when the control leaves the current scope. // // The message argument can be anything streamable to std::ostream. // // In the implementation, we include the current line number as part // of the dummy variable name, thus allowing multiple SCOPED_TRACE()s // to appear in the same block - as long as they are on different // lines. #define SCOPED_TRACE(message) \ ::testing::internal::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\ __FILE__, __LINE__, ::testing::Message() << (message)) // Compile-time assertion for type equality. // StaticAssertTypeEq() compiles iff type1 and type2 are // the same type. 
The value it returns is not interesting. // // Instead of making StaticAssertTypeEq a class template, we make it a // function template that invokes a helper class template. This // prevents a user from misusing StaticAssertTypeEq by // defining objects of that type. // // CAVEAT: // // When used inside a method of a class template, // StaticAssertTypeEq() is effective ONLY IF the method is // instantiated. For example, given: // // template class Foo { // public: // void Bar() { testing::StaticAssertTypeEq(); } // }; // // the code: // // void Test1() { Foo foo; } // // will NOT generate a compiler error, as Foo::Bar() is never // actually instantiated. Instead, you need: // // void Test2() { Foo foo; foo.Bar(); } // // to cause a compiler error. template bool StaticAssertTypeEq() { (void)internal::StaticAssertTypeEqHelper(); return true; } // Defines a test. // // The first parameter is the name of the test case, and the second // parameter is the name of the test within the test case. // // The convention is to end the test case name with "Test". For // example, a test case for the Foo class can be named FooTest. // // The user should put his test code between braces after using this // macro. Example: // // TEST(FooTest, InitializesCorrectly) { // Foo foo; // EXPECT_TRUE(foo.StatusIsOK()); // } // Note that we call GetTestTypeId() instead of GetTypeId< // ::testing::Test>() here to get the type ID of testing::Test. This // is to work around a suspected linker bug when using Google Test as // a framework on Mac OS X. The bug causes GetTypeId< // ::testing::Test>() to return different values depending on whether // the call is from the Google Test framework itself or from user test // code. GetTestTypeId() is guaranteed to always return the same // value, as it always calls GetTypeId<>() from the Google Test // framework. #define GTEST_TEST(test_case_name, test_name)\ GTEST_TEST_(test_case_name, test_name, \ ::testing::Test, ::testing::internal::GetTestTypeId()) // Define this macro to 1 to omit the definition of TEST(), which // is a generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_TEST # define TEST(test_case_name, test_name) GTEST_TEST(test_case_name, test_name) #endif // Defines a test that uses a test fixture. // // The first parameter is the name of the test fixture class, which // also doubles as the test case name. The second parameter is the // name of the test within the test case. // // A test fixture class must be declared earlier. The user should put // his test code between braces after using this macro. Example: // // class FooTest : public testing::Test { // protected: // virtual void SetUp() { b_.AddElement(3); } // // Foo a_; // Foo b_; // }; // // TEST_F(FooTest, InitializesCorrectly) { // EXPECT_TRUE(a_.StatusIsOK()); // } // // TEST_F(FooTest, ReturnsElementCountCorrectly) { // EXPECT_EQ(0, a_.size()); // EXPECT_EQ(1, b_.size()); // } #define TEST_F(test_fixture, test_name)\ GTEST_TEST_(test_fixture, test_name, test_fixture, \ ::testing::internal::GetTypeId()) } // namespace testing // Use this function in main() to run all tests. It returns 0 if all // tests are successful, or 1 otherwise. // // RUN_ALL_TESTS() should be invoked after the command line has been // parsed by InitGoogleTest(). // // This function was formerly a macro; thus, it is in the global // namespace and has an all-caps name. 
int RUN_ALL_TESTS() GTEST_MUST_USE_RESULT_; inline int RUN_ALL_TESTS() { return ::testing::UnitTest::GetInstance()->Run(); } #endif // GTEST_INCLUDE_GTEST_GTEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/fused-src/gtest/gtest_main.cc000066400000000000000000000033451346026536000237620ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include #include "gtest/gtest.h" GTEST_API_ int main(int argc, char **argv) { printf("Running main() from gtest_main.cc\n"); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/include/000077500000000000000000000000001346026536000177165ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/000077500000000000000000000000001346026536000210445ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-death-test.h000066400000000000000000000264031346026536000244100ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the public API for death tests. It is // #included by gtest.h so a user doesn't need to include this // directly. #ifndef GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ #include "gtest/internal/gtest-death-test-internal.h" namespace testing { // This flag controls the style of death tests. Valid values are "threadsafe", // meaning that the death test child process will re-execute the test binary // from the start, running only a single death test, or "fast", // meaning that the child process will execute the test logic immediately // after forking. GTEST_DECLARE_string_(death_test_style); #if GTEST_HAS_DEATH_TEST namespace internal { // Returns a Boolean value indicating whether the caller is currently // executing in the context of the death test child process. Tools such as // Valgrind heap checkers may need this to modify their behavior in death // tests. IMPORTANT: This is an internal utility. Using it may break the // implementation of death tests. User code MUST NOT use it. GTEST_API_ bool InDeathTestChild(); } // namespace internal // The following macros are useful for writing death tests. // Here's what happens when an ASSERT_DEATH* or EXPECT_DEATH* is // executed: // // 1. It generates a warning if there is more than one active // thread. This is because it's safe to fork() or clone() only // when there is a single thread. // // 2. The parent process clone()s a sub-process and runs the death // test in it; the sub-process exits with code 0 at the end of the // death test, if it hasn't exited already. // // 3. The parent process waits for the sub-process to terminate. // // 4. The parent process checks the exit code and error message of // the sub-process. // // Examples: // // ASSERT_DEATH(server.SendMessage(56, "Hello"), "Invalid port number"); // for (int i = 0; i < 5; i++) { // EXPECT_DEATH(server.ProcessRequest(i), // "Invalid request .* in ProcessRequest()") // << "Failed to die on request " << i; // } // // ASSERT_EXIT(server.ExitNow(), ::testing::ExitedWithCode(0), "Exiting"); // // bool KilledBySIGHUP(int exit_code) { // return WIFSIGNALED(exit_code) && WTERMSIG(exit_code) == SIGHUP; // } // // ASSERT_EXIT(client.HangUpServer(), KilledBySIGHUP, "Hanging up!"); // // On the regular expressions used in death tests: // // On POSIX-compliant systems (*nix), we use the library, // which uses the POSIX extended regex syntax. // // On other platforms (e.g. Windows), we only support a simple regex // syntax implemented as part of Google Test. This limited // implementation should be enough most of the time when writing // death tests; though it lacks many features you can find in PCRE // or POSIX extended regex syntax. For example, we don't support // union ("x|y"), grouping ("(xy)"), brackets ("[xy]"), and // repetition count ("x{5,7}"), among others. 
// // Below is the syntax that we do support. We chose it to be a // subset of both PCRE and POSIX extended regex, so it's easy to // learn wherever you come from. In the following: 'A' denotes a // literal character, period (.), or a single \\ escape sequence; // 'x' and 'y' denote regular expressions; 'm' and 'n' are for // natural numbers. // // c matches any literal character c // \\d matches any decimal digit // \\D matches any character that's not a decimal digit // \\f matches \f // \\n matches \n // \\r matches \r // \\s matches any ASCII whitespace, including \n // \\S matches any character that's not a whitespace // \\t matches \t // \\v matches \v // \\w matches any letter, _, or decimal digit // \\W matches any character that \\w doesn't match // \\c matches any literal character c, which must be a punctuation // . matches any single character except \n // A? matches 0 or 1 occurrences of A // A* matches 0 or many occurrences of A // A+ matches 1 or many occurrences of A // ^ matches the beginning of a string (not that of each line) // $ matches the end of a string (not that of each line) // xy matches x followed by y // // If you accidentally use PCRE or POSIX extended regex features // not implemented by us, you will get a run-time failure. In that // case, please try to rewrite your regular expression within the // above syntax. // // This implementation is *not* meant to be as highly tuned or robust // as a compiled regex library, but should perform well enough for a // death test, which already incurs significant overhead by launching // a child process. // // Known caveats: // // A "threadsafe" style death test obtains the path to the test // program from argv[0] and re-executes it in the sub-process. For // simplicity, the current implementation doesn't search the PATH // when launching the sub-process. This means that the user must // invoke the test program via a path that contains at least one // path separator (e.g. path/to/foo_test and // /absolute/path/to/bar_test are fine, but foo_test is not). This // is rarely a problem as people usually don't put the test binary // directory in PATH. // // TODO(wan@google.com): make thread-safe death tests search the PATH. // Asserts that a given statement causes the program to exit, with an // integer exit status that satisfies predicate, and emitting error output // that matches regex. # define ASSERT_EXIT(statement, predicate, regex) \ GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_FATAL_FAILURE_) // Like ASSERT_EXIT, but continues on to successive tests in the // test case, if any: # define EXPECT_EXIT(statement, predicate, regex) \ GTEST_DEATH_TEST_(statement, predicate, regex, GTEST_NONFATAL_FAILURE_) // Asserts that a given statement causes the program to exit, either by // explicitly exiting with a nonzero exit code or being killed by a // signal, and emitting error output that matches regex. # define ASSERT_DEATH(statement, regex) \ ASSERT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex) // Like ASSERT_DEATH, but continues on to successive tests in the // test case, if any: # define EXPECT_DEATH(statement, regex) \ EXPECT_EXIT(statement, ::testing::internal::ExitedUnsuccessfully, regex) // Two predicate classes that can be used in {ASSERT,EXPECT}_EXIT*: // Tests that an exit code describes a normal exit with a given exit code. 
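// --- Illustrative usage sketch (not part of the original Google Test
// header): the death test macros defined above.  DieWithMessage is a
// hypothetical helper and <cstdio>/<cstdlib> are assumed to be included;
// wrapped in "#if 0" so the bundled sources are unaffected.
#if 0
void DieWithMessage(const char* msg) {   // hypothetical helper
  std::fprintf(stderr, "%s\n", msg);
  std::abort();
}

TEST(DeathSketchTest, DiesOnBadInput) {
  // The regex argument is matched against the child process's stderr output.
  EXPECT_DEATH(DieWithMessage("bad input"), "bad input");
  // ExitedWithCode (declared just below) instead checks a normal exit status.
  ASSERT_EXIT(std::exit(1), ::testing::ExitedWithCode(1), "");
  // Running with --gtest_death_test_style=threadsafe re-executes the test
  // binary for each death test, as described at the top of this header.
}
#endif  // 0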
class GTEST_API_ ExitedWithCode { public: explicit ExitedWithCode(int exit_code); bool operator()(int exit_status) const; private: // No implementation - assignment is unsupported. void operator=(const ExitedWithCode& other); const int exit_code_; }; # if !GTEST_OS_WINDOWS // Tests that an exit code describes an exit due to termination by a // given signal. class GTEST_API_ KilledBySignal { public: explicit KilledBySignal(int signum); bool operator()(int exit_status) const; private: const int signum_; }; # endif // !GTEST_OS_WINDOWS // EXPECT_DEBUG_DEATH asserts that the given statements die in debug mode. // The death testing framework causes this to have interesting semantics, // since the sideeffects of the call are only visible in opt mode, and not // in debug mode. // // In practice, this can be used to test functions that utilize the // LOG(DFATAL) macro using the following style: // // int DieInDebugOr12(int* sideeffect) { // if (sideeffect) { // *sideeffect = 12; // } // LOG(DFATAL) << "death"; // return 12; // } // // TEST(TestCase, TestDieOr12WorksInDgbAndOpt) { // int sideeffect = 0; // // Only asserts in dbg. // EXPECT_DEBUG_DEATH(DieInDebugOr12(&sideeffect), "death"); // // #ifdef NDEBUG // // opt-mode has sideeffect visible. // EXPECT_EQ(12, sideeffect); // #else // // dbg-mode no visible sideeffect. // EXPECT_EQ(0, sideeffect); // #endif // } // // This will assert that DieInDebugReturn12InOpt() crashes in debug // mode, usually due to a DCHECK or LOG(DFATAL), but returns the // appropriate fallback value (12 in this case) in opt mode. If you // need to test that a function has appropriate side-effects in opt // mode, include assertions against the side-effects. A general // pattern for this is: // // EXPECT_DEBUG_DEATH({ // // Side-effects here will have an effect after this statement in // // opt mode, but none in debug mode. // EXPECT_EQ(12, DieInDebugOr12(&sideeffect)); // }, "death"); // # ifdef NDEBUG # define EXPECT_DEBUG_DEATH(statement, regex) \ GTEST_EXECUTE_STATEMENT_(statement, regex) # define ASSERT_DEBUG_DEATH(statement, regex) \ GTEST_EXECUTE_STATEMENT_(statement, regex) # else # define EXPECT_DEBUG_DEATH(statement, regex) \ EXPECT_DEATH(statement, regex) # define ASSERT_DEBUG_DEATH(statement, regex) \ ASSERT_DEATH(statement, regex) # endif // NDEBUG for EXPECT_DEBUG_DEATH #endif // GTEST_HAS_DEATH_TEST // EXPECT_DEATH_IF_SUPPORTED(statement, regex) and // ASSERT_DEATH_IF_SUPPORTED(statement, regex) expand to real death tests if // death tests are supported; otherwise they just issue a warning. This is // useful when you are combining death test assertions with normal test // assertions in one test. #if GTEST_HAS_DEATH_TEST # define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \ EXPECT_DEATH(statement, regex) # define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \ ASSERT_DEATH(statement, regex) #else # define EXPECT_DEATH_IF_SUPPORTED(statement, regex) \ GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, ) # define ASSERT_DEATH_IF_SUPPORTED(statement, regex) \ GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, return) #endif } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_DEATH_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-message.h000066400000000000000000000217421346026536000237730ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the Message class. // // IMPORTANT NOTE: Due to limitation of the C++ language, we have to // leave some internal implementation details in this header file. // They are clearly marked by comments like this: // // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // // Such code is NOT meant to be used by a user directly, and is subject // to CHANGE WITHOUT NOTICE. Therefore DO NOT DEPEND ON IT in a user // program! #ifndef GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ #define GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ #include #include "gtest/internal/gtest-port.h" // Ensures that there is at least one operator<< in the global namespace. // See Message& operator<<(...) below for why. void operator<<(const testing::internal::Secret&, int); namespace testing { // The Message class works like an ostream repeater. // // Typical usage: // // 1. You stream a bunch of values to a Message object. // It will remember the text in a stringstream. // 2. Then you stream the Message object to an ostream. // This causes the text in the Message to be streamed // to the ostream. // // For example; // // testing::Message foo; // foo << 1 << " != " << 2; // std::cout << foo; // // will print "1 != 2". // // Message is not intended to be inherited from. In particular, its // destructor is not virtual. // // Note that stringstream behaves differently in gcc and in MSVC. You // can stream a NULL char pointer to it in the former, but not in the // latter (it causes an access violation if you do). The Message // class hides this difference by treating a NULL char pointer as // "(null)". class GTEST_API_ Message { private: // The type of basic IO manipulators (endl, ends, and flush) for // narrow streams. typedef std::ostream& (*BasicNarrowIoManip)(std::ostream&); public: // Constructs an empty Message. Message(); // Copy constructor. 
Message(const Message& msg) : ss_(new ::std::stringstream) { // NOLINT *ss_ << msg.GetString(); } // Constructs a Message from a C-string. explicit Message(const char* str) : ss_(new ::std::stringstream) { *ss_ << str; } #if GTEST_OS_SYMBIAN // Streams a value (either a pointer or not) to this object. template inline Message& operator <<(const T& value) { StreamHelper(typename internal::is_pointer::type(), value); return *this; } #else // Streams a non-pointer value to this object. template inline Message& operator <<(const T& val) { // Some libraries overload << for STL containers. These // overloads are defined in the global namespace instead of ::std. // // C++'s symbol lookup rule (i.e. Koenig lookup) says that these // overloads are visible in either the std namespace or the global // namespace, but not other namespaces, including the testing // namespace which Google Test's Message class is in. // // To allow STL containers (and other types that has a << operator // defined in the global namespace) to be used in Google Test // assertions, testing::Message must access the custom << operator // from the global namespace. With this using declaration, // overloads of << defined in the global namespace and those // visible via Koenig lookup are both exposed in this function. using ::operator <<; *ss_ << val; return *this; } // Streams a pointer value to this object. // // This function is an overload of the previous one. When you // stream a pointer to a Message, this definition will be used as it // is more specialized. (The C++ Standard, section // [temp.func.order].) If you stream a non-pointer, then the // previous definition will be used. // // The reason for this overload is that streaming a NULL pointer to // ostream is undefined behavior. Depending on the compiler, you // may get "0", "(nil)", "(null)", or an access violation. To // ensure consistent result across compilers, we always treat NULL // as "(null)". template inline Message& operator <<(T* const& pointer) { // NOLINT if (pointer == NULL) { *ss_ << "(null)"; } else { *ss_ << pointer; } return *this; } #endif // GTEST_OS_SYMBIAN // Since the basic IO manipulators are overloaded for both narrow // and wide streams, we have to provide this specialized definition // of operator <<, even though its body is the same as the // templatized version above. Without this definition, streaming // endl or other basic IO manipulators to Message will confuse the // compiler. Message& operator <<(BasicNarrowIoManip val) { *ss_ << val; return *this; } // Instead of 1/0, we want to see true/false for bool values. Message& operator <<(bool b) { return *this << (b ? "true" : "false"); } // These two overloads allow streaming a wide C string to a Message // using the UTF-8 encoding. Message& operator <<(const wchar_t* wide_c_str); Message& operator <<(wchar_t* wide_c_str); #if GTEST_HAS_STD_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& operator <<(const ::std::wstring& wstr); #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_GLOBAL_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& operator <<(const ::wstring& wstr); #endif // GTEST_HAS_GLOBAL_WSTRING // Gets the text streamed to this object so far as an std::string. // Each '\0' character in the buffer is replaced with "\\0". // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. 
std::string GetString() const; private: #if GTEST_OS_SYMBIAN // These are needed as the Nokia Symbian Compiler cannot decide between // const T& and const T* in a function template. The Nokia compiler _can_ // decide between class template specializations for T and T*, so a // tr1::type_traits-like is_pointer works, and we can overload on that. template inline void StreamHelper(internal::true_type /*is_pointer*/, T* pointer) { if (pointer == NULL) { *ss_ << "(null)"; } else { *ss_ << pointer; } } template inline void StreamHelper(internal::false_type /*is_pointer*/, const T& value) { // See the comments in Message& operator <<(const T&) above for why // we need this using statement. using ::operator <<; *ss_ << value; } #endif // GTEST_OS_SYMBIAN // We'll hold the text streamed to this object here. const internal::scoped_ptr< ::std::stringstream> ss_; // We declare (but don't implement) this to prevent the compiler // from implementing the assignment operator. void operator=(const Message&); }; // Streams a Message to an ostream. inline std::ostream& operator <<(std::ostream& os, const Message& sb) { return os << sb.GetString(); } namespace internal { // Converts a streamable value to an std::string. A NULL pointer is // converted to "(null)". When the input value is a ::string, // ::std::string, ::wstring, or ::std::wstring object, each NUL // character in it is replaced with "\\0". template std::string StreamableToString(const T& streamable) { return (Message() << streamable).GetString(); } } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_MESSAGE_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-param-test.h000066400000000000000000002241301346026536000244200ustar00rootroot00000000000000// This file was GENERATED by command: // pump.py gtest-param-test.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
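// --- Illustrative usage sketch (not part of the original Google Test
// sources): the testing::Message class defined in gtest-message.h just
// above, used both directly and through SCOPED_TRACE (declared in
// gtest.h earlier in this bundle).  The test name is hypothetical and
// <iostream> is assumed; wrapped in "#if 0" so the bundled sources are
// unaffected.
#if 0
TEST(MessageSketchTest, StreamsValuesLikeAnOstream) {
  for (int i = 0; i < 3; i++) {
    // Message collects the streamed values; SCOPED_TRACE attaches the text
    // to every failure generated inside this scope.
    SCOPED_TRACE(::testing::Message() << "iteration " << i);
    EXPECT_LT(i, 3);
  }

  ::testing::Message msg;
  msg << 1 << " != " << 2;
  std::cout << msg << std::endl;   // operator<<(ostream&, const Message&) prints "1 != 2"
}
#endif  // 0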
// // Authors: vladl@google.com (Vlad Losev) // // Macros and functions for implementing parameterized tests // in Google C++ Testing Framework (Google Test) // // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // #ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ // Value-parameterized tests allow you to test your code with different // parameters without writing multiple copies of the same test. // // Here is how you use value-parameterized tests: #if 0 // To write value-parameterized tests, first you should define a fixture // class. It is usually derived from testing::TestWithParam (see below for // another inheritance scheme that's sometimes useful in more complicated // class hierarchies), where the type of your parameter values. // TestWithParam is itself derived from testing::Test. T can be any // copyable type. If it's a raw pointer, you are responsible for managing the // lifespan of the pointed values. class FooTest : public ::testing::TestWithParam { // You can implement all the usual class fixture members here. }; // Then, use the TEST_P macro to define as many parameterized tests // for this fixture as you want. The _P suffix is for "parameterized" // or "pattern", whichever you prefer to think. TEST_P(FooTest, DoesBlah) { // Inside a test, access the test parameter with the GetParam() method // of the TestWithParam class: EXPECT_TRUE(foo.Blah(GetParam())); ... } TEST_P(FooTest, HasBlahBlah) { ... } // Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test // case with any set of parameters you want. Google Test defines a number // of functions for generating test parameters. They return what we call // (surprise!) parameter generators. Here is a summary of them, which // are all in the testing namespace: // // // Range(begin, end [, step]) - Yields values {begin, begin+step, // begin+step+step, ...}. The values do not // include end. step defaults to 1. // Values(v1, v2, ..., vN) - Yields values {v1, v2, ..., vN}. // ValuesIn(container) - Yields values from a C-style array, an STL // ValuesIn(begin,end) container, or an iterator range [begin, end). // Bool() - Yields sequence {false, true}. // Combine(g1, g2, ..., gN) - Yields all combinations (the Cartesian product // for the math savvy) of the values generated // by the N generators. // // For more details, see comments at the definitions of these functions below // in this file. // // The following statement will instantiate tests from the FooTest test case // each with parameter values "meeny", "miny", and "moe". INSTANTIATE_TEST_CASE_P(InstantiationName, FooTest, Values("meeny", "miny", "moe")); // To distinguish different instances of the pattern, (yes, you // can instantiate it more then once) the first argument to the // INSTANTIATE_TEST_CASE_P macro is a prefix that will be added to the // actual test case name. Remember to pick unique prefixes for different // instantiations. The tests from the instantiation above will have // these names: // // * InstantiationName/FooTest.DoesBlah/0 for "meeny" // * InstantiationName/FooTest.DoesBlah/1 for "miny" // * InstantiationName/FooTest.DoesBlah/2 for "moe" // * InstantiationName/FooTest.HasBlahBlah/0 for "meeny" // * InstantiationName/FooTest.HasBlahBlah/1 for "miny" // * InstantiationName/FooTest.HasBlahBlah/2 for "moe" // // You can use these names in --gtest_filter. 
// // This statement will instantiate all tests from FooTest again, each // with parameter values "cat" and "dog": const char* pets[] = {"cat", "dog"}; INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ValuesIn(pets)); // The tests from the instantiation above will have these names: // // * AnotherInstantiationName/FooTest.DoesBlah/0 for "cat" // * AnotherInstantiationName/FooTest.DoesBlah/1 for "dog" // * AnotherInstantiationName/FooTest.HasBlahBlah/0 for "cat" // * AnotherInstantiationName/FooTest.HasBlahBlah/1 for "dog" // // Please note that INSTANTIATE_TEST_CASE_P will instantiate all tests // in the given test case, whether their definitions come before or // AFTER the INSTANTIATE_TEST_CASE_P statement. // // Please also note that generator expressions (including parameters to the // generators) are evaluated in InitGoogleTest(), after main() has started. // This allows the user on one hand, to adjust generator parameters in order // to dynamically determine a set of tests to run and on the other hand, // give the user a chance to inspect the generated tests with Google Test // reflection API before RUN_ALL_TESTS() is executed. // // You can see samples/sample7_unittest.cc and samples/sample8_unittest.cc // for more examples. // // In the future, we plan to publish the API for defining new parameter // generators. But for now this interface remains part of the internal // implementation and is subject to change. // // // A parameterized test fixture must be derived from testing::Test and from // testing::WithParamInterface, where T is the type of the parameter // values. Inheriting from TestWithParam satisfies that requirement because // TestWithParam inherits from both Test and WithParamInterface. In more // complicated hierarchies, however, it is occasionally useful to inherit // separately from Test and WithParamInterface. For example: class BaseTest : public ::testing::Test { // You can inherit all the usual members for a non-parameterized test // fixture here. }; class DerivedTest : public BaseTest, public ::testing::WithParamInterface { // The usual test fixture members go here too. }; TEST_F(BaseTest, HasFoo) { // This is an ordinary non-parameterized test. } TEST_P(DerivedTest, DoesBlah) { // GetParam works just the same here as if you inherit from TestWithParam. EXPECT_TRUE(foo.Blah(GetParam())); } #endif // 0 #include "gtest/internal/gtest-port.h" #if !GTEST_OS_SYMBIAN # include #endif // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-param-util.h" #include "gtest/internal/gtest-param-util-generated.h" #if GTEST_HAS_PARAM_TEST namespace testing { // Functions producing parameter generators. // // Google Test uses these generators to produce parameters for value- // parameterized tests. When a parameterized test case is instantiated // with a particular generator, Google Test creates and runs tests // for each element in the sequence produced by the generator. // // In the following sample, tests from test case FooTest are instantiated // each three times with parameter values 3, 5, and 8: // // class FooTest : public TestWithParam { ... }; // // TEST_P(FooTest, TestThis) { // } // TEST_P(FooTest, TestThat) { // } // INSTANTIATE_TEST_CASE_P(TestSequence, FooTest, Values(3, 5, 8)); // // Range() returns generators providing sequences of values in a range. 
// // Synopsis: // Range(start, end) // - returns a generator producing a sequence of values {start, start+1, // start+2, ..., }. // Range(start, end, step) // - returns a generator producing a sequence of values {start, start+step, // start+step+step, ..., }. // Notes: // * The generated sequences never include end. For example, Range(1, 5) // returns a generator producing a sequence {1, 2, 3, 4}. Range(1, 9, 2) // returns a generator producing {1, 3, 5, 7}. // * start and end must have the same type. That type may be any integral or // floating-point type or a user defined type satisfying these conditions: // * It must be assignable (have operator=() defined). // * It must have operator+() (operator+(int-compatible type) for // two-operand version). // * It must have operator<() defined. // Elements in the resulting sequences will also have that type. // * Condition start < end must be satisfied in order for resulting sequences // to contain any elements. // template internal::ParamGenerator Range(T start, T end, IncrementT step) { return internal::ParamGenerator( new internal::RangeGenerator(start, end, step)); } template internal::ParamGenerator Range(T start, T end) { return Range(start, end, 1); } // ValuesIn() function allows generation of tests with parameters coming from // a container. // // Synopsis: // ValuesIn(const T (&array)[N]) // - returns a generator producing sequences with elements from // a C-style array. // ValuesIn(const Container& container) // - returns a generator producing sequences with elements from // an STL-style container. // ValuesIn(Iterator begin, Iterator end) // - returns a generator producing sequences with elements from // a range [begin, end) defined by a pair of STL-style iterators. These // iterators can also be plain C pointers. // // Please note that ValuesIn copies the values from the containers // passed in and keeps them to generate tests in RUN_ALL_TESTS(). 
// // Examples: // // This instantiates tests from test case StringTest // each with C-string values of "foo", "bar", and "baz": // // const char* strings[] = {"foo", "bar", "baz"}; // INSTANTIATE_TEST_CASE_P(StringSequence, SrtingTest, ValuesIn(strings)); // // This instantiates tests from test case StlStringTest // each with STL strings with values "a" and "b": // // ::std::vector< ::std::string> GetParameterStrings() { // ::std::vector< ::std::string> v; // v.push_back("a"); // v.push_back("b"); // return v; // } // // INSTANTIATE_TEST_CASE_P(CharSequence, // StlStringTest, // ValuesIn(GetParameterStrings())); // // // This will also instantiate tests from CharTest // each with parameter values 'a' and 'b': // // ::std::list GetParameterChars() { // ::std::list list; // list.push_back('a'); // list.push_back('b'); // return list; // } // ::std::list l = GetParameterChars(); // INSTANTIATE_TEST_CASE_P(CharSequence2, // CharTest, // ValuesIn(l.begin(), l.end())); // template internal::ParamGenerator< typename ::testing::internal::IteratorTraits::value_type> ValuesIn(ForwardIterator begin, ForwardIterator end) { typedef typename ::testing::internal::IteratorTraits ::value_type ParamType; return internal::ParamGenerator( new internal::ValuesInIteratorRangeGenerator(begin, end)); } template internal::ParamGenerator ValuesIn(const T (&array)[N]) { return ValuesIn(array, array + N); } template internal::ParamGenerator ValuesIn( const Container& container) { return ValuesIn(container.begin(), container.end()); } // Values() allows generating tests from explicitly specified list of // parameters. // // Synopsis: // Values(T v1, T v2, ..., T vN) // - returns a generator producing sequences with elements v1, v2, ..., vN. // // For example, this instantiates tests from test case BarTest each // with values "one", "two", and "three": // // INSTANTIATE_TEST_CASE_P(NumSequence, BarTest, Values("one", "two", "three")); // // This instantiates tests from test case BazTest each with values 1, 2, 3.5. // The exact type of values will depend on the type of parameter in BazTest. // // INSTANTIATE_TEST_CASE_P(FloatingNumbers, BazTest, Values(1, 2, 3.5)); // // Currently, Values() supports from 1 to 50 parameters. 
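// --- Illustrative usage sketch (not part of the original Google Test
// header): a complete value-parameterized test combining TEST_P and
// INSTANTIATE_TEST_CASE_P with the Range(), Values(), and ValuesIn()
// generators documented above.  RangeSketchTest and kFromArray are
// hypothetical names; wrapped in "#if 0" like the longer example near
// the top of this header.
#if 0
class RangeSketchTest : public ::testing::TestWithParam<int> {
  // Ordinary fixture members may be added here.
};

TEST_P(RangeSketchTest, ParameterIsPositive) {
  EXPECT_GT(GetParam(), 0);
}

// Three instantiations of the same test pattern with different generators:
INSTANTIATE_TEST_CASE_P(SmallRange, RangeSketchTest,
                        ::testing::Range(1, 5));           // yields 1, 2, 3, 4
INSTANTIATE_TEST_CASE_P(HandPicked, RangeSketchTest,
                        ::testing::Values(10, 20, 30));
const int kFromArray[] = {7, 11, 13};
INSTANTIATE_TEST_CASE_P(FromArray, RangeSketchTest,
                        ::testing::ValuesIn(kFromArray));
#endif  // 0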
// template internal::ValueArray1 Values(T1 v1) { return internal::ValueArray1(v1); } template internal::ValueArray2 Values(T1 v1, T2 v2) { return internal::ValueArray2(v1, v2); } template internal::ValueArray3 Values(T1 v1, T2 v2, T3 v3) { return internal::ValueArray3(v1, v2, v3); } template internal::ValueArray4 Values(T1 v1, T2 v2, T3 v3, T4 v4) { return internal::ValueArray4(v1, v2, v3, v4); } template internal::ValueArray5 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) { return internal::ValueArray5(v1, v2, v3, v4, v5); } template internal::ValueArray6 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6) { return internal::ValueArray6(v1, v2, v3, v4, v5, v6); } template internal::ValueArray7 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7) { return internal::ValueArray7(v1, v2, v3, v4, v5, v6, v7); } template internal::ValueArray8 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8) { return internal::ValueArray8(v1, v2, v3, v4, v5, v6, v7, v8); } template internal::ValueArray9 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9) { return internal::ValueArray9(v1, v2, v3, v4, v5, v6, v7, v8, v9); } template internal::ValueArray10 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10) { return internal::ValueArray10(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10); } template internal::ValueArray11 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11) { return internal::ValueArray11(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11); } template internal::ValueArray12 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12) { return internal::ValueArray12(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12); } template internal::ValueArray13 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13) { return internal::ValueArray13(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13); } template internal::ValueArray14 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14) { return internal::ValueArray14(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14); } template internal::ValueArray15 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) { return internal::ValueArray15(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15); } template internal::ValueArray16 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16) { return internal::ValueArray16(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16); } template internal::ValueArray17 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17) { return internal::ValueArray17(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17); } template internal::ValueArray18 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18) { return internal::ValueArray18(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18); } template internal::ValueArray19 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19) { return 
internal::ValueArray19(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19); } template internal::ValueArray20 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20) { return internal::ValueArray20(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20); } template internal::ValueArray21 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21) { return internal::ValueArray21(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21); } template internal::ValueArray22 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22) { return internal::ValueArray22(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22); } template internal::ValueArray23 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23) { return internal::ValueArray23(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23); } template internal::ValueArray24 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24) { return internal::ValueArray24(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24); } template internal::ValueArray25 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25) { return internal::ValueArray25(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25); } template internal::ValueArray26 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26) { return internal::ValueArray26(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26); } template internal::ValueArray27 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27) { return internal::ValueArray27(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27); } template internal::ValueArray28 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28) { return internal::ValueArray28(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, 
v28); } template internal::ValueArray29 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29) { return internal::ValueArray29(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29); } template internal::ValueArray30 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) { return internal::ValueArray30(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30); } template internal::ValueArray31 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) { return internal::ValueArray31(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31); } template internal::ValueArray32 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32) { return internal::ValueArray32(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32); } template internal::ValueArray33 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33) { return internal::ValueArray33(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33); } template internal::ValueArray34 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34) { return internal::ValueArray34(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34); } template internal::ValueArray35 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35) { return internal::ValueArray35(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35); } template internal::ValueArray36 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, 
T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36) { return internal::ValueArray36(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36); } template internal::ValueArray37 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37) { return internal::ValueArray37(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37); } template internal::ValueArray38 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38) { return internal::ValueArray38(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38); } template internal::ValueArray39 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39) { return internal::ValueArray39(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39); } template internal::ValueArray40 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) { return internal::ValueArray40(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40); } template internal::ValueArray41 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41) { return internal::ValueArray41(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41); } template internal::ValueArray42 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, 
T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42) { return internal::ValueArray42(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42); } template internal::ValueArray43 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43) { return internal::ValueArray43(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43); } template internal::ValueArray44 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44) { return internal::ValueArray44(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44); } template internal::ValueArray45 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45) { return internal::ValueArray45(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45); } template internal::ValueArray46 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) { return internal::ValueArray46(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46); } template internal::ValueArray47 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 
v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) { return internal::ValueArray47(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47); } template internal::ValueArray48 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48) { return internal::ValueArray48(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48); } template internal::ValueArray49 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49) { return internal::ValueArray49(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48, v49); } template internal::ValueArray50 Values(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49, T50 v50) { return internal::ValueArray50(v1, v2, v3, v4, v5, v6, v7, v8, v9, v10, v11, v12, v13, v14, v15, v16, v17, v18, v19, v20, v21, v22, v23, v24, v25, v26, v27, v28, v29, v30, v31, v32, v33, v34, v35, v36, v37, v38, v39, v40, v41, v42, v43, v44, v45, v46, v47, v48, v49, v50); } // Bool() allows generating tests with parameters in a set of (false, true). // // Synopsis: // Bool() // - returns a generator producing sequences with elements {false, true}. // // It is useful when testing code that depends on Boolean flags. Combinations // of multiple flags can be tested when several Bool()'s are combined using // Combine() function. // // In the following example all tests in the test case FlagDependentTest // will be instantiated twice with parameters false and true. // // class FlagDependentTest : public testing::TestWithParam { // virtual void SetUp() { // external_flag = GetParam(); // } // } // INSTANTIATE_TEST_CASE_P(BoolSequence, FlagDependentTest, Bool()); // inline internal::ParamGenerator Bool() { return Values(false, true); } # if GTEST_HAS_COMBINE // Combine() allows the user to combine two or more sequences to produce // values of a Cartesian product of those sequences' elements. 
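//
// For instance, combining Bool() with Values(1, 2) yields the four tuples
// (false, 1), (false, 2), (true, 1) and (true, 2). A minimal sketch of how
// such a product is typically consumed (the fixture name CacheTest and the
// variables use_cache and level are hypothetical, not part of Google Test):
//
//   class CacheTest
//       : public ::testing::TestWithParam< ::std::tr1::tuple<bool, int> > {};
//
//   TEST_P(CacheTest, HandlesAllSettings) {
//     const bool use_cache = ::std::tr1::get<0>(GetParam());
//     const int level = ::std::tr1::get<1>(GetParam());
//     // Exercise the code under test with use_cache and level here.
//   }
//
//   INSTANTIATE_TEST_CASE_P(AllSettings, CacheTest,
//                           ::testing::Combine(::testing::Bool(),
//                                              ::testing::Values(1, 2)));
//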
// // Synopsis: // Combine(gen1, gen2, ..., genN) // - returns a generator producing sequences with elements coming from // the Cartesian product of elements from the sequences generated by // gen1, gen2, ..., genN. The sequence elements will have a type of // tuple where T1, T2, ..., TN are the types // of elements from sequences produces by gen1, gen2, ..., genN. // // Combine can have up to 10 arguments. This number is currently limited // by the maximum number of elements in the tuple implementation used by Google // Test. // // Example: // // This will instantiate tests in test case AnimalTest each one with // the parameter values tuple("cat", BLACK), tuple("cat", WHITE), // tuple("dog", BLACK), and tuple("dog", WHITE): // // enum Color { BLACK, GRAY, WHITE }; // class AnimalTest // : public testing::TestWithParam > {...}; // // TEST_P(AnimalTest, AnimalLooksNice) {...} // // INSTANTIATE_TEST_CASE_P(AnimalVariations, AnimalTest, // Combine(Values("cat", "dog"), // Values(BLACK, WHITE))); // // This will instantiate tests in FlagDependentTest with all variations of two // Boolean flags: // // class FlagDependentTest // : public testing::TestWithParam > { // virtual void SetUp() { // // Assigns external_flag_1 and external_flag_2 values from the tuple. // tie(external_flag_1, external_flag_2) = GetParam(); // } // }; // // TEST_P(FlagDependentTest, TestFeature1) { // // Test your code using external_flag_1 and external_flag_2 here. // } // INSTANTIATE_TEST_CASE_P(TwoBoolSequence, FlagDependentTest, // Combine(Bool(), Bool())); // template internal::CartesianProductHolder2 Combine( const Generator1& g1, const Generator2& g2) { return internal::CartesianProductHolder2( g1, g2); } template internal::CartesianProductHolder3 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3) { return internal::CartesianProductHolder3( g1, g2, g3); } template internal::CartesianProductHolder4 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4) { return internal::CartesianProductHolder4( g1, g2, g3, g4); } template internal::CartesianProductHolder5 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5) { return internal::CartesianProductHolder5( g1, g2, g3, g4, g5); } template internal::CartesianProductHolder6 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6) { return internal::CartesianProductHolder6( g1, g2, g3, g4, g5, g6); } template internal::CartesianProductHolder7 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7) { return internal::CartesianProductHolder7( g1, g2, g3, g4, g5, g6, g7); } template internal::CartesianProductHolder8 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8) { return internal::CartesianProductHolder8( g1, g2, g3, g4, g5, g6, g7, g8); } template internal::CartesianProductHolder9 Combine( const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9) { return internal::CartesianProductHolder9( g1, g2, g3, g4, g5, g6, g7, g8, g9); } template internal::CartesianProductHolder10 Combine( const 
Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9, const Generator10& g10) { return internal::CartesianProductHolder10( g1, g2, g3, g4, g5, g6, g7, g8, g9, g10); } # endif // GTEST_HAS_COMBINE # define TEST_P(test_case_name, test_name) \ class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \ : public test_case_name { \ public: \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {} \ virtual void TestBody(); \ private: \ static int AddToRegistry() { \ ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \ GetTestCasePatternHolder(\ #test_case_name, __FILE__, __LINE__)->AddTestPattern(\ #test_case_name, \ #test_name, \ new ::testing::internal::TestMetaFactory< \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>()); \ return 0; \ } \ static int gtest_registering_dummy_; \ GTEST_DISALLOW_COPY_AND_ASSIGN_(\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)); \ }; \ int GTEST_TEST_CLASS_NAME_(test_case_name, \ test_name)::gtest_registering_dummy_ = \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::AddToRegistry(); \ void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody() # define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator) \ ::testing::internal::ParamGenerator \ gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } \ int gtest_##prefix##test_case_name##_dummy_ = \ ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \ GetTestCasePatternHolder(\ #test_case_name, __FILE__, __LINE__)->AddTestCaseInstantiation(\ #prefix, \ >est_##prefix##test_case_name##_EvalGenerator_, \ __FILE__, __LINE__) } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-param-test.h.pump000066400000000000000000000445541346026536000254120ustar00rootroot00000000000000$$ -*- mode: c++; -*- $var n = 50 $$ Maximum length of Values arguments we want to support. $var maxtuple = 10 $$ Maximum number of Combine arguments we want to support. // Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: vladl@google.com (Vlad Losev) // // Macros and functions for implementing parameterized tests // in Google C++ Testing Framework (Google Test) // // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // #ifndef GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ // Value-parameterized tests allow you to test your code with different // parameters without writing multiple copies of the same test. // // Here is how you use value-parameterized tests: #if 0 // To write value-parameterized tests, first you should define a fixture // class. It is usually derived from testing::TestWithParam (see below for // another inheritance scheme that's sometimes useful in more complicated // class hierarchies), where the type of your parameter values. // TestWithParam is itself derived from testing::Test. T can be any // copyable type. If it's a raw pointer, you are responsible for managing the // lifespan of the pointed values. class FooTest : public ::testing::TestWithParam { // You can implement all the usual class fixture members here. }; // Then, use the TEST_P macro to define as many parameterized tests // for this fixture as you want. The _P suffix is for "parameterized" // or "pattern", whichever you prefer to think. TEST_P(FooTest, DoesBlah) { // Inside a test, access the test parameter with the GetParam() method // of the TestWithParam class: EXPECT_TRUE(foo.Blah(GetParam())); ... } TEST_P(FooTest, HasBlahBlah) { ... } // Finally, you can use INSTANTIATE_TEST_CASE_P to instantiate the test // case with any set of parameters you want. Google Test defines a number // of functions for generating test parameters. They return what we call // (surprise!) parameter generators. Here is a summary of them, which // are all in the testing namespace: // // // Range(begin, end [, step]) - Yields values {begin, begin+step, // begin+step+step, ...}. The values do not // include end. step defaults to 1. // Values(v1, v2, ..., vN) - Yields values {v1, v2, ..., vN}. // ValuesIn(container) - Yields values from a C-style array, an STL // ValuesIn(begin,end) container, or an iterator range [begin, end). // Bool() - Yields sequence {false, true}. // Combine(g1, g2, ..., gN) - Yields all combinations (the Cartesian product // for the math savvy) of the values generated // by the N generators. // // For more details, see comments at the definitions of these functions below // in this file. // // The following statement will instantiate tests from the FooTest test case // each with parameter values "meeny", "miny", and "moe". INSTANTIATE_TEST_CASE_P(InstantiationName, FooTest, Values("meeny", "miny", "moe")); // To distinguish different instances of the pattern, (yes, you // can instantiate it more then once) the first argument to the // INSTANTIATE_TEST_CASE_P macro is a prefix that will be added to the // actual test case name. Remember to pick unique prefixes for different // instantiations. 
The tests from the instantiation above will have // these names: // // * InstantiationName/FooTest.DoesBlah/0 for "meeny" // * InstantiationName/FooTest.DoesBlah/1 for "miny" // * InstantiationName/FooTest.DoesBlah/2 for "moe" // * InstantiationName/FooTest.HasBlahBlah/0 for "meeny" // * InstantiationName/FooTest.HasBlahBlah/1 for "miny" // * InstantiationName/FooTest.HasBlahBlah/2 for "moe" // // You can use these names in --gtest_filter. // // This statement will instantiate all tests from FooTest again, each // with parameter values "cat" and "dog": const char* pets[] = {"cat", "dog"}; INSTANTIATE_TEST_CASE_P(AnotherInstantiationName, FooTest, ValuesIn(pets)); // The tests from the instantiation above will have these names: // // * AnotherInstantiationName/FooTest.DoesBlah/0 for "cat" // * AnotherInstantiationName/FooTest.DoesBlah/1 for "dog" // * AnotherInstantiationName/FooTest.HasBlahBlah/0 for "cat" // * AnotherInstantiationName/FooTest.HasBlahBlah/1 for "dog" // // Please note that INSTANTIATE_TEST_CASE_P will instantiate all tests // in the given test case, whether their definitions come before or // AFTER the INSTANTIATE_TEST_CASE_P statement. // // Please also note that generator expressions (including parameters to the // generators) are evaluated in InitGoogleTest(), after main() has started. // This allows the user on one hand, to adjust generator parameters in order // to dynamically determine a set of tests to run and on the other hand, // give the user a chance to inspect the generated tests with Google Test // reflection API before RUN_ALL_TESTS() is executed. // // You can see samples/sample7_unittest.cc and samples/sample8_unittest.cc // for more examples. // // In the future, we plan to publish the API for defining new parameter // generators. But for now this interface remains part of the internal // implementation and is subject to change. // // // A parameterized test fixture must be derived from testing::Test and from // testing::WithParamInterface, where T is the type of the parameter // values. Inheriting from TestWithParam satisfies that requirement because // TestWithParam inherits from both Test and WithParamInterface. In more // complicated hierarchies, however, it is occasionally useful to inherit // separately from Test and WithParamInterface. For example: class BaseTest : public ::testing::Test { // You can inherit all the usual members for a non-parameterized test // fixture here. }; class DerivedTest : public BaseTest, public ::testing::WithParamInterface { // The usual test fixture members go here too. }; TEST_F(BaseTest, HasFoo) { // This is an ordinary non-parameterized test. } TEST_P(DerivedTest, DoesBlah) { // GetParam works just the same here as if you inherit from TestWithParam. EXPECT_TRUE(foo.Blah(GetParam())); } #endif // 0 #include "gtest/internal/gtest-port.h" #if !GTEST_OS_SYMBIAN # include #endif // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-param-util.h" #include "gtest/internal/gtest-param-util-generated.h" #if GTEST_HAS_PARAM_TEST namespace testing { // Functions producing parameter generators. // // Google Test uses these generators to produce parameters for value- // parameterized tests. 
When a parameterized test case is instantiated // with a particular generator, Google Test creates and runs tests // for each element in the sequence produced by the generator. // // In the following sample, tests from test case FooTest are instantiated // each three times with parameter values 3, 5, and 8: // // class FooTest : public TestWithParam { ... }; // // TEST_P(FooTest, TestThis) { // } // TEST_P(FooTest, TestThat) { // } // INSTANTIATE_TEST_CASE_P(TestSequence, FooTest, Values(3, 5, 8)); // // Range() returns generators providing sequences of values in a range. // // Synopsis: // Range(start, end) // - returns a generator producing a sequence of values {start, start+1, // start+2, ..., }. // Range(start, end, step) // - returns a generator producing a sequence of values {start, start+step, // start+step+step, ..., }. // Notes: // * The generated sequences never include end. For example, Range(1, 5) // returns a generator producing a sequence {1, 2, 3, 4}. Range(1, 9, 2) // returns a generator producing {1, 3, 5, 7}. // * start and end must have the same type. That type may be any integral or // floating-point type or a user defined type satisfying these conditions: // * It must be assignable (have operator=() defined). // * It must have operator+() (operator+(int-compatible type) for // two-operand version). // * It must have operator<() defined. // Elements in the resulting sequences will also have that type. // * Condition start < end must be satisfied in order for resulting sequences // to contain any elements. // template internal::ParamGenerator Range(T start, T end, IncrementT step) { return internal::ParamGenerator( new internal::RangeGenerator(start, end, step)); } template internal::ParamGenerator Range(T start, T end) { return Range(start, end, 1); } // ValuesIn() function allows generation of tests with parameters coming from // a container. // // Synopsis: // ValuesIn(const T (&array)[N]) // - returns a generator producing sequences with elements from // a C-style array. // ValuesIn(const Container& container) // - returns a generator producing sequences with elements from // an STL-style container. // ValuesIn(Iterator begin, Iterator end) // - returns a generator producing sequences with elements from // a range [begin, end) defined by a pair of STL-style iterators. These // iterators can also be plain C pointers. // // Please note that ValuesIn copies the values from the containers // passed in and keeps them to generate tests in RUN_ALL_TESTS(). 
// // Examples: // // This instantiates tests from test case StringTest // each with C-string values of "foo", "bar", and "baz": // // const char* strings[] = {"foo", "bar", "baz"}; // INSTANTIATE_TEST_CASE_P(StringSequence, SrtingTest, ValuesIn(strings)); // // This instantiates tests from test case StlStringTest // each with STL strings with values "a" and "b": // // ::std::vector< ::std::string> GetParameterStrings() { // ::std::vector< ::std::string> v; // v.push_back("a"); // v.push_back("b"); // return v; // } // // INSTANTIATE_TEST_CASE_P(CharSequence, // StlStringTest, // ValuesIn(GetParameterStrings())); // // // This will also instantiate tests from CharTest // each with parameter values 'a' and 'b': // // ::std::list GetParameterChars() { // ::std::list list; // list.push_back('a'); // list.push_back('b'); // return list; // } // ::std::list l = GetParameterChars(); // INSTANTIATE_TEST_CASE_P(CharSequence2, // CharTest, // ValuesIn(l.begin(), l.end())); // template internal::ParamGenerator< typename ::testing::internal::IteratorTraits::value_type> ValuesIn(ForwardIterator begin, ForwardIterator end) { typedef typename ::testing::internal::IteratorTraits ::value_type ParamType; return internal::ParamGenerator( new internal::ValuesInIteratorRangeGenerator(begin, end)); } template internal::ParamGenerator ValuesIn(const T (&array)[N]) { return ValuesIn(array, array + N); } template internal::ParamGenerator ValuesIn( const Container& container) { return ValuesIn(container.begin(), container.end()); } // Values() allows generating tests from explicitly specified list of // parameters. // // Synopsis: // Values(T v1, T v2, ..., T vN) // - returns a generator producing sequences with elements v1, v2, ..., vN. // // For example, this instantiates tests from test case BarTest each // with values "one", "two", and "three": // // INSTANTIATE_TEST_CASE_P(NumSequence, BarTest, Values("one", "two", "three")); // // This instantiates tests from test case BazTest each with values 1, 2, 3.5. // The exact type of values will depend on the type of parameter in BazTest. // // INSTANTIATE_TEST_CASE_P(FloatingNumbers, BazTest, Values(1, 2, 3.5)); // // Currently, Values() supports from 1 to $n parameters. // $range i 1..n $for i [[ $range j 1..i template <$for j, [[typename T$j]]> internal::ValueArray$i<$for j, [[T$j]]> Values($for j, [[T$j v$j]]) { return internal::ValueArray$i<$for j, [[T$j]]>($for j, [[v$j]]); } ]] // Bool() allows generating tests with parameters in a set of (false, true). // // Synopsis: // Bool() // - returns a generator producing sequences with elements {false, true}. // // It is useful when testing code that depends on Boolean flags. Combinations // of multiple flags can be tested when several Bool()'s are combined using // Combine() function. // // In the following example all tests in the test case FlagDependentTest // will be instantiated twice with parameters false and true. // // class FlagDependentTest : public testing::TestWithParam { // virtual void SetUp() { // external_flag = GetParam(); // } // } // INSTANTIATE_TEST_CASE_P(BoolSequence, FlagDependentTest, Bool()); // inline internal::ParamGenerator Bool() { return Values(false, true); } # if GTEST_HAS_COMBINE // Combine() allows the user to combine two or more sequences to produce // values of a Cartesian product of those sequences' elements. 
// // Synopsis: // Combine(gen1, gen2, ..., genN) // - returns a generator producing sequences with elements coming from // the Cartesian product of elements from the sequences generated by // gen1, gen2, ..., genN. The sequence elements will have a type of // tuple where T1, T2, ..., TN are the types // of elements from sequences produces by gen1, gen2, ..., genN. // // Combine can have up to $maxtuple arguments. This number is currently limited // by the maximum number of elements in the tuple implementation used by Google // Test. // // Example: // // This will instantiate tests in test case AnimalTest each one with // the parameter values tuple("cat", BLACK), tuple("cat", WHITE), // tuple("dog", BLACK), and tuple("dog", WHITE): // // enum Color { BLACK, GRAY, WHITE }; // class AnimalTest // : public testing::TestWithParam > {...}; // // TEST_P(AnimalTest, AnimalLooksNice) {...} // // INSTANTIATE_TEST_CASE_P(AnimalVariations, AnimalTest, // Combine(Values("cat", "dog"), // Values(BLACK, WHITE))); // // This will instantiate tests in FlagDependentTest with all variations of two // Boolean flags: // // class FlagDependentTest // : public testing::TestWithParam > { // virtual void SetUp() { // // Assigns external_flag_1 and external_flag_2 values from the tuple. // tie(external_flag_1, external_flag_2) = GetParam(); // } // }; // // TEST_P(FlagDependentTest, TestFeature1) { // // Test your code using external_flag_1 and external_flag_2 here. // } // INSTANTIATE_TEST_CASE_P(TwoBoolSequence, FlagDependentTest, // Combine(Bool(), Bool())); // $range i 2..maxtuple $for i [[ $range j 1..i template <$for j, [[typename Generator$j]]> internal::CartesianProductHolder$i<$for j, [[Generator$j]]> Combine( $for j, [[const Generator$j& g$j]]) { return internal::CartesianProductHolder$i<$for j, [[Generator$j]]>( $for j, [[g$j]]); } ]] # endif // GTEST_HAS_COMBINE # define TEST_P(test_case_name, test_name) \ class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \ : public test_case_name { \ public: \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {} \ virtual void TestBody(); \ private: \ static int AddToRegistry() { \ ::testing::UnitTest::GetInstance()->parameterized_test_registry(). \ GetTestCasePatternHolder(\ #test_case_name, __FILE__, __LINE__)->AddTestPattern(\ #test_case_name, \ #test_name, \ new ::testing::internal::TestMetaFactory< \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>()); \ return 0; \ } \ static int gtest_registering_dummy_; \ GTEST_DISALLOW_COPY_AND_ASSIGN_(\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)); \ }; \ int GTEST_TEST_CLASS_NAME_(test_case_name, \ test_name)::gtest_registering_dummy_ = \ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::AddToRegistry(); \ void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody() # define INSTANTIATE_TEST_CASE_P(prefix, test_case_name, generator) \ ::testing::internal::ParamGenerator \ gtest_##prefix##test_case_name##_EvalGenerator_() { return generator; } \ int gtest_##prefix##test_case_name##_dummy_ = \ ::testing::UnitTest::GetInstance()->parameterized_test_registry(). 
\ GetTestCasePatternHolder(\ #test_case_name, __FILE__, __LINE__)->AddTestCaseInstantiation(\ #prefix, \ >est_##prefix##test_case_name##_EvalGenerator_, \ __FILE__, __LINE__) } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_GTEST_PARAM_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-printers.h000066400000000000000000000755711346026536000242260ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Google Test - The Google C++ Testing Framework // // This file implements a universal value printer that can print a // value of any type T: // // void ::testing::internal::UniversalPrinter::Print(value, ostream_ptr); // // A user can teach this function how to print a class type T by // defining either operator<<() or PrintTo() in the namespace that // defines T. More specifically, the FIRST defined function in the // following list will be used (assuming T is defined in namespace // foo): // // 1. foo::PrintTo(const T&, ostream*) // 2. operator<<(ostream&, const T&) defined in either foo or the // global namespace. // // If none of the above is defined, it will print the debug string of // the value if it is a protocol buffer, or print the raw bytes in the // value otherwise. // // To aid debugging: when T is a reference type, the address of the // value is also printed; when T is a (const) char pointer, both the // pointer value and the NUL-terminated string it points to are // printed. // // We also provide some convenient wrappers: // // // Prints a value to a string. For a (const or not) char // // pointer, the NUL-terminated string (but not the pointer) is // // printed. // std::string ::testing::PrintToString(const T& value); // // // Prints a value tersely: for a reference type, the referenced // // value (but not the address) is printed; for a (const or not) char // // pointer, the NUL-terminated string (but not the pointer) is // // printed. 
// void ::testing::internal::UniversalTersePrint(const T& value, ostream*); // // // Prints value using the type inferred by the compiler. The difference // // from UniversalTersePrint() is that this function prints both the // // pointer and the NUL-terminated string for a (const or not) char pointer. // void ::testing::internal::UniversalPrint(const T& value, ostream*); // // // Prints the fields of a tuple tersely to a string vector, one // // element for each field. Tuple support must be enabled in // // gtest-port.h. // std::vector UniversalTersePrintTupleFieldsToStrings( // const Tuple& value); // // Known limitation: // // The print primitives print the elements of an STL-style container // using the compiler-inferred type of *iter where iter is a // const_iterator of the container. When const_iterator is an input // iterator but not a forward iterator, this inferred type may not // match value_type, and the print output may be incorrect. In // practice, this is rarely a problem as for most containers // const_iterator is a forward iterator. We'll fix this if there's an // actual need for it. Note that this fix cannot rely on value_type // being defined as many user-defined container types don't have // value_type. #ifndef GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ #define GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ #include // NOLINT #include #include #include #include #include "gtest/internal/gtest-port.h" #include "gtest/internal/gtest-internal.h" namespace testing { // Definitions in the 'internal' and 'internal2' name spaces are // subject to change without notice. DO NOT USE THEM IN USER CODE! namespace internal2 { // Prints the given number of bytes in the given object to the given // ostream. GTEST_API_ void PrintBytesInObjectTo(const unsigned char* obj_bytes, size_t count, ::std::ostream* os); // For selecting which printer to use when a given type has neither << // nor PrintTo(). enum TypeKind { kProtobuf, // a protobuf type kConvertibleToInteger, // a type implicitly convertible to BiggestInt // (e.g. a named or unnamed enum type) kOtherType // anything else }; // TypeWithoutFormatter::PrintValue(value, os) is called // by the universal printer to print a value of type T when neither // operator<< nor PrintTo() is defined for T, where kTypeKind is the // "kind" of T as defined by enum TypeKind. template class TypeWithoutFormatter { public: // This default version is called when kTypeKind is kOtherType. static void PrintValue(const T& value, ::std::ostream* os) { PrintBytesInObjectTo(reinterpret_cast(&value), sizeof(value), os); } }; // We print a protobuf using its ShortDebugString() when the string // doesn't exceed this many characters; otherwise we print it using // DebugString() for better readability. const size_t kProtobufOneLinerMaxLength = 50; template class TypeWithoutFormatter { public: static void PrintValue(const T& value, ::std::ostream* os) { const ::testing::internal::string short_str = value.ShortDebugString(); const ::testing::internal::string pretty_str = short_str.length() <= kProtobufOneLinerMaxLength ? short_str : ("\n" + value.DebugString()); *os << ("<" + pretty_str + ">"); } }; template class TypeWithoutFormatter { public: // Since T has no << operator or PrintTo() but can be implicitly // converted to BiggestInt, we print it as a BiggestInt. // // Most likely T is an enum type (either named or unnamed), in which // case printing it as an integer is the desired behavior. 
In case // T is not an enum, printing it as an integer is the best we can do // given that it has no user-defined printer. static void PrintValue(const T& value, ::std::ostream* os) { const internal::BiggestInt kBigInt = value; *os << kBigInt; } }; // Prints the given value to the given ostream. If the value is a // protocol message, its debug string is printed; if it's an enum or // of a type implicitly convertible to BiggestInt, it's printed as an // integer; otherwise the bytes in the value are printed. This is // what UniversalPrinter::Print() does when it knows nothing about // type T and T has neither << operator nor PrintTo(). // // A user can override this behavior for a class type Foo by defining // a << operator in the namespace where Foo is defined. // // We put this operator in namespace 'internal2' instead of 'internal' // to simplify the implementation, as much code in 'internal' needs to // use << in STL, which would conflict with our own << were it defined // in 'internal'. // // Note that this operator<< takes a generic std::basic_ostream type instead of the more restricted std::ostream. If // we define it to take an std::ostream instead, we'll get an // "ambiguous overloads" compiler error when trying to print a type // Foo that supports streaming to std::basic_ostream, as the compiler cannot tell whether // operator<<(std::ostream&, const T&) or // operator<<(std::basic_stream, const Foo&) is more // specific. template ::std::basic_ostream& operator<<( ::std::basic_ostream& os, const T& x) { TypeWithoutFormatter::value ? kProtobuf : internal::ImplicitlyConvertible::value ? kConvertibleToInteger : kOtherType)>::PrintValue(x, &os); return os; } } // namespace internal2 } // namespace testing // This namespace MUST NOT BE NESTED IN ::testing, or the name look-up // magic needed for implementing UniversalPrinter won't work. namespace testing_internal { // Used to print a value that is not an STL-style container when the // user doesn't define PrintTo() for it. template void DefaultPrintNonContainerTo(const T& value, ::std::ostream* os) { // With the following statement, during unqualified name lookup, // testing::internal2::operator<< appears as if it was declared in // the nearest enclosing namespace that contains both // ::testing_internal and ::testing::internal2, i.e. the global // namespace. For more details, refer to the C++ Standard section // 7.3.4-1 [namespace.udir]. This allows us to fall back onto // testing::internal2::operator<< in case T doesn't come with a << // operator. // // We cannot write 'using ::testing::internal2::operator<<;', which // gcc 3.3 fails to compile due to a compiler bug. using namespace ::testing::internal2; // NOLINT // Assuming T is defined in namespace foo, in the next statement, // the compiler will consider all of: // // 1. foo::operator<< (thanks to Koenig look-up), // 2. ::operator<< (as the current namespace is enclosed in ::), // 3. testing::internal2::operator<< (thanks to the using statement above). // // The operator<< whose type matches T best will be picked. // // We deliberately allow #2 to be a candidate, as sometimes it's // impossible to define #1 (e.g. when foo is ::std, defining // anything in it is undefined behavior unless you are a compiler // vendor.). *os << value; } } // namespace testing_internal namespace testing { namespace internal { // UniversalPrinter::Print(value, ostream_ptr) prints the given // value to the given ostream. 
The caller must ensure that // 'ostream_ptr' is not NULL, or the behavior is undefined. // // We define UniversalPrinter as a class template (as opposed to a // function template), as we need to partially specialize it for // reference types, which cannot be done with function templates. template class UniversalPrinter; template void UniversalPrint(const T& value, ::std::ostream* os); // Used to print an STL-style container when the user doesn't define // a PrintTo() for it. template void DefaultPrintTo(IsContainer /* dummy */, false_type /* is not a pointer */, const C& container, ::std::ostream* os) { const size_t kMaxCount = 32; // The maximum number of elements to print. *os << '{'; size_t count = 0; for (typename C::const_iterator it = container.begin(); it != container.end(); ++it, ++count) { if (count > 0) { *os << ','; if (count == kMaxCount) { // Enough has been printed. *os << " ..."; break; } } *os << ' '; // We cannot call PrintTo(*it, os) here as PrintTo() doesn't // handle *it being a native array. internal::UniversalPrint(*it, os); } if (count > 0) { *os << ' '; } *os << '}'; } // Used to print a pointer that is neither a char pointer nor a member // pointer, when the user doesn't define PrintTo() for it. (A member // variable pointer or member function pointer doesn't really point to // a location in the address space. Their representation is // implementation-defined. Therefore they will be printed as raw // bytes.) template void DefaultPrintTo(IsNotContainer /* dummy */, true_type /* is a pointer */, T* p, ::std::ostream* os) { if (p == NULL) { *os << "NULL"; } else { // C++ doesn't allow casting from a function pointer to any object // pointer. // // IsTrue() silences warnings: "Condition is always true", // "unreachable code". if (IsTrue(ImplicitlyConvertible::value)) { // T is not a function type. We just call << to print p, // relying on ADL to pick up user-defined << for their pointer // types, if any. *os << p; } else { // T is a function type, so '*os << p' doesn't do what we want // (it just prints p as bool). We want to print p as a const // void*. However, we cannot cast it to const void* directly, // even using reinterpret_cast, as earlier versions of gcc // (e.g. 3.4.5) cannot compile the cast when p is a function // pointer. Casting to UInt64 first solves the problem. *os << reinterpret_cast( reinterpret_cast(p)); } } } // Used to print a non-container, non-pointer value when the user // doesn't define PrintTo() for it. template void DefaultPrintTo(IsNotContainer /* dummy */, false_type /* is not a pointer */, const T& value, ::std::ostream* os) { ::testing_internal::DefaultPrintNonContainerTo(value, os); } // Prints the given value using the << operator if it has one; // otherwise prints the bytes in it. This is what // UniversalPrinter::Print() does when PrintTo() is not specialized // or overloaded for type T. // // A user can override this behavior for a class type Foo by defining // an overload of PrintTo() in the namespace where Foo is defined. We // give the user this option as sometimes defining a << operator for // Foo is not desirable (e.g. the coding style may prevent doing it, // or there is already a << operator but it doesn't do what the user // wants). template void PrintTo(const T& value, ::std::ostream* os) { // DefaultPrintTo() is overloaded. The type of its first two // arguments determine which version will be picked. 
If T is an // STL-style container, the version for container will be called; if // T is a pointer, the pointer version will be called; otherwise the // generic version will be called. // // Note that we check for container types here, prior to we check // for protocol message types in our operator<<. The rationale is: // // For protocol messages, we want to give people a chance to // override Google Mock's format by defining a PrintTo() or // operator<<. For STL containers, other formats can be // incompatible with Google Mock's format for the container // elements; therefore we check for container types here to ensure // that our format is used. // // The second argument of DefaultPrintTo() is needed to bypass a bug // in Symbian's C++ compiler that prevents it from picking the right // overload between: // // PrintTo(const T& x, ...); // PrintTo(T* x, ...); DefaultPrintTo(IsContainerTest(0), is_pointer(), value, os); } // The following list of PrintTo() overloads tells // UniversalPrinter::Print() how to print standard types (built-in // types, strings, plain arrays, and pointers). // Overloads for various char types. GTEST_API_ void PrintTo(unsigned char c, ::std::ostream* os); GTEST_API_ void PrintTo(signed char c, ::std::ostream* os); inline void PrintTo(char c, ::std::ostream* os) { // When printing a plain char, we always treat it as unsigned. This // way, the output won't be affected by whether the compiler thinks // char is signed or not. PrintTo(static_cast(c), os); } // Overloads for other simple built-in types. inline void PrintTo(bool x, ::std::ostream* os) { *os << (x ? "true" : "false"); } // Overload for wchar_t type. // Prints a wchar_t as a symbol if it is printable or as its internal // code otherwise and also as its decimal code (except for L'\0'). // The L'\0' char is printed as "L'\\0'". The decimal code is printed // as signed integer when wchar_t is implemented by the compiler // as a signed type and is printed as an unsigned integer when wchar_t // is implemented as an unsigned type. GTEST_API_ void PrintTo(wchar_t wc, ::std::ostream* os); // Overloads for C strings. GTEST_API_ void PrintTo(const char* s, ::std::ostream* os); inline void PrintTo(char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } // signed/unsigned char is often used for representing binary data, so // we print pointers to it as void* to be safe. inline void PrintTo(const signed char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(signed char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(const unsigned char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } inline void PrintTo(unsigned char* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } // MSVC can be configured to define wchar_t as a typedef of unsigned // short. It defines _NATIVE_WCHAR_T_DEFINED when wchar_t is a native // type. When wchar_t is a typedef, defining an overload for const // wchar_t* would cause unsigned short* be printed as a wide string, // possibly causing invalid memory accesses. #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED) // Overloads for wide C strings GTEST_API_ void PrintTo(const wchar_t* s, ::std::ostream* os); inline void PrintTo(wchar_t* s, ::std::ostream* os) { PrintTo(ImplicitCast_(s), os); } #endif // Overload for C arrays. Multi-dimensional arrays are printed // properly. // Prints the given number of elements in an array, without printing // the curly braces. 
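// A sketch of the PrintTo() customization point described above, for cases
// where defining an operator<< is not desirable. 'Widget' is a hypothetical
// type; as above, the block is skipped with #if 0 and assumes an ordinary
// test .cc file.
#if 0
#include <ostream>

#include "gtest/gtest.h"

namespace widgets {

class Widget {
 public:
  explicit Widget(int id) : id_(id) {}
  int id() const { return id_; }

 private:
  int id_;
};

// Found by argument-dependent lookup; being more specific than the generic
// template above, it is picked without requiring an operator<< for Widget.
inline void PrintTo(const Widget& w, ::std::ostream* os) {
  *os << "Widget#" << w.id();
}

}  // namespace widgets

TEST(WidgetPrintTest, UsesPrintToOverload) {
  const widgets::Widget w(7);
  EXPECT_EQ("Widget#7", ::testing::PrintToString(w));
}
#endif  // 0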
template void PrintRawArrayTo(const T a[], size_t count, ::std::ostream* os) { UniversalPrint(a[0], os); for (size_t i = 1; i != count; i++) { *os << ", "; UniversalPrint(a[i], os); } } // Overloads for ::string and ::std::string. #if GTEST_HAS_GLOBAL_STRING GTEST_API_ void PrintStringTo(const ::string&s, ::std::ostream* os); inline void PrintTo(const ::string& s, ::std::ostream* os) { PrintStringTo(s, os); } #endif // GTEST_HAS_GLOBAL_STRING GTEST_API_ void PrintStringTo(const ::std::string&s, ::std::ostream* os); inline void PrintTo(const ::std::string& s, ::std::ostream* os) { PrintStringTo(s, os); } // Overloads for ::wstring and ::std::wstring. #if GTEST_HAS_GLOBAL_WSTRING GTEST_API_ void PrintWideStringTo(const ::wstring&s, ::std::ostream* os); inline void PrintTo(const ::wstring& s, ::std::ostream* os) { PrintWideStringTo(s, os); } #endif // GTEST_HAS_GLOBAL_WSTRING #if GTEST_HAS_STD_WSTRING GTEST_API_ void PrintWideStringTo(const ::std::wstring&s, ::std::ostream* os); inline void PrintTo(const ::std::wstring& s, ::std::ostream* os) { PrintWideStringTo(s, os); } #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_TR1_TUPLE // Overload for ::std::tr1::tuple. Needed for printing function arguments, // which are packed as tuples. // Helper function for printing a tuple. T must be instantiated with // a tuple type. template void PrintTupleTo(const T& t, ::std::ostream* os); // Overloaded PrintTo() for tuples of various arities. We support // tuples of up-to 10 fields. The following implementation works // regardless of whether tr1::tuple is implemented using the // non-standard variadic template feature or not. inline void PrintTo(const ::std::tr1::tuple<>& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo(const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } template void PrintTo( const ::std::tr1::tuple& t, ::std::ostream* os) { PrintTupleTo(t, os); } #endif // GTEST_HAS_TR1_TUPLE // Overload for std::pair. template void PrintTo(const ::std::pair& value, ::std::ostream* os) { *os << '('; // We cannot use UniversalPrint(value.first, os) here, as T1 may be // a reference type. The same for printing value.second. UniversalPrinter::Print(value.first, os); *os << ", "; UniversalPrinter::Print(value.second, os); *os << ')'; } // Implements printing a non-reference type T by letting the compiler // pick the right overload of PrintTo() for T. template class UniversalPrinter { public: // MSVC warns about adding const to a function type, so we want to // disable the warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4180) // Temporarily disables warning 4180. 
#endif // _MSC_VER // Note: we deliberately don't call this PrintTo(), as that name // conflicts with ::testing::internal::PrintTo in the body of the // function. static void Print(const T& value, ::std::ostream* os) { // By default, ::testing::internal::PrintTo() is used for printing // the value. // // Thanks to Koenig look-up, if T is a class and has its own // PrintTo() function defined in its namespace, that function will // be visible here. Since it is more specific than the generic ones // in ::testing::internal, it will be picked by the compiler in the // following statement - exactly what we want. PrintTo(value, os); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif // _MSC_VER }; // UniversalPrintArray(begin, len, os) prints an array of 'len' // elements, starting at address 'begin'. template void UniversalPrintArray(const T* begin, size_t len, ::std::ostream* os) { if (len == 0) { *os << "{}"; } else { *os << "{ "; const size_t kThreshold = 18; const size_t kChunkSize = 8; // If the array has more than kThreshold elements, we'll have to // omit some details by printing only the first and the last // kChunkSize elements. // TODO(wan@google.com): let the user control the threshold using a flag. if (len <= kThreshold) { PrintRawArrayTo(begin, len, os); } else { PrintRawArrayTo(begin, kChunkSize, os); *os << ", ..., "; PrintRawArrayTo(begin + len - kChunkSize, kChunkSize, os); } *os << " }"; } } // This overload prints a (const) char array compactly. GTEST_API_ void UniversalPrintArray( const char* begin, size_t len, ::std::ostream* os); // This overload prints a (const) wchar_t array compactly. GTEST_API_ void UniversalPrintArray( const wchar_t* begin, size_t len, ::std::ostream* os); // Implements printing an array type T[N]. template class UniversalPrinter { public: // Prints the given array, omitting some elements when there are too // many. static void Print(const T (&a)[N], ::std::ostream* os) { UniversalPrintArray(a, N, os); } }; // Implements printing a reference type T&. template class UniversalPrinter { public: // MSVC warns about adding const to a function type, so we want to // disable the warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4180) // Temporarily disables warning 4180. #endif // _MSC_VER static void Print(const T& value, ::std::ostream* os) { // Prints the address of the value. We use reinterpret_cast here // as static_cast doesn't compile when T is a function type. *os << "@" << reinterpret_cast(&value) << " "; // Then prints the value itself. UniversalPrint(value, os); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif // _MSC_VER }; // Prints a value tersely: for a reference type, the referenced value // (but not the address) is printed; for a (const) char pointer, the // NUL-terminated string (but not the pointer) is printed. 
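// A sketch of the abridging behavior implemented by UniversalPrintArray()
// above (kThreshold == 18, kChunkSize == 8). The expected strings follow
// from the code above; the array names are hypothetical.
#if 0
#include "gtest/gtest.h"

TEST(ArrayPrintTest, LongArraysAreAbridged) {
  const int small[3] = { 1, 2, 3 };
  // 3 <= kThreshold, so every element is printed.
  EXPECT_EQ("{ 1, 2, 3 }", ::testing::PrintToString(small));

  const int big[20] = { 0 };
  // 20 > kThreshold (18): only the first and last kChunkSize (8) elements
  // are printed, with "..." marking the omitted middle.
  EXPECT_EQ("{ 0, 0, 0, 0, 0, 0, 0, 0, ..., 0, 0, 0, 0, 0, 0, 0, 0 }",
            ::testing::PrintToString(big));
}
#endif  // 0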
template class UniversalTersePrinter { public: static void Print(const T& value, ::std::ostream* os) { UniversalPrint(value, os); } }; template class UniversalTersePrinter { public: static void Print(const T& value, ::std::ostream* os) { UniversalPrint(value, os); } }; template class UniversalTersePrinter { public: static void Print(const T (&value)[N], ::std::ostream* os) { UniversalPrinter::Print(value, os); } }; template <> class UniversalTersePrinter { public: static void Print(const char* str, ::std::ostream* os) { if (str == NULL) { *os << "NULL"; } else { UniversalPrint(string(str), os); } } }; template <> class UniversalTersePrinter { public: static void Print(char* str, ::std::ostream* os) { UniversalTersePrinter::Print(str, os); } }; #if GTEST_HAS_STD_WSTRING template <> class UniversalTersePrinter { public: static void Print(const wchar_t* str, ::std::ostream* os) { if (str == NULL) { *os << "NULL"; } else { UniversalPrint(::std::wstring(str), os); } } }; #endif template <> class UniversalTersePrinter { public: static void Print(wchar_t* str, ::std::ostream* os) { UniversalTersePrinter::Print(str, os); } }; template void UniversalTersePrint(const T& value, ::std::ostream* os) { UniversalTersePrinter::Print(value, os); } // Prints a value using the type inferred by the compiler. The // difference between this and UniversalTersePrint() is that for a // (const) char pointer, this prints both the pointer and the // NUL-terminated string. template void UniversalPrint(const T& value, ::std::ostream* os) { // A workarond for the bug in VC++ 7.1 that prevents us from instantiating // UniversalPrinter with T directly. typedef T T1; UniversalPrinter::Print(value, os); } #if GTEST_HAS_TR1_TUPLE typedef ::std::vector Strings; // This helper template allows PrintTo() for tuples and // UniversalTersePrintTupleFieldsToStrings() to be defined by // induction on the number of tuple fields. The idea is that // TuplePrefixPrinter::PrintPrefixTo(t, os) prints the first N // fields in tuple t, and can be defined in terms of // TuplePrefixPrinter. // The inductive case. template struct TuplePrefixPrinter { // Prints the first N fields of a tuple. template static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) { TuplePrefixPrinter::PrintPrefixTo(t, os); *os << ", "; UniversalPrinter::type> ::Print(::std::tr1::get(t), os); } // Tersely prints the first N fields of a tuple to a string vector, // one element for each field. template static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) { TuplePrefixPrinter::TersePrintPrefixToStrings(t, strings); ::std::stringstream ss; UniversalTersePrint(::std::tr1::get(t), &ss); strings->push_back(ss.str()); } }; // Base cases. template <> struct TuplePrefixPrinter<0> { template static void PrintPrefixTo(const Tuple&, ::std::ostream*) {} template static void TersePrintPrefixToStrings(const Tuple&, Strings*) {} }; // We have to specialize the entire TuplePrefixPrinter<> class // template here, even though the definition of // TersePrintPrefixToStrings() is the same as the generic version, as // Embarcadero (formerly CodeGear, formerly Borland) C++ doesn't // support specializing a method template of a class template. 
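// A sketch of what the terse printer (used by PrintToString(), defined at
// the end of this header) produces for a std::pair and for a C string, based
// on the PrintTo(std::pair) overload and the char-pointer specializations
// above. The exact quoted-string form is assumed from gtest's usual string
// printer.
#if 0
#include <utility>

#include "gtest/gtest.h"

TEST(TersePrintTest, PairsAndCStringsPrintReadably) {
  // std::pair goes through the PrintTo(std::pair) overload above:
  // "(first, second)".
  EXPECT_EQ("(1, 2)", ::testing::PrintToString(::std::make_pair(1, 2)));

  // For a char pointer the terse printer prints the characters pointed to
  // (quoted), not the pointer value itself.
  const char* greeting = "hi";
  EXPECT_EQ("\"hi\"", ::testing::PrintToString(greeting));
}
#endif  // 0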
template <> struct TuplePrefixPrinter<1> { template static void PrintPrefixTo(const Tuple& t, ::std::ostream* os) { UniversalPrinter::type>:: Print(::std::tr1::get<0>(t), os); } template static void TersePrintPrefixToStrings(const Tuple& t, Strings* strings) { ::std::stringstream ss; UniversalTersePrint(::std::tr1::get<0>(t), &ss); strings->push_back(ss.str()); } }; // Helper function for printing a tuple. T must be instantiated with // a tuple type. template void PrintTupleTo(const T& t, ::std::ostream* os) { *os << "("; TuplePrefixPrinter< ::std::tr1::tuple_size::value>:: PrintPrefixTo(t, os); *os << ")"; } // Prints the fields of a tuple tersely to a string vector, one // element for each field. See the comment before // UniversalTersePrint() for how we define "tersely". template Strings UniversalTersePrintTupleFieldsToStrings(const Tuple& value) { Strings result; TuplePrefixPrinter< ::std::tr1::tuple_size::value>:: TersePrintPrefixToStrings(value, &result); return result; } #endif // GTEST_HAS_TR1_TUPLE } // namespace internal template ::std::string PrintToString(const T& value) { ::std::stringstream ss; internal::UniversalTersePrinter::Print(value, &ss); return ss.str(); } } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_PRINTERS_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-spi.h000066400000000000000000000233401346026536000231360ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Utilities for testing Google Test itself and code that uses Google Test // (e.g. frameworks built on top of Google Test). #ifndef GTEST_INCLUDE_GTEST_GTEST_SPI_H_ #define GTEST_INCLUDE_GTEST_GTEST_SPI_H_ #include "gtest/gtest.h" namespace testing { // This helper class can be used to mock out Google Test failure reporting // so that we can test Google Test or code that builds on Google Test. 
// // An object of this class appends a TestPartResult object to the // TestPartResultArray object given in the constructor whenever a Google Test // failure is reported. It can either intercept only failures that are // generated in the same thread that created this object or it can intercept // all generated failures. The scope of this mock object can be controlled with // the second argument to the two arguments constructor. class GTEST_API_ ScopedFakeTestPartResultReporter : public TestPartResultReporterInterface { public: // The two possible mocking modes of this object. enum InterceptMode { INTERCEPT_ONLY_CURRENT_THREAD, // Intercepts only thread local failures. INTERCEPT_ALL_THREADS // Intercepts all failures. }; // The c'tor sets this object as the test part result reporter used // by Google Test. The 'result' parameter specifies where to report the // results. This reporter will only catch failures generated in the current // thread. DEPRECATED explicit ScopedFakeTestPartResultReporter(TestPartResultArray* result); // Same as above, but you can choose the interception scope of this object. ScopedFakeTestPartResultReporter(InterceptMode intercept_mode, TestPartResultArray* result); // The d'tor restores the previous test part result reporter. virtual ~ScopedFakeTestPartResultReporter(); // Appends the TestPartResult object to the TestPartResultArray // received in the constructor. // // This method is from the TestPartResultReporterInterface // interface. virtual void ReportTestPartResult(const TestPartResult& result); private: void Init(); const InterceptMode intercept_mode_; TestPartResultReporterInterface* old_reporter_; TestPartResultArray* const result_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedFakeTestPartResultReporter); }; namespace internal { // A helper class for implementing EXPECT_FATAL_FAILURE() and // EXPECT_NONFATAL_FAILURE(). Its destructor verifies that the given // TestPartResultArray contains exactly one failure that has the given // type and contains the given substring. If that's not the case, a // non-fatal failure will be generated. class GTEST_API_ SingleFailureChecker { public: // The constructor remembers the arguments. SingleFailureChecker(const TestPartResultArray* results, TestPartResult::Type type, const string& substr); ~SingleFailureChecker(); private: const TestPartResultArray* const results_; const TestPartResult::Type type_; const string substr_; GTEST_DISALLOW_COPY_AND_ASSIGN_(SingleFailureChecker); }; } // namespace internal } // namespace testing // A set of macros for testing Google Test assertions or code that's expected // to generate Google Test fatal failures. It verifies that the given // statement will cause exactly one fatal Google Test failure with 'substr' // being part of the failure message. // // There are two different versions of this macro. EXPECT_FATAL_FAILURE only // affects and considers failures generated in the current thread and // EXPECT_FATAL_FAILURE_ON_ALL_THREADS does the same but for all threads. // // The verification of the assertion is done correctly even when the statement // throws an exception or aborts the current function. // // Known restrictions: // - 'statement' cannot reference local non-static variables or // non-static members of the current object. // - 'statement' cannot return a value. // - You cannot stream a failure message to this macro. 
// // Note that even though the implementations of the following two // macros are much alike, we cannot refactor them to use a common // helper macro, due to some peculiarity in how the preprocessor // works. The AcceptsMacroThatExpandsToUnprotectedComma test in // gtest_unittest.cc will fail to compile if we do that. #define EXPECT_FATAL_FAILURE(statement, substr) \ do { \ class GTestExpectFatalFailureHelper {\ public:\ static void Execute() { statement; }\ };\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kFatalFailure, (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ONLY_CURRENT_THREAD, >est_failures);\ GTestExpectFatalFailureHelper::Execute();\ }\ } while (::testing::internal::AlwaysFalse()) #define EXPECT_FATAL_FAILURE_ON_ALL_THREADS(statement, substr) \ do { \ class GTestExpectFatalFailureHelper {\ public:\ static void Execute() { statement; }\ };\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kFatalFailure, (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ALL_THREADS, >est_failures);\ GTestExpectFatalFailureHelper::Execute();\ }\ } while (::testing::internal::AlwaysFalse()) // A macro for testing Google Test assertions or code that's expected to // generate Google Test non-fatal failures. It asserts that the given // statement will cause exactly one non-fatal Google Test failure with 'substr' // being part of the failure message. // // There are two different versions of this macro. EXPECT_NONFATAL_FAILURE only // affects and considers failures generated in the current thread and // EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS does the same but for all threads. // // 'statement' is allowed to reference local variables and members of // the current object. // // The verification of the assertion is done correctly even when the statement // throws an exception or aborts the current function. // // Known restrictions: // - You cannot stream a failure message to this macro. // // Note that even though the implementations of the following two // macros are much alike, we cannot refactor them to use a common // helper macro, due to some peculiarity in how the preprocessor // works. If we do that, the code won't compile when the user gives // EXPECT_NONFATAL_FAILURE() a statement that contains a macro that // expands to code containing an unprotected comma. The // AcceptsMacroThatExpandsToUnprotectedComma test in gtest_unittest.cc // catches that. // // For the same reason, we have to write // if (::testing::internal::AlwaysTrue()) { statement; } // instead of // GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) // to avoid an MSVC warning on unreachable code. 
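// A sketch of typical usage of the two macro families documented above. The
// helper function, test names, and messages are hypothetical; the fatal
// variant's statement cannot touch locals, so the failure is raised in a
// free function.
#if 0
#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

// EXPECT_FATAL_FAILURE's statement cannot reference local non-static
// variables, so the fatal failure is produced here instead.
static void FailsFatally() {
  FAIL() << "expected fatal failure";
}

TEST(FailureSpiTest, InterceptsExpectedFailures) {
  // Passes iff the statement raises exactly one fatal failure whose message
  // contains the given substring.
  EXPECT_FATAL_FAILURE(FailsFatally(), "expected fatal failure");

  // The non-fatal variant may reference locals and members directly.
  EXPECT_NONFATAL_FAILURE(ADD_FAILURE() << "expected non-fatal failure",
                          "expected non-fatal failure");
}
#endif  // 0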
#define EXPECT_NONFATAL_FAILURE(statement, substr) \ do {\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kNonFatalFailure, \ (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter:: \ INTERCEPT_ONLY_CURRENT_THREAD, >est_failures);\ if (::testing::internal::AlwaysTrue()) { statement; }\ }\ } while (::testing::internal::AlwaysFalse()) #define EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(statement, substr) \ do {\ ::testing::TestPartResultArray gtest_failures;\ ::testing::internal::SingleFailureChecker gtest_checker(\ >est_failures, ::testing::TestPartResult::kNonFatalFailure, \ (substr));\ {\ ::testing::ScopedFakeTestPartResultReporter gtest_reporter(\ ::testing::ScopedFakeTestPartResultReporter::INTERCEPT_ALL_THREADS, \ >est_failures);\ if (::testing::internal::AlwaysTrue()) { statement; }\ }\ } while (::testing::internal::AlwaysFalse()) #endif // GTEST_INCLUDE_GTEST_GTEST_SPI_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-test-part.h000066400000000000000000000145551346026536000242760ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // #ifndef GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ #define GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ #include #include #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-string.h" namespace testing { // A copyable object representing the result of a test part (i.e. an // assertion or an explicit FAIL(), ADD_FAILURE(), or SUCCESS()). // // Don't inherit from TestPartResult as its destructor is not virtual. class GTEST_API_ TestPartResult { public: // The possible outcomes of a test part (i.e. an assertion or an // explicit SUCCEED(), FAIL(), or ADD_FAILURE()). enum Type { kSuccess, // Succeeded. kNonFatalFailure, // Failed but the test can continue. kFatalFailure // Failed and the test should be terminated. }; // C'tor. 
TestPartResult does NOT have a default constructor. // Always use this constructor (with parameters) to create a // TestPartResult object. TestPartResult(Type a_type, const char* a_file_name, int a_line_number, const char* a_message) : type_(a_type), file_name_(a_file_name == NULL ? "" : a_file_name), line_number_(a_line_number), summary_(ExtractSummary(a_message)), message_(a_message) { } // Gets the outcome of the test part. Type type() const { return type_; } // Gets the name of the source file where the test part took place, or // NULL if it's unknown. const char* file_name() const { return file_name_.empty() ? NULL : file_name_.c_str(); } // Gets the line in the source file where the test part took place, // or -1 if it's unknown. int line_number() const { return line_number_; } // Gets the summary of the failure message. const char* summary() const { return summary_.c_str(); } // Gets the message associated with the test part. const char* message() const { return message_.c_str(); } // Returns true iff the test part passed. bool passed() const { return type_ == kSuccess; } // Returns true iff the test part failed. bool failed() const { return type_ != kSuccess; } // Returns true iff the test part non-fatally failed. bool nonfatally_failed() const { return type_ == kNonFatalFailure; } // Returns true iff the test part fatally failed. bool fatally_failed() const { return type_ == kFatalFailure; } private: Type type_; // Gets the summary of the failure message by omitting the stack // trace in it. static std::string ExtractSummary(const char* message); // The name of the source file where the test part took place, or // "" if the source file is unknown. std::string file_name_; // The line in the source file where the test part took place, or -1 // if the line number is unknown. int line_number_; std::string summary_; // The test failure summary. std::string message_; // The test failure message. }; // Prints a TestPartResult object. std::ostream& operator<<(std::ostream& os, const TestPartResult& result); // An array of TestPartResult objects. // // Don't inherit from TestPartResultArray as its destructor is not // virtual. class GTEST_API_ TestPartResultArray { public: TestPartResultArray() {} // Appends the given TestPartResult to the array. void Append(const TestPartResult& result); // Returns the TestPartResult at the given index (0-based). const TestPartResult& GetTestPartResult(int index) const; // Returns the number of TestPartResult objects in the array. int size() const; private: std::vector array_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestPartResultArray); }; // This interface knows how to report a test part result. class TestPartResultReporterInterface { public: virtual ~TestPartResultReporterInterface() {} virtual void ReportTestPartResult(const TestPartResult& result) = 0; }; namespace internal { // This helper class is used by {ASSERT|EXPECT}_NO_FATAL_FAILURE to check if a // statement generates new fatal failures. To do so it registers itself as the // current test part result reporter. Besides checking if fatal failures were // reported, it only delegates the reporting to the former result reporter. // The original result reporter is restored in the destructor. // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. 
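// A sketch combining ScopedFakeTestPartResultReporter (from gtest-spi.h
// above) with the TestPartResultArray accessors declared here, to inspect an
// intercepted failure directly. Test name and message are hypothetical.
#if 0
#include <cstring>

#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

TEST(TestPartResultArrayTest, CollectsInterceptedResults) {
  ::testing::TestPartResultArray results;
  {
    // While the reporter is alive, failures raised in this thread are
    // appended to 'results' instead of failing the enclosing test.
    ::testing::ScopedFakeTestPartResultReporter reporter(
        ::testing::ScopedFakeTestPartResultReporter::
            INTERCEPT_ONLY_CURRENT_THREAD,
        &results);
    ADD_FAILURE() << "captured failure";
  }
  ASSERT_EQ(1, results.size());
  const ::testing::TestPartResult& r = results.GetTestPartResult(0);
  EXPECT_TRUE(r.nonfatally_failed());
  EXPECT_TRUE(::std::strstr(r.message(), "captured failure") != NULL);
}
#endif  // 0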
class GTEST_API_ HasNewFatalFailureHelper : public TestPartResultReporterInterface { public: HasNewFatalFailureHelper(); virtual ~HasNewFatalFailureHelper(); virtual void ReportTestPartResult(const TestPartResult& result); bool has_new_fatal_failure() const { return has_new_fatal_failure_; } private: bool has_new_fatal_failure_; TestPartResultReporterInterface* original_reporter_; GTEST_DISALLOW_COPY_AND_ASSIGN_(HasNewFatalFailureHelper); }; } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_GTEST_TEST_PART_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest-typed-test.h000066400000000000000000000240021346026536000244410ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ // This header implements typed tests and type-parameterized tests. // Typed (aka type-driven) tests repeat the same test for types in a // list. You must know which types you want to test with when writing // typed tests. Here's how you do it: #if 0 // First, define a fixture class template. It should be parameterized // by a type. Remember to derive it from testing::Test. template class FooTest : public testing::Test { public: ... typedef std::list List; static T shared_; T value_; }; // Next, associate a list of types with the test case, which will be // repeated for each type in the list. The typedef is necessary for // the macro to parse correctly. typedef testing::Types MyTypes; TYPED_TEST_CASE(FooTest, MyTypes); // If the type list contains only one type, you can write that type // directly without Types<...>: // TYPED_TEST_CASE(FooTest, int); // Then, use TYPED_TEST() instead of TEST_F() to define as many typed // tests for this test case as you want. TYPED_TEST(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. 
// Since we are inside a derived class template, C++ requires use to // visit the members of FooTest via 'this'. TypeParam n = this->value_; // To visit static members of the fixture, add the TestFixture:: // prefix. n += TestFixture::shared_; // To refer to typedefs in the fixture, add the "typename // TestFixture::" prefix. typename TestFixture::List values; values.push_back(n); ... } TYPED_TEST(FooTest, HasPropertyA) { ... } #endif // 0 // Type-parameterized tests are abstract test patterns parameterized // by a type. Compared with typed tests, type-parameterized tests // allow you to define the test pattern without knowing what the type // parameters are. The defined pattern can be instantiated with // different types any number of times, in any number of translation // units. // // If you are designing an interface or concept, you can define a // suite of type-parameterized tests to verify properties that any // valid implementation of the interface/concept should have. Then, // each implementation can easily instantiate the test suite to verify // that it conforms to the requirements, without having to write // similar tests repeatedly. Here's an example: #if 0 // First, define a fixture class template. It should be parameterized // by a type. Remember to derive it from testing::Test. template class FooTest : public testing::Test { ... }; // Next, declare that you will define a type-parameterized test case // (the _P suffix is for "parameterized" or "pattern", whichever you // prefer): TYPED_TEST_CASE_P(FooTest); // Then, use TYPED_TEST_P() to define as many type-parameterized tests // for this type-parameterized test case as you want. TYPED_TEST_P(FooTest, DoesBlah) { // Inside a test, refer to TypeParam to get the type parameter. TypeParam n = 0; ... } TYPED_TEST_P(FooTest, HasPropertyA) { ... } // Now the tricky part: you need to register all test patterns before // you can instantiate them. The first argument of the macro is the // test case name; the rest are the names of the tests in this test // case. REGISTER_TYPED_TEST_CASE_P(FooTest, DoesBlah, HasPropertyA); // Finally, you are free to instantiate the pattern with the types you // want. If you put the above code in a header file, you can #include // it in multiple C++ source files and instantiate it multiple times. // // To distinguish different instances of the pattern, the first // argument to the INSTANTIATE_* macro is a prefix that will be added // to the actual test case name. Remember to pick unique prefixes for // different instances. typedef testing::Types MyTypes; INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, MyTypes); // If the type list contains only one type, you can write that type // directly without Types<...>: // INSTANTIATE_TYPED_TEST_CASE_P(My, FooTest, int); #endif // 0 #include "gtest/internal/gtest-port.h" #include "gtest/internal/gtest-type-util.h" // Implements typed tests. #if GTEST_HAS_TYPED_TEST // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the name of the typedef for the type parameters of the // given test case. # define GTEST_TYPE_PARAMS_(TestCaseName) gtest_type_params_##TestCaseName##_ // The 'Types' template argument below must have spaces around it // since some compilers may choke on '>>' when passing a template // instance (e.g. 
Types) # define TYPED_TEST_CASE(CaseName, Types) \ typedef ::testing::internal::TypeList< Types >::type \ GTEST_TYPE_PARAMS_(CaseName) # define TYPED_TEST(CaseName, TestName) \ template \ class GTEST_TEST_CLASS_NAME_(CaseName, TestName) \ : public CaseName { \ private: \ typedef CaseName TestFixture; \ typedef gtest_TypeParam_ TypeParam; \ virtual void TestBody(); \ }; \ bool gtest_##CaseName##_##TestName##_registered_ GTEST_ATTRIBUTE_UNUSED_ = \ ::testing::internal::TypeParameterizedTest< \ CaseName, \ ::testing::internal::TemplateSel< \ GTEST_TEST_CLASS_NAME_(CaseName, TestName)>, \ GTEST_TYPE_PARAMS_(CaseName)>::Register(\ "", #CaseName, #TestName, 0); \ template \ void GTEST_TEST_CLASS_NAME_(CaseName, TestName)::TestBody() #endif // GTEST_HAS_TYPED_TEST // Implements type-parameterized tests. #if GTEST_HAS_TYPED_TEST_P // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the namespace name that the type-parameterized tests for // the given type-parameterized test case are defined in. The exact // name of the namespace is subject to change without notice. # define GTEST_CASE_NAMESPACE_(TestCaseName) \ gtest_case_##TestCaseName##_ // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Expands to the name of the variable used to remember the names of // the defined tests in the given test case. # define GTEST_TYPED_TEST_CASE_P_STATE_(TestCaseName) \ gtest_typed_test_case_p_state_##TestCaseName##_ // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE DIRECTLY. // // Expands to the name of the variable used to remember the names of // the registered tests in the given test case. # define GTEST_REGISTERED_TEST_NAMES_(TestCaseName) \ gtest_registered_test_names_##TestCaseName##_ // The variables defined in the type-parameterized test macros are // static as typically these macros are used in a .h file that can be // #included in multiple translation units linked together. # define TYPED_TEST_CASE_P(CaseName) \ static ::testing::internal::TypedTestCasePState \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName) # define TYPED_TEST_P(CaseName, TestName) \ namespace GTEST_CASE_NAMESPACE_(CaseName) { \ template \ class TestName : public CaseName { \ private: \ typedef CaseName TestFixture; \ typedef gtest_TypeParam_ TypeParam; \ virtual void TestBody(); \ }; \ static bool gtest_##TestName##_defined_ GTEST_ATTRIBUTE_UNUSED_ = \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).AddTestName(\ __FILE__, __LINE__, #CaseName, #TestName); \ } \ template \ void GTEST_CASE_NAMESPACE_(CaseName)::TestName::TestBody() # define REGISTER_TYPED_TEST_CASE_P(CaseName, ...) \ namespace GTEST_CASE_NAMESPACE_(CaseName) { \ typedef ::testing::internal::Templates<__VA_ARGS__>::type gtest_AllTests_; \ } \ static const char* const GTEST_REGISTERED_TEST_NAMES_(CaseName) = \ GTEST_TYPED_TEST_CASE_P_STATE_(CaseName).VerifyRegisteredTestNames(\ __FILE__, __LINE__, #__VA_ARGS__) // The 'Types' template argument below must have spaces around it // since some compilers may choke on '>>' when passing a template // instance (e.g. 
Types) # define INSTANTIATE_TYPED_TEST_CASE_P(Prefix, CaseName, Types) \ bool gtest_##Prefix##_##CaseName GTEST_ATTRIBUTE_UNUSED_ = \ ::testing::internal::TypeParameterizedTestCase::type>::Register(\ #Prefix, #CaseName, GTEST_REGISTERED_TEST_NAMES_(CaseName)) #endif // GTEST_HAS_TYPED_TEST_P #endif // GTEST_INCLUDE_GTEST_GTEST_TYPED_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest.h000066400000000000000000002545621346026536000223610ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) // // This header file defines the public API for Google Test. It should be // included by any test program that uses Google Test. // // IMPORTANT NOTE: Due to limitation of the C++ language, we have to // leave some internal implementation details in this header file. // They are clearly marked by comments like this: // // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // // Such code is NOT meant to be used by a user directly, and is subject // to CHANGE WITHOUT NOTICE. Therefore DO NOT DEPEND ON IT in a user // program! // // Acknowledgment: Google Test borrowed the idea of automatic test // registration from Barthelemy Dagenais' (barthelemy@prologique.com) // easyUnit framework. #ifndef GTEST_INCLUDE_GTEST_GTEST_H_ #define GTEST_INCLUDE_GTEST_GTEST_H_ #include #include #include #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-string.h" #include "gtest/gtest-death-test.h" #include "gtest/gtest-message.h" #include "gtest/gtest-param-test.h" #include "gtest/gtest-printers.h" #include "gtest/gtest_prod.h" #include "gtest/gtest-test-part.h" #include "gtest/gtest-typed-test.h" // Depending on the platform, different string classes are available. // On Linux, in addition to ::std::string, Google also makes use of // class ::string, which has the same interface as ::std::string, but // has a different implementation. 
// // The user can define GTEST_HAS_GLOBAL_STRING to 1 to indicate that // ::string is available AND is a distinct type to ::std::string, or // define it to 0 to indicate otherwise. // // If the user's ::std::string and ::string are the same class due to // aliasing, he should define GTEST_HAS_GLOBAL_STRING to 0. // // If the user doesn't define GTEST_HAS_GLOBAL_STRING, it is defined // heuristically. namespace testing { // Declares the flags. // This flag temporary enables the disabled tests. GTEST_DECLARE_bool_(also_run_disabled_tests); // This flag brings the debugger on an assertion failure. GTEST_DECLARE_bool_(break_on_failure); // This flag controls whether Google Test catches all test-thrown exceptions // and logs them as failures. GTEST_DECLARE_bool_(catch_exceptions); // This flag enables using colors in terminal output. Available values are // "yes" to enable colors, "no" (disable colors), or "auto" (the default) // to let Google Test decide. GTEST_DECLARE_string_(color); // This flag sets up the filter to select by name using a glob pattern // the tests to run. If the filter is not given all tests are executed. GTEST_DECLARE_string_(filter); // This flag causes the Google Test to list tests. None of the tests listed // are actually run if the flag is provided. GTEST_DECLARE_bool_(list_tests); // This flag controls whether Google Test emits a detailed XML report to a file // in addition to its normal textual output. GTEST_DECLARE_string_(output); // This flags control whether Google Test prints the elapsed time for each // test. GTEST_DECLARE_bool_(print_time); // This flag specifies the random number seed. GTEST_DECLARE_int32_(random_seed); // This flag sets how many times the tests are repeated. The default value // is 1. If the value is -1 the tests are repeating forever. GTEST_DECLARE_int32_(repeat); // This flag controls whether Google Test includes Google Test internal // stack frames in failure stack traces. GTEST_DECLARE_bool_(show_internal_stack_frames); // When this flag is specified, tests' order is randomized on every iteration. GTEST_DECLARE_bool_(shuffle); // This flag specifies the maximum number of stack frames to be // printed in a failure message. GTEST_DECLARE_int32_(stack_trace_depth); // When this flag is specified, a failed assertion will throw an // exception if exceptions are enabled, or exit the program with a // non-zero code otherwise. GTEST_DECLARE_bool_(throw_on_failure); // When this flag is set with a "host:port" string, on supported // platforms test results are streamed to the specified port on // the specified host machine. GTEST_DECLARE_string_(stream_result_to); // The upper limit for valid stack trace depths. const int kMaxStackTraceDepth = 100; namespace internal { class AssertHelper; class DefaultGlobalTestPartResultReporter; class ExecDeathTest; class NoExecDeathTest; class FinalSuccessChecker; class GTestFlagSaver; class StreamingListenerTest; class TestResultAccessor; class TestEventListenersAccessor; class TestEventRepeater; class UnitTestRecordPropertyTestHelper; class WindowsDeathTest; class UnitTestImpl* GetUnitTestImpl(); void ReportFailureInUnknownLocation(TestPartResult::Type result_type, const std::string& message); } // namespace internal // The friend relationship of some of these classes is cyclic. // If we don't forward declare them the compiler might confuse the classes // in friendship clauses with same named classes on the scope. 
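// A sketch of a test runner's main() that sets a few of the flags declared
// above programmatically via GTEST_FLAG() before handing the command line to
// InitGoogleTest() (declared further down in this header). The flag values
// shown are hypothetical.
#if 0
#include "gtest/gtest.h"

int main(int argc, char** argv) {
  // Programmatic defaults for a few of the flags declared above. Options
  // given on the command line (e.g. --gtest_filter, --gtest_repeat) still
  // win, because InitGoogleTest() parses the command line afterwards.
  ::testing::GTEST_FLAG(filter) = "Widget*";
  ::testing::GTEST_FLAG(shuffle) = true;
  ::testing::GTEST_FLAG(repeat) = 2;

  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
#endif  // 0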
class Test; class TestCase; class TestInfo; class UnitTest; // A class for indicating whether an assertion was successful. When // the assertion wasn't successful, the AssertionResult object // remembers a non-empty message that describes how it failed. // // To create an instance of this class, use one of the factory functions // (AssertionSuccess() and AssertionFailure()). // // This class is useful for two purposes: // 1. Defining predicate functions to be used with Boolean test assertions // EXPECT_TRUE/EXPECT_FALSE and their ASSERT_ counterparts // 2. Defining predicate-format functions to be // used with predicate assertions (ASSERT_PRED_FORMAT*, etc). // // For example, if you define IsEven predicate: // // testing::AssertionResult IsEven(int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess(); // else // return testing::AssertionFailure() << n << " is odd"; // } // // Then the failed expectation EXPECT_TRUE(IsEven(Fib(5))) // will print the message // // Value of: IsEven(Fib(5)) // Actual: false (5 is odd) // Expected: true // // instead of a more opaque // // Value of: IsEven(Fib(5)) // Actual: false // Expected: true // // in case IsEven is a simple Boolean predicate. // // If you expect your predicate to be reused and want to support informative // messages in EXPECT_FALSE and ASSERT_FALSE (negative assertions show up // about half as often as positive ones in our tests), supply messages for // both success and failure cases: // // testing::AssertionResult IsEven(int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess() << n << " is even"; // else // return testing::AssertionFailure() << n << " is odd"; // } // // Then a statement EXPECT_FALSE(IsEven(Fib(6))) will print // // Value of: IsEven(Fib(6)) // Actual: true (8 is even) // Expected: false // // NB: Predicates that support negative Boolean assertions have reduced // performance in positive ones so be careful not to use them in tests // that have lots (tens of thousands) of positive Boolean assertions. // // To use this class with EXPECT_PRED_FORMAT assertions such as: // // // Verifies that Foo() returns an even number. // EXPECT_PRED_FORMAT1(IsEven, Foo()); // // you need to define: // // testing::AssertionResult IsEven(const char* expr, int n) { // if ((n % 2) == 0) // return testing::AssertionSuccess(); // else // return testing::AssertionFailure() // << "Expected: " << expr << " is even\n Actual: it's " << n; // } // // If Foo() returns 5, you will see the following message: // // Expected: Foo() is even // Actual: it's 5 // class GTEST_API_ AssertionResult { public: // Copy constructor. // Used in EXPECT_TRUE/FALSE(assertion_result). AssertionResult(const AssertionResult& other); // Used in the EXPECT_TRUE/FALSE(bool_expression). explicit AssertionResult(bool success) : success_(success) {} // Returns true iff the assertion succeeded. operator bool() const { return success_; } // NOLINT // Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE. AssertionResult operator!() const; // Returns the text streamed into this AssertionResult. Test assertions // use it when they fail (i.e., the predicate's outcome doesn't match the // assertion's expectation). When nothing has been streamed into the // object, returns an empty string. const char* message() const { return message_.get() != NULL ? message_->c_str() : ""; } // TODO(vladl@google.com): Remove this after making sure no clients use it. // Deprecated; please use message() instead. 
const char* failure_message() const { return message(); } // Streams a custom failure message into this object. template AssertionResult& operator<<(const T& value) { AppendMessage(Message() << value); return *this; } // Allows streaming basic output manipulators such as endl or flush into // this object. AssertionResult& operator<<( ::std::ostream& (*basic_manipulator)(::std::ostream& stream)) { AppendMessage(Message() << basic_manipulator); return *this; } private: // Appends the contents of message to message_. void AppendMessage(const Message& a_message) { if (message_.get() == NULL) message_.reset(new ::std::string); message_->append(a_message.GetString().c_str()); } // Stores result of the assertion predicate. bool success_; // Stores the message describing the condition in case the expectation // construct is not satisfied with the predicate's outcome. // Referenced via a pointer to avoid taking too much stack frame space // with test assertions. internal::scoped_ptr< ::std::string> message_; GTEST_DISALLOW_ASSIGN_(AssertionResult); }; // Makes a successful assertion result. GTEST_API_ AssertionResult AssertionSuccess(); // Makes a failed assertion result. GTEST_API_ AssertionResult AssertionFailure(); // Makes a failed assertion result with the given failure message. // Deprecated; use AssertionFailure() << msg. GTEST_API_ AssertionResult AssertionFailure(const Message& msg); // The abstract class that all tests inherit from. // // In Google Test, a unit test program contains one or many TestCases, and // each TestCase contains one or many Tests. // // When you define a test using the TEST macro, you don't need to // explicitly derive from Test - the TEST macro automatically does // this for you. // // The only time you derive from Test is when defining a test fixture // to be used a TEST_F. For example: // // class FooTest : public testing::Test { // protected: // virtual void SetUp() { ... } // virtual void TearDown() { ... } // ... // }; // // TEST_F(FooTest, Bar) { ... } // TEST_F(FooTest, Baz) { ... } // // Test is not copyable. class GTEST_API_ Test { public: friend class TestInfo; // Defines types for pointers to functions that set up and tear down // a test case. typedef internal::SetUpTestCaseFunc SetUpTestCaseFunc; typedef internal::TearDownTestCaseFunc TearDownTestCaseFunc; // The d'tor is virtual as we intend to inherit from Test. virtual ~Test(); // Sets up the stuff shared by all tests in this test case. // // Google Test will call Foo::SetUpTestCase() before running the first // test in test case Foo. Hence a sub-class can define its own // SetUpTestCase() method to shadow the one defined in the super // class. static void SetUpTestCase() {} // Tears down the stuff shared by all tests in this test case. // // Google Test will call Foo::TearDownTestCase() after running the last // test in test case Foo. Hence a sub-class can define its own // TearDownTestCase() method to shadow the one defined in the super // class. static void TearDownTestCase() {} // Returns true iff the current test has a fatal failure. static bool HasFatalFailure(); // Returns true iff the current test has a non-fatal failure. static bool HasNonfatalFailure(); // Returns true iff the current test has a (either fatal or // non-fatal) failure. static bool HasFailure() { return HasFatalFailure() || HasNonfatalFailure(); } // Logs a property for the current test, test case, or for the entire // invocation of the test program when used outside of the context of a // test case. 
Only the last value for a given key is remembered. These // are public static so they can be called from utility functions that are // not members of the test fixture. Calls to RecordProperty made during // lifespan of the test (from the moment its constructor starts to the // moment its destructor finishes) will be output in XML as attributes of // the element. Properties recorded from fixture's // SetUpTestCase or TearDownTestCase are logged as attributes of the // corresponding element. Calls to RecordProperty made in the // global context (before or after invocation of RUN_ALL_TESTS and from // SetUp/TearDown method of Environment objects registered with Google // Test) will be output as attributes of the element. static void RecordProperty(const std::string& key, const std::string& value); static void RecordProperty(const std::string& key, int value); protected: // Creates a Test object. Test(); // Sets up the test fixture. virtual void SetUp(); // Tears down the test fixture. virtual void TearDown(); private: // Returns true iff the current test has the same fixture class as // the first test in the current test case. static bool HasSameFixtureClass(); // Runs the test after the test fixture has been set up. // // A sub-class must implement this to define the test logic. // // DO NOT OVERRIDE THIS FUNCTION DIRECTLY IN A USER PROGRAM. // Instead, use the TEST or TEST_F macro. virtual void TestBody() = 0; // Sets up, executes, and tears down the test. void Run(); // Deletes self. We deliberately pick an unusual name for this // internal method to avoid clashing with names used in user TESTs. void DeleteSelf_() { delete this; } // Uses a GTestFlagSaver to save and restore all Google Test flags. const internal::GTestFlagSaver* const gtest_flag_saver_; // Often a user mis-spells SetUp() as Setup() and spends a long time // wondering why it is never called by Google Test. The declaration of // the following method is solely for catching such an error at // compile time: // // - The return type is deliberately chosen to be not void, so it // will be a conflict if a user declares void Setup() in his test // fixture. // // - This method is private, so it will be another compiler error // if a user calls it from his test fixture. // // DO NOT OVERRIDE THIS FUNCTION. // // If you see an error about overriding the following function or // about it being private, you have mis-spelled SetUp() as Setup(). struct Setup_should_be_spelled_SetUp {}; virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; } // We disallow copying Tests. GTEST_DISALLOW_COPY_AND_ASSIGN_(Test); }; typedef internal::TimeInMillis TimeInMillis; // A copyable object representing a user specified test property which can be // output as a key/value string pair. // // Don't inherit from TestProperty as its destructor is not virtual. class TestProperty { public: // C'tor. TestProperty does NOT have a default constructor. // Always use this constructor (with parameters) to create a // TestProperty object. TestProperty(const std::string& a_key, const std::string& a_value) : key_(a_key), value_(a_value) { } // Gets the user supplied key. const char* key() const { return key_.c_str(); } // Gets the user supplied value. const char* value() const { return value_.c_str(); } // Sets a new value, overriding the one supplied in the constructor. void SetValue(const std::string& new_value) { value_ = new_value; } private: // The key supplied by the user. std::string key_; // The value supplied by the user. 
std::string value_; }; // The result of a single Test. This includes a list of // TestPartResults, a list of TestProperties, a count of how many // death tests there are in the Test, and how much time it took to run // the Test. // // TestResult is not copyable. class GTEST_API_ TestResult { public: // Creates an empty TestResult. TestResult(); // D'tor. Do not inherit from TestResult. ~TestResult(); // Gets the number of all test parts. This is the sum of the number // of successful test parts and the number of failed test parts. int total_part_count() const; // Returns the number of the test properties. int test_property_count() const; // Returns true iff the test passed (i.e. no test part failed). bool Passed() const { return !Failed(); } // Returns true iff the test failed. bool Failed() const; // Returns true iff the test fatally failed. bool HasFatalFailure() const; // Returns true iff the test has a non-fatal failure. bool HasNonfatalFailure() const; // Returns the elapsed time, in milliseconds. TimeInMillis elapsed_time() const { return elapsed_time_; } // Returns the i-th test part result among all the results. i can range // from 0 to test_property_count() - 1. If i is not in that range, aborts // the program. const TestPartResult& GetTestPartResult(int i) const; // Returns the i-th test property. i can range from 0 to // test_property_count() - 1. If i is not in that range, aborts the // program. const TestProperty& GetTestProperty(int i) const; private: friend class TestInfo; friend class TestCase; friend class UnitTest; friend class internal::DefaultGlobalTestPartResultReporter; friend class internal::ExecDeathTest; friend class internal::TestResultAccessor; friend class internal::UnitTestImpl; friend class internal::WindowsDeathTest; // Gets the vector of TestPartResults. const std::vector& test_part_results() const { return test_part_results_; } // Gets the vector of TestProperties. const std::vector& test_properties() const { return test_properties_; } // Sets the elapsed time. void set_elapsed_time(TimeInMillis elapsed) { elapsed_time_ = elapsed; } // Adds a test property to the list. The property is validated and may add // a non-fatal failure if invalid (e.g., if it conflicts with reserved // key names). If a property is already recorded for the same key, the // value will be updated, rather than storing multiple values for the same // key. xml_element specifies the element for which the property is being // recorded and is used for validation. void RecordProperty(const std::string& xml_element, const TestProperty& test_property); // Adds a failure if the key is a reserved attribute of Google Test // testcase tags. Returns true if the property is valid. // TODO(russr): Validate attribute names are legal and human readable. static bool ValidateTestProperty(const std::string& xml_element, const TestProperty& test_property); // Adds a test part result to the list. void AddTestPartResult(const TestPartResult& test_part_result); // Returns the death test count. int death_test_count() const { return death_test_count_; } // Increments the death test count, returning the new count. int increment_death_test_count() { return ++death_test_count_; } // Clears the test part results. void ClearTestPartResults(); // Clears the object. void Clear(); // Protects mutable state of the property vector and of owned // properties, whose values may be updated. 
internal::Mutex test_properites_mutex_; // The vector of TestPartResults std::vector test_part_results_; // The vector of TestProperties std::vector test_properties_; // Running count of death tests. int death_test_count_; // The elapsed time, in milliseconds. TimeInMillis elapsed_time_; // We disallow copying TestResult. GTEST_DISALLOW_COPY_AND_ASSIGN_(TestResult); }; // class TestResult // A TestInfo object stores the following information about a test: // // Test case name // Test name // Whether the test should be run // A function pointer that creates the test object when invoked // Test result // // The constructor of TestInfo registers itself with the UnitTest // singleton such that the RUN_ALL_TESTS() macro knows which tests to // run. class GTEST_API_ TestInfo { public: // Destructs a TestInfo object. This function is not virtual, so // don't inherit from TestInfo. ~TestInfo(); // Returns the test case name. const char* test_case_name() const { return test_case_name_.c_str(); } // Returns the test name. const char* name() const { return name_.c_str(); } // Returns the name of the parameter type, or NULL if this is not a typed // or a type-parameterized test. const char* type_param() const { if (type_param_.get() != NULL) return type_param_->c_str(); return NULL; } // Returns the text representation of the value parameter, or NULL if this // is not a value-parameterized test. const char* value_param() const { if (value_param_.get() != NULL) return value_param_->c_str(); return NULL; } // Returns true if this test should run, that is if the test is not // disabled (or it is disabled but the also_run_disabled_tests flag has // been specified) and its full name matches the user-specified filter. // // Google Test allows the user to filter the tests by their full names. // The full name of a test Bar in test case Foo is defined as // "Foo.Bar". Only the tests that match the filter will run. // // A filter is a colon-separated list of glob (not regex) patterns, // optionally followed by a '-' and a colon-separated list of // negative patterns (tests to exclude). A test is run if it // matches one of the positive patterns and does not match any of // the negative patterns. // // For example, *A*:Foo.* is a filter that matches any string that // contains the character 'A' or starts with "Foo.". bool should_run() const { return should_run_; } // Returns true iff this test will appear in the XML report. bool is_reportable() const { // For now, the XML report includes all tests matching the filter. // In the future, we may trim tests that are excluded because of // sharding. return matches_filter_; } // Returns the result of the test. const TestResult* result() const { return &result_; } private: #if GTEST_HAS_DEATH_TEST friend class internal::DefaultDeathTestFactory; #endif // GTEST_HAS_DEATH_TEST friend class Test; friend class TestCase; friend class internal::UnitTestImpl; friend class internal::StreamingListenerTest; friend TestInfo* internal::MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, internal::TypeId fixture_class_id, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc, internal::TestFactoryBase* factory); // Constructs a TestInfo object. The newly constructed instance assumes // ownership of the factory object. 
TestInfo(const std::string& test_case_name, const std::string& name, const char* a_type_param, // NULL if not a type-parameterized test const char* a_value_param, // NULL if not a value-parameterized test internal::TypeId fixture_class_id, internal::TestFactoryBase* factory); // Increments the number of death tests encountered in this test so // far. int increment_death_test_count() { return result_.increment_death_test_count(); } // Creates the test object, runs it, records its result, and then // deletes it. void Run(); static void ClearTestResult(TestInfo* test_info) { test_info->result_.Clear(); } // These fields are immutable properties of the test. const std::string test_case_name_; // Test case name const std::string name_; // Test name // Name of the parameter type, or NULL if this is not a typed or a // type-parameterized test. const internal::scoped_ptr type_param_; // Text representation of the value parameter, or NULL if this is not a // value-parameterized test. const internal::scoped_ptr value_param_; const internal::TypeId fixture_class_id_; // ID of the test fixture class bool should_run_; // True iff this test should run bool is_disabled_; // True iff this test is disabled bool matches_filter_; // True if this test matches the // user-specified filter. internal::TestFactoryBase* const factory_; // The factory that creates // the test object // This field is mutable and needs to be reset before running the // test for the second time. TestResult result_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestInfo); }; // A test case, which consists of a vector of TestInfos. // // TestCase is not copyable. class GTEST_API_ TestCase { public: // Creates a TestCase with the given name. // // TestCase does NOT have a default constructor. Always use this // constructor to create a TestCase object. // // Arguments: // // name: name of the test case // a_type_param: the name of the test's type parameter, or NULL if // this is not a type-parameterized test. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase(const char* name, const char* a_type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc); // Destructor of TestCase. virtual ~TestCase(); // Gets the name of the TestCase. const char* name() const { return name_.c_str(); } // Returns the name of the parameter type, or NULL if this is not a // type-parameterized test case. const char* type_param() const { if (type_param_.get() != NULL) return type_param_->c_str(); return NULL; } // Returns true if any test in this test case should run. bool should_run() const { return should_run_; } // Gets the number of successful tests in this test case. int successful_test_count() const; // Gets the number of failed tests in this test case. int failed_test_count() const; // Gets the number of disabled tests that will be reported in the XML report. int reportable_disabled_test_count() const; // Gets the number of disabled tests in this test case. int disabled_test_count() const; // Gets the number of tests to be printed in the XML report. int reportable_test_count() const; // Get the number of tests in this test case that should run. int test_to_run_count() const; // Gets the number of all tests in this test case. int total_test_count() const; // Returns true iff the test case passed. bool Passed() const { return !Failed(); } // Returns true iff the test case failed. 
bool Failed() const { return failed_test_count() > 0; } // Returns the elapsed time, in milliseconds. TimeInMillis elapsed_time() const { return elapsed_time_; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. const TestInfo* GetTestInfo(int i) const; // Returns the TestResult that holds test properties recorded during // execution of SetUpTestCase and TearDownTestCase. const TestResult& ad_hoc_test_result() const { return ad_hoc_test_result_; } private: friend class Test; friend class internal::UnitTestImpl; // Gets the (mutable) vector of TestInfos in this TestCase. std::vector& test_info_list() { return test_info_list_; } // Gets the (immutable) vector of TestInfos in this TestCase. const std::vector& test_info_list() const { return test_info_list_; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. TestInfo* GetMutableTestInfo(int i); // Sets the should_run member. void set_should_run(bool should) { should_run_ = should; } // Adds a TestInfo to this test case. Will delete the TestInfo upon // destruction of the TestCase object. void AddTestInfo(TestInfo * test_info); // Clears the results of all tests in this test case. void ClearResult(); // Clears the results of all tests in the given test case. static void ClearTestCaseResult(TestCase* test_case) { test_case->ClearResult(); } // Runs every test in this TestCase. void Run(); // Runs SetUpTestCase() for this TestCase. This wrapper is needed // for catching exceptions thrown from SetUpTestCase(). void RunSetUpTestCase() { (*set_up_tc_)(); } // Runs TearDownTestCase() for this TestCase. This wrapper is // needed for catching exceptions thrown from TearDownTestCase(). void RunTearDownTestCase() { (*tear_down_tc_)(); } // Returns true iff test passed. static bool TestPassed(const TestInfo* test_info) { return test_info->should_run() && test_info->result()->Passed(); } // Returns true iff test failed. static bool TestFailed(const TestInfo* test_info) { return test_info->should_run() && test_info->result()->Failed(); } // Returns true iff the test is disabled and will be reported in the XML // report. static bool TestReportableDisabled(const TestInfo* test_info) { return test_info->is_reportable() && test_info->is_disabled_; } // Returns true iff test is disabled. static bool TestDisabled(const TestInfo* test_info) { return test_info->is_disabled_; } // Returns true iff this test will appear in the XML report. static bool TestReportable(const TestInfo* test_info) { return test_info->is_reportable(); } // Returns true if the given test should run. static bool ShouldRunTest(const TestInfo* test_info) { return test_info->should_run(); } // Shuffles the tests in this test case. void ShuffleTests(internal::Random* random); // Restores the test order to before the first shuffle. void UnshuffleTests(); // Name of the test case. std::string name_; // Name of the parameter type, or NULL if this is not a typed or a // type-parameterized test. const internal::scoped_ptr type_param_; // The vector of TestInfos in their original order. It owns the // elements in the vector. std::vector test_info_list_; // Provides a level of indirection for the test list to allow easy // shuffling and restoring the test order. The i-th element in this // vector is the index of the i-th test in the shuffled test list. std::vector test_indices_; // Pointer to the function that sets up the test case. 
Test::SetUpTestCaseFunc set_up_tc_; // Pointer to the function that tears down the test case. Test::TearDownTestCaseFunc tear_down_tc_; // True iff any test in this test case should run. bool should_run_; // Elapsed time, in milliseconds. TimeInMillis elapsed_time_; // Holds test properties recorded during execution of SetUpTestCase and // TearDownTestCase. TestResult ad_hoc_test_result_; // We disallow copying TestCases. GTEST_DISALLOW_COPY_AND_ASSIGN_(TestCase); }; // An Environment object is capable of setting up and tearing down an // environment. The user should subclass this to define his own // environment(s). // // An Environment object does the set-up and tear-down in virtual // methods SetUp() and TearDown() instead of the constructor and the // destructor, as: // // 1. You cannot safely throw from a destructor. This is a problem // as in some cases Google Test is used where exceptions are enabled, and // we may want to implement ASSERT_* using exceptions where they are // available. // 2. You cannot use ASSERT_* directly in a constructor or // destructor. class Environment { public: // The d'tor is virtual as we need to subclass Environment. virtual ~Environment() {} // Override this to define how to set up the environment. virtual void SetUp() {} // Override this to define how to tear down the environment. virtual void TearDown() {} private: // If you see an error about overriding the following function or // about it being private, you have mis-spelled SetUp() as Setup(). struct Setup_should_be_spelled_SetUp {}; virtual Setup_should_be_spelled_SetUp* Setup() { return NULL; } }; // The interface for tracing execution of tests. The methods are organized in // the order the corresponding events are fired. class TestEventListener { public: virtual ~TestEventListener() {} // Fired before any test activity starts. virtual void OnTestProgramStart(const UnitTest& unit_test) = 0; // Fired before each iteration of tests starts. There may be more than // one iteration if GTEST_FLAG(repeat) is set. iteration is the iteration // index, starting from 0. virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration) = 0; // Fired before environment set-up for each iteration of tests starts. virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test) = 0; // Fired after environment set-up for each iteration of tests ends. virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test) = 0; // Fired before the test case starts. virtual void OnTestCaseStart(const TestCase& test_case) = 0; // Fired before the test starts. virtual void OnTestStart(const TestInfo& test_info) = 0; // Fired after a failed assertion or a SUCCEED() invocation. virtual void OnTestPartResult(const TestPartResult& test_part_result) = 0; // Fired after the test ends. virtual void OnTestEnd(const TestInfo& test_info) = 0; // Fired after the test case ends. virtual void OnTestCaseEnd(const TestCase& test_case) = 0; // Fired before environment tear-down for each iteration of tests starts. virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test) = 0; // Fired after environment tear-down for each iteration of tests ends. virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test) = 0; // Fired after each iteration of tests finishes. virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration) = 0; // Fired after all test activities have ended. 
virtual void OnTestProgramEnd(const UnitTest& unit_test) = 0; }; // The convenience class for users who need to override just one or two // methods and are not concerned that a possible change to a signature of // the methods they override will not be caught during the build. For // comments about each method please see the definition of TestEventListener // above. class EmptyTestEventListener : public TestEventListener { public: virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationStart(const UnitTest& /*unit_test*/, int /*iteration*/) {} virtual void OnEnvironmentsSetUpStart(const UnitTest& /*unit_test*/) {} virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestCaseStart(const TestCase& /*test_case*/) {} virtual void OnTestStart(const TestInfo& /*test_info*/) {} virtual void OnTestPartResult(const TestPartResult& /*test_part_result*/) {} virtual void OnTestEnd(const TestInfo& /*test_info*/) {} virtual void OnTestCaseEnd(const TestCase& /*test_case*/) {} virtual void OnEnvironmentsTearDownStart(const UnitTest& /*unit_test*/) {} virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationEnd(const UnitTest& /*unit_test*/, int /*iteration*/) {} virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {} }; // TestEventListeners lets users add listeners to track events in Google Test. class GTEST_API_ TestEventListeners { public: TestEventListeners(); ~TestEventListeners(); // Appends an event listener to the end of the list. Google Test assumes // the ownership of the listener (i.e. it will delete the listener when // the test program finishes). void Append(TestEventListener* listener); // Removes the given event listener from the list and returns it. It then // becomes the caller's responsibility to delete the listener. Returns // NULL if the listener is not found in the list. TestEventListener* Release(TestEventListener* listener); // Returns the standard listener responsible for the default console // output. Can be removed from the listeners list to shut down default // console output. Note that removing this object from the listener list // with Release transfers its ownership to the caller and makes this // function return NULL the next time. TestEventListener* default_result_printer() const { return default_result_printer_; } // Returns the standard listener responsible for the default XML output // controlled by the --gtest_output=xml flag. Can be removed from the // listeners list by users who want to shut down the default XML output // controlled by this flag and substitute it with custom one. Note that // removing this object from the listener list with Release transfers its // ownership to the caller and makes this function return NULL the next // time. TestEventListener* default_xml_generator() const { return default_xml_generator_; } private: friend class TestCase; friend class TestInfo; friend class internal::DefaultGlobalTestPartResultReporter; friend class internal::NoExecDeathTest; friend class internal::TestEventListenersAccessor; friend class internal::UnitTestImpl; // Returns repeater that broadcasts the TestEventListener events to all // subscribers. TestEventListener* repeater(); // Sets the default_result_printer attribute to the provided listener. // The listener is also added to the listener list and previous // default_result_printer is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. 
Does // nothing if the previous and the current listener objects are the same. void SetDefaultResultPrinter(TestEventListener* listener); // Sets the default_xml_generator attribute to the provided listener. The // listener is also added to the listener list and previous // default_xml_generator is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void SetDefaultXmlGenerator(TestEventListener* listener); // Controls whether events will be forwarded by the repeater to the // listeners in the list. bool EventForwardingEnabled() const; void SuppressEventForwarding(); // The actual list of listeners. internal::TestEventRepeater* repeater_; // Listener responsible for the standard result output. TestEventListener* default_result_printer_; // Listener responsible for the creation of the XML output file. TestEventListener* default_xml_generator_; // We disallow copying TestEventListeners. GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventListeners); }; // A UnitTest consists of a vector of TestCases. // // This is a singleton class. The only instance of UnitTest is // created when UnitTest::GetInstance() is first called. This // instance is never deleted. // // UnitTest is not copyable. // // This class is thread-safe as long as the methods are called // according to their specification. class GTEST_API_ UnitTest { public: // Gets the singleton UnitTest object. The first time this method // is called, a UnitTest object is constructed and returned. // Consecutive calls will return the same object. static UnitTest* GetInstance(); // Runs all tests in this UnitTest object and prints the result. // Returns 0 if successful, or 1 otherwise. // // This method can only be called from the main thread. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. int Run() GTEST_MUST_USE_RESULT_; // Returns the working directory when the first TEST() or TEST_F() // was executed. The UnitTest object owns the string. const char* original_working_dir() const; // Returns the TestCase object for the test that's currently running, // or NULL if no test is running. const TestCase* current_test_case() const GTEST_LOCK_EXCLUDED_(mutex_); // Returns the TestInfo object for the test that's currently running, // or NULL if no test is running. const TestInfo* current_test_info() const GTEST_LOCK_EXCLUDED_(mutex_); // Returns the random seed used at the start of the current test run. int random_seed() const; #if GTEST_HAS_PARAM_TEST // Returns the ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. internal::ParameterizedTestCaseRegistry& parameterized_test_registry() GTEST_LOCK_EXCLUDED_(mutex_); #endif // GTEST_HAS_PARAM_TEST // Gets the number of successful test cases. int successful_test_case_count() const; // Gets the number of failed test cases. int failed_test_case_count() const; // Gets the number of all test cases. int total_test_case_count() const; // Gets the number of all test cases that contain at least one test // that should run. int test_case_to_run_count() const; // Gets the number of successful tests. int successful_test_count() const; // Gets the number of failed tests. int failed_test_count() const; // Gets the number of disabled tests that will be reported in the XML report. 
int reportable_disabled_test_count() const; // Gets the number of disabled tests. int disabled_test_count() const; // Gets the number of tests to be printed in the XML report. int reportable_test_count() const; // Gets the number of all tests. int total_test_count() const; // Gets the number of tests that should run. int test_to_run_count() const; // Gets the time of the test program start, in ms from the start of the // UNIX epoch. TimeInMillis start_timestamp() const; // Gets the elapsed time, in milliseconds. TimeInMillis elapsed_time() const; // Returns true iff the unit test passed (i.e. all test cases passed). bool Passed() const; // Returns true iff the unit test failed (i.e. some test case failed // or something outside of all tests failed). bool Failed() const; // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. const TestCase* GetTestCase(int i) const; // Returns the TestResult containing information on test failures and // properties logged outside of individual test cases. const TestResult& ad_hoc_test_result() const; // Returns the list of event listeners that can be used to track events // inside Google Test. TestEventListeners& listeners(); private: // Registers and returns a global test environment. When a test // program is run, all global test environments will be set-up in // the order they were registered. After all tests in the program // have finished, all global test environments will be torn-down in // the *reverse* order they were registered. // // The UnitTest object takes ownership of the given environment. // // This method can only be called from the main thread. Environment* AddEnvironment(Environment* env); // Adds a TestPartResult to the current TestResult object. All // Google Test assertion macros (e.g. ASSERT_TRUE, EXPECT_EQ, etc) // eventually call this to report their results. The user code // should use the assertion macros instead of calling this directly. void AddTestPartResult(TestPartResult::Type result_type, const char* file_name, int line_number, const std::string& message, const std::string& os_stack_trace) GTEST_LOCK_EXCLUDED_(mutex_); // Adds a TestProperty to the current TestResult object when invoked from // inside a test, to current TestCase's ad_hoc_test_result_ when invoked // from SetUpTestCase or TearDownTestCase, or to the global property set // when invoked elsewhere. If the result already contains a property with // the same key, the value will be updated. void RecordProperty(const std::string& key, const std::string& value); // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. TestCase* GetMutableTestCase(int i); // Accessors for the implementation object. internal::UnitTestImpl* impl() { return impl_; } const internal::UnitTestImpl* impl() const { return impl_; } // These classes and funcions are friends as they need to access private // members of UnitTest. friend class Test; friend class internal::AssertHelper; friend class internal::ScopedTrace; friend class internal::StreamingListenerTest; friend class internal::UnitTestRecordPropertyTestHelper; friend Environment* AddGlobalTestEnvironment(Environment* env); friend internal::UnitTestImpl* internal::GetUnitTestImpl(); friend void internal::ReportFailureInUnknownLocation( TestPartResult::Type result_type, const std::string& message); // Creates an empty UnitTest. 
UnitTest(); // D'tor virtual ~UnitTest(); // Pushes a trace defined by SCOPED_TRACE() on to the per-thread // Google Test trace stack. void PushGTestTrace(const internal::TraceInfo& trace) GTEST_LOCK_EXCLUDED_(mutex_); // Pops a trace from the per-thread Google Test trace stack. void PopGTestTrace() GTEST_LOCK_EXCLUDED_(mutex_); // Protects mutable state in *impl_. This is mutable as some const // methods need to lock it too. mutable internal::Mutex mutex_; // Opaque implementation object. This field is never changed once // the object is constructed. We don't mark it as const here, as // doing so will cause a warning in the constructor of UnitTest. // Mutable state in *impl_ is protected by mutex_. internal::UnitTestImpl* impl_; // We disallow copying UnitTest. GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTest); }; // A convenient wrapper for adding an environment for the test // program. // // You should call this before RUN_ALL_TESTS() is called, probably in // main(). If you use gtest_main, you need to call this before main() // starts for it to take effect. For example, you can define a global // variable like this: // // testing::Environment* const foo_env = // testing::AddGlobalTestEnvironment(new FooEnvironment); // // However, we strongly recommend you to write your own main() and // call AddGlobalTestEnvironment() there, as relying on initialization // of global variables makes the code harder to read and may cause // problems when you register multiple environments from different // translation units and the environments have dependencies among them // (remember that the compiler doesn't guarantee the order in which // global variables from different translation units are initialized). inline Environment* AddGlobalTestEnvironment(Environment* env) { return UnitTest::GetInstance()->AddEnvironment(env); } // Initializes Google Test. This must be called before calling // RUN_ALL_TESTS(). In particular, it parses a command line for the // flags that Google Test recognizes. Whenever a Google Test flag is // seen, it is removed from argv, and *argc is decremented. // // No value is returned. Instead, the Google Test flag variables are // updated. // // Calling the function for the second time has no user-visible effect. GTEST_API_ void InitGoogleTest(int* argc, char** argv); // This overloaded version can be used in Windows programs compiled in // UNICODE mode. GTEST_API_ void InitGoogleTest(int* argc, wchar_t** argv); namespace internal { // FormatForComparison::Format(value) formats a // value of type ToPrint that is an operand of a comparison assertion // (e.g. ASSERT_EQ). OtherOperand is the type of the other operand in // the comparison, and is used to help determine the best way to // format the value. In particular, when the value is a C string // (char pointer) and the other operand is an STL string object, we // want to format the C string as a string, since we know it is // compared by value with the string object. If the value is a char // pointer but the other operand is not an STL string object, we don't // know whether the pointer is supposed to point to a NUL-terminated // string, and thus want to print it as a pointer to be safe. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // The default case. template class FormatForComparison { public: static ::std::string Format(const ToPrint& value) { return ::testing::PrintToString(value); } }; // Array. 
template class FormatForComparison { public: static ::std::string Format(const ToPrint* value) { return FormatForComparison::Format(value); } }; // By default, print C string as pointers to be safe, as we don't know // whether they actually point to a NUL-terminated string. #define GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(CharType) \ template \ class FormatForComparison { \ public: \ static ::std::string Format(CharType* value) { \ return ::testing::PrintToString(static_cast(value)); \ } \ } GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(char); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const char); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(wchar_t); GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_(const wchar_t); #undef GTEST_IMPL_FORMAT_C_STRING_AS_POINTER_ // If a C string is compared with an STL string object, we know it's meant // to point to a NUL-terminated string, and thus can print it as a string. #define GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(CharType, OtherStringType) \ template <> \ class FormatForComparison { \ public: \ static ::std::string Format(CharType* value) { \ return ::testing::PrintToString(value); \ } \ } GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::std::string); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::std::string); #if GTEST_HAS_GLOBAL_STRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(char, ::string); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const char, ::string); #endif #if GTEST_HAS_GLOBAL_WSTRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::wstring); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const wchar_t, ::wstring); #endif #if GTEST_HAS_STD_WSTRING GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(wchar_t, ::std::wstring); GTEST_IMPL_FORMAT_C_STRING_AS_STRING_(const wchar_t, ::std::wstring); #endif #undef GTEST_IMPL_FORMAT_C_STRING_AS_STRING_ // Formats a comparison assertion (e.g. ASSERT_EQ, EXPECT_LT, and etc) // operand to be used in a failure message. The type (but not value) // of the other operand may affect the format. This allows us to // print a char* as a raw pointer when it is compared against another // char* or void*, and print it as a C string when it is compared // against an std::string object, for example. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. template std::string FormatForComparisonFailureMessage( const T1& value, const T2& /* other_operand */) { return FormatForComparison::Format(value); } // The helper function for {ASSERT|EXPECT}_EQ. template AssertionResult CmpHelperEQ(const char* expected_expression, const char* actual_expression, const T1& expected, const T2& actual) { #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4389) // Temporarily disables warning on // signed/unsigned mismatch. #endif if (expected == actual) { return AssertionSuccess(); } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif return EqFailure(expected_expression, actual_expression, FormatForComparisonFailureMessage(expected, actual), FormatForComparisonFailureMessage(actual, expected), false); } // With this overloaded version, we allow anonymous enums to be used // in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous enums // can be implicitly cast to BiggestInt. GTEST_API_ AssertionResult CmpHelperEQ(const char* expected_expression, const char* actual_expression, BiggestInt expected, BiggestInt actual); // The helper class for {ASSERT|EXPECT}_EQ. The template argument // lhs_is_null_literal is true iff the first argument to ASSERT_EQ() // is a null pointer literal. 
// The following default implementation is
// for lhs_is_null_literal being false.
template <bool lhs_is_null_literal>
class EqHelper {
 public:
  // This templatized version is for the general case.
  template <typename T1, typename T2>
  static AssertionResult Compare(const char* expected_expression,
                                 const char* actual_expression,
                                 const T1& expected,
                                 const T2& actual) {
    return CmpHelperEQ(expected_expression, actual_expression, expected,
                       actual);
  }

  // With this overloaded version, we allow anonymous enums to be used
  // in {ASSERT|EXPECT}_EQ when compiled with gcc 4, as anonymous
  // enums can be implicitly cast to BiggestInt.
  //
  // Even though its body looks the same as the above version, we
  // cannot merge the two, as it will make anonymous enums unhappy.
  static AssertionResult Compare(const char* expected_expression,
                                 const char* actual_expression,
                                 BiggestInt expected,
                                 BiggestInt actual) {
    return CmpHelperEQ(expected_expression, actual_expression, expected,
                       actual);
  }
};

// This specialization is used when the first argument to ASSERT_EQ()
// is a null pointer literal, like NULL, false, or 0.
template <>
class EqHelper<true> {
 public:
  // We define two overloaded versions of Compare().  The first
  // version will be picked when the second argument to ASSERT_EQ() is
  // NOT a pointer, e.g. ASSERT_EQ(0, AnIntFunction()) or
  // EXPECT_EQ(false, a_bool).
  template <typename T1, typename T2>
  static AssertionResult Compare(
      const char* expected_expression,
      const char* actual_expression,
      const T1& expected,
      const T2& actual,
      // The following line prevents this overload from being considered if T2
      // is not a pointer type.  We need this because ASSERT_EQ(NULL, my_ptr)
      // expands to Compare("", "", NULL, my_ptr), which requires a conversion
      // to match the Secret* in the other overload, which would otherwise make
      // this template match better.
      typename EnableIf<!is_pointer<T2>::value>::type* = 0) {
    return CmpHelperEQ(expected_expression, actual_expression, expected,
                       actual);
  }

  // This version will be picked when the second argument to ASSERT_EQ() is a
  // pointer, e.g. ASSERT_EQ(NULL, a_pointer).
  template <typename T>
  static AssertionResult Compare(
      const char* expected_expression,
      const char* actual_expression,
      // We used to have a second template parameter instead of Secret*.  That
      // template parameter would deduce to 'long', making this a better match
      // than the first overload even without the first overload's EnableIf.
      // Unfortunately, gcc with -Wconversion-null warns when "passing NULL to
      // non-pointer argument" (even a deduced integral argument), so the old
      // implementation caused warnings in user code.
      Secret* /* expected (NULL) */,
      T* actual) {
    // We already know that 'expected' is a null pointer.
    return CmpHelperEQ(expected_expression, actual_expression,
                       static_cast<T*>(NULL), actual);
  }
};

// A macro for implementing the helper functions needed to implement
// ASSERT_?? and EXPECT_??.  It is here just to avoid copy-and-paste
// of similar code.
//
// For each templatized helper function, we also define an overloaded
// version for BiggestInt in order to reduce code bloat and allow
// anonymous enums to be used with {ASSERT|EXPECT}_?? when compiled
// with gcc 4.
//
// INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM.
#define GTEST_IMPL_CMP_HELPER_(op_name, op)\ template \ AssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \ const T1& val1, const T2& val2) {\ if (val1 op val2) {\ return AssertionSuccess();\ } else {\ return AssertionFailure() \ << "Expected: (" << expr1 << ") " #op " (" << expr2\ << "), actual: " << FormatForComparisonFailureMessage(val1, val2)\ << " vs " << FormatForComparisonFailureMessage(val2, val1);\ }\ }\ GTEST_API_ AssertionResult CmpHelper##op_name(\ const char* expr1, const char* expr2, BiggestInt val1, BiggestInt val2) // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. // Implements the helper function for {ASSERT|EXPECT}_NE GTEST_IMPL_CMP_HELPER_(NE, !=); // Implements the helper function for {ASSERT|EXPECT}_LE GTEST_IMPL_CMP_HELPER_(LE, <=); // Implements the helper function for {ASSERT|EXPECT}_LT GTEST_IMPL_CMP_HELPER_(LT, <); // Implements the helper function for {ASSERT|EXPECT}_GE GTEST_IMPL_CMP_HELPER_(GE, >=); // Implements the helper function for {ASSERT|EXPECT}_GT GTEST_IMPL_CMP_HELPER_(GT, >); #undef GTEST_IMPL_CMP_HELPER_ // The helper function for {ASSERT|EXPECT}_STREQ. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual); // The helper function for {ASSERT|EXPECT}_STRCASEEQ. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRCASEEQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual); // The helper function for {ASSERT|EXPECT}_STRNE. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2); // The helper function for {ASSERT|EXPECT}_STRCASENE. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRCASENE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2); // Helper function for *_STREQ on wide strings. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const wchar_t* expected, const wchar_t* actual); // Helper function for *_STRNE on wide strings. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const wchar_t* s1, const wchar_t* s2); } // namespace internal // IsSubstring() and IsNotSubstring() are intended to be used as the // first argument to {EXPECT,ASSERT}_PRED_FORMAT2(), not by // themselves. They check whether needle is a substring of haystack // (NULL is considered a substring of itself only), and return an // appropriate error message when they fail. // // The {needle,haystack}_expr arguments are the stringified // expressions that generated the two real arguments. 
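// A minimal usage sketch of the substring predicates described above
// (illustrative only and excluded from compilation; the test and variable
// names below are hypothetical): IsSubstring and IsNotSubstring are passed as
// the first argument of the {EXPECT,ASSERT}_PRED_FORMAT2 macros.
#if 0
#include "gtest/gtest.h"

TEST(SubstringExample, NeedleAndHaystack) {
  const char* haystack = "the quick brown fox";
  EXPECT_PRED_FORMAT2(::testing::IsSubstring, "quick", haystack);
  EXPECT_PRED_FORMAT2(::testing::IsNotSubstring, "zebra", haystack);
}
#endif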
GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack); GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack); GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack); #if GTEST_HAS_STD_WSTRING GTEST_API_ AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack); GTEST_API_ AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack); #endif // GTEST_HAS_STD_WSTRING namespace internal { // Helper template function for comparing floating-points. // // Template parameter: // // RawType: the raw floating-point type (either float or double) // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. template AssertionResult CmpHelperFloatingPointEQ(const char* expected_expression, const char* actual_expression, RawType expected, RawType actual) { const FloatingPoint lhs(expected), rhs(actual); if (lhs.AlmostEquals(rhs)) { return AssertionSuccess(); } ::std::stringstream expected_ss; expected_ss << std::setprecision(std::numeric_limits::digits10 + 2) << expected; ::std::stringstream actual_ss; actual_ss << std::setprecision(std::numeric_limits::digits10 + 2) << actual; return EqFailure(expected_expression, actual_expression, StringStreamToString(&expected_ss), StringStreamToString(&actual_ss), false); } // Helper function for implementing ASSERT_NEAR. // // INTERNAL IMPLEMENTATION - DO NOT USE IN A USER PROGRAM. GTEST_API_ AssertionResult DoubleNearPredFormat(const char* expr1, const char* expr2, const char* abs_error_expr, double val1, double val2, double abs_error); // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // A class that enables one to stream messages to assertion macros class GTEST_API_ AssertHelper { public: // Constructor. AssertHelper(TestPartResult::Type type, const char* file, int line, const char* message); ~AssertHelper(); // Message assignment is a semantic trick to enable assertion // streaming; see the GTEST_MESSAGE_ macro below. void operator=(const Message& message) const; private: // We put our data in a struct so that the size of the AssertHelper class can // be as small as possible. This is important because gcc is incapable of // re-using stack space even for temporary variables, so every EXPECT_EQ // reserves stack space for another AssertHelper. 
struct AssertHelperData { AssertHelperData(TestPartResult::Type t, const char* srcfile, int line_num, const char* msg) : type(t), file(srcfile), line(line_num), message(msg) { } TestPartResult::Type const type; const char* const file; int const line; std::string const message; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelperData); }; AssertHelperData* const data_; GTEST_DISALLOW_COPY_AND_ASSIGN_(AssertHelper); }; } // namespace internal #if GTEST_HAS_PARAM_TEST // The pure interface class that all value-parameterized tests inherit from. // A value-parameterized class must inherit from both ::testing::Test and // ::testing::WithParamInterface. In most cases that just means inheriting // from ::testing::TestWithParam, but more complicated test hierarchies // may need to inherit from Test and WithParamInterface at different levels. // // This interface has support for accessing the test parameter value via // the GetParam() method. // // Use it with one of the parameter generator defining functions, like Range(), // Values(), ValuesIn(), Bool(), and Combine(). // // class FooTest : public ::testing::TestWithParam { // protected: // FooTest() { // // Can use GetParam() here. // } // virtual ~FooTest() { // // Can use GetParam() here. // } // virtual void SetUp() { // // Can use GetParam() here. // } // virtual void TearDown { // // Can use GetParam() here. // } // }; // TEST_P(FooTest, DoesBar) { // // Can use GetParam() method here. // Foo foo; // ASSERT_TRUE(foo.DoesBar(GetParam())); // } // INSTANTIATE_TEST_CASE_P(OneToTenRange, FooTest, ::testing::Range(1, 10)); template class WithParamInterface { public: typedef T ParamType; virtual ~WithParamInterface() {} // The current parameter value. Is also available in the test fixture's // constructor. This member function is non-static, even though it only // references static data, to reduce the opportunity for incorrect uses // like writing 'WithParamInterface::GetParam()' for a test that // uses a fixture whose parameter type is int. const ParamType& GetParam() const { GTEST_CHECK_(parameter_ != NULL) << "GetParam() can only be called inside a value-parameterized test " << "-- did you intend to write TEST_P instead of TEST_F?"; return *parameter_; } private: // Sets parameter value. The caller is responsible for making sure the value // remains alive and unchanged throughout the current test. static void SetParam(const ParamType* parameter) { parameter_ = parameter; } // Static value used for accessing parameter during a test lifetime. static const ParamType* parameter_; // TestClass must be a subclass of WithParamInterface and Test. template friend class internal::ParameterizedTestFactory; }; template const T* WithParamInterface::parameter_ = NULL; // Most value-parameterized classes can ignore the existence of // WithParamInterface, and can just inherit from ::testing::TestWithParam. template class TestWithParam : public Test, public WithParamInterface { }; #endif // GTEST_HAS_PARAM_TEST // Macros for indicating success/failure in test code. // ADD_FAILURE unconditionally adds a failure to the current test. // SUCCEED generates a success - it doesn't automatically make the // current test successful, as a test is only successful when it has // no failure. // // EXPECT_* verifies that a certain condition is satisfied. If not, // it behaves like ADD_FAILURE. In particular: // // EXPECT_TRUE verifies that a Boolean condition is true. // EXPECT_FALSE verifies that a Boolean condition is false. 
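// A minimal sketch of the success/failure macros described above
// (illustrative only and excluded from compilation; the test name is
// hypothetical).
#if 0
#include "gtest/gtest.h"

TEST(BooleanAssertionExample, UsesExpectAndAddFailure) {
  EXPECT_TRUE(1 + 1 == 2) << "basic arithmetic should hold";  // passes
  EXPECT_FALSE(2 > 3);                                        // passes
  if (2 > 3) {
    ADD_FAILURE() << "would record a nonfatal failure if reached";
  }
  SUCCEED();  // records a success; does not by itself make the test pass
}
#endif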
// // FAIL and ASSERT_* are similar to ADD_FAILURE and EXPECT_*, except // that they will also abort the current function on failure. People // usually want the fail-fast behavior of FAIL and ASSERT_*, but those // writing data-driven tests often find themselves using ADD_FAILURE // and EXPECT_* more. // Generates a nonfatal failure with a generic message. #define ADD_FAILURE() GTEST_NONFATAL_FAILURE_("Failed") // Generates a nonfatal failure at the given source file location with // a generic message. #define ADD_FAILURE_AT(file, line) \ GTEST_MESSAGE_AT_(file, line, "Failed", \ ::testing::TestPartResult::kNonFatalFailure) // Generates a fatal failure with a generic message. #define GTEST_FAIL() GTEST_FATAL_FAILURE_("Failed") // Define this macro to 1 to omit the definition of FAIL(), which is a // generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_FAIL # define FAIL() GTEST_FAIL() #endif // Generates a success with a generic message. #define GTEST_SUCCEED() GTEST_SUCCESS_("Succeeded") // Define this macro to 1 to omit the definition of SUCCEED(), which // is a generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_SUCCEED # define SUCCEED() GTEST_SUCCEED() #endif // Macros for testing exceptions. // // * {ASSERT|EXPECT}_THROW(statement, expected_exception): // Tests that the statement throws the expected exception. // * {ASSERT|EXPECT}_NO_THROW(statement): // Tests that the statement doesn't throw any exception. // * {ASSERT|EXPECT}_ANY_THROW(statement): // Tests that the statement throws an exception. #define EXPECT_THROW(statement, expected_exception) \ GTEST_TEST_THROW_(statement, expected_exception, GTEST_NONFATAL_FAILURE_) #define EXPECT_NO_THROW(statement) \ GTEST_TEST_NO_THROW_(statement, GTEST_NONFATAL_FAILURE_) #define EXPECT_ANY_THROW(statement) \ GTEST_TEST_ANY_THROW_(statement, GTEST_NONFATAL_FAILURE_) #define ASSERT_THROW(statement, expected_exception) \ GTEST_TEST_THROW_(statement, expected_exception, GTEST_FATAL_FAILURE_) #define ASSERT_NO_THROW(statement) \ GTEST_TEST_NO_THROW_(statement, GTEST_FATAL_FAILURE_) #define ASSERT_ANY_THROW(statement) \ GTEST_TEST_ANY_THROW_(statement, GTEST_FATAL_FAILURE_) // Boolean assertions. Condition can be either a Boolean expression or an // AssertionResult. For more information on how to use AssertionResult with // these macros see comments on that class. #define EXPECT_TRUE(condition) \ GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \ GTEST_NONFATAL_FAILURE_) #define EXPECT_FALSE(condition) \ GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \ GTEST_NONFATAL_FAILURE_) #define ASSERT_TRUE(condition) \ GTEST_TEST_BOOLEAN_(condition, #condition, false, true, \ GTEST_FATAL_FAILURE_) #define ASSERT_FALSE(condition) \ GTEST_TEST_BOOLEAN_(!(condition), #condition, true, false, \ GTEST_FATAL_FAILURE_) // Includes the auto-generated header that implements a family of // generic predicate assertion macros. #include "gtest/gtest_pred_impl.h" // Macros for testing equalities and inequalities. // // * {ASSERT|EXPECT}_EQ(expected, actual): Tests that expected == actual // * {ASSERT|EXPECT}_NE(v1, v2): Tests that v1 != v2 // * {ASSERT|EXPECT}_LT(v1, v2): Tests that v1 < v2 // * {ASSERT|EXPECT}_LE(v1, v2): Tests that v1 <= v2 // * {ASSERT|EXPECT}_GT(v1, v2): Tests that v1 > v2 // * {ASSERT|EXPECT}_GE(v1, v2): Tests that v1 >= v2 // // When they are not, Google Test prints both the tested expressions and // their actual values. 
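// A minimal sketch of the exception assertions defined above (illustrative
// only and excluded from compilation; ThrowingFunction is a hypothetical
// helper).
#if 0
#include <stdexcept>
#include "gtest/gtest.h"

void ThrowingFunction() { throw std::runtime_error("boom"); }

TEST(ExceptionAssertionExample, ChecksThrowBehaviour) {
  EXPECT_THROW(ThrowingFunction(), std::runtime_error);  // expected type
  EXPECT_ANY_THROW(ThrowingFunction());                  // any exception
  EXPECT_NO_THROW((void)0);                              // must not throw
}
#endif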
The values must be compatible built-in types, // or you will get a compiler error. By "compatible" we mean that the // values can be compared by the respective operator. // // Note: // // 1. It is possible to make a user-defined type work with // {ASSERT|EXPECT}_??(), but that requires overloading the // comparison operators and is thus discouraged by the Google C++ // Usage Guide. Therefore, you are advised to use the // {ASSERT|EXPECT}_TRUE() macro to assert that two objects are // equal. // // 2. The {ASSERT|EXPECT}_??() macros do pointer comparisons on // pointers (in particular, C strings). Therefore, if you use it // with two C strings, you are testing how their locations in memory // are related, not how their content is related. To compare two C // strings by content, use {ASSERT|EXPECT}_STR*(). // // 3. {ASSERT|EXPECT}_EQ(expected, actual) is preferred to // {ASSERT|EXPECT}_TRUE(expected == actual), as the former tells you // what the actual value is when it fails, and similarly for the // other comparisons. // // 4. Do not depend on the order in which {ASSERT|EXPECT}_??() // evaluate their arguments, which is undefined. // // 5. These macros evaluate their arguments exactly once. // // Examples: // // EXPECT_NE(5, Foo()); // EXPECT_EQ(NULL, a_pointer); // ASSERT_LT(i, array_size); // ASSERT_GT(records.size(), 0) << "There is no record left."; #define EXPECT_EQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal:: \ EqHelper::Compare, \ expected, actual) #define EXPECT_NE(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperNE, expected, actual) #define EXPECT_LE(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2) #define EXPECT_LT(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2) #define EXPECT_GE(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2) #define EXPECT_GT(val1, val2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2) #define GTEST_ASSERT_EQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal:: \ EqHelper::Compare, \ expected, actual) #define GTEST_ASSERT_NE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperNE, val1, val2) #define GTEST_ASSERT_LE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLE, val1, val2) #define GTEST_ASSERT_LT(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperLT, val1, val2) #define GTEST_ASSERT_GE(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGE, val1, val2) #define GTEST_ASSERT_GT(val1, val2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperGT, val1, val2) // Define macro GTEST_DONT_DEFINE_ASSERT_XY to 1 to omit the definition of // ASSERT_XY(), which clashes with some users' own code. #if !GTEST_DONT_DEFINE_ASSERT_EQ # define ASSERT_EQ(val1, val2) GTEST_ASSERT_EQ(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_NE # define ASSERT_NE(val1, val2) GTEST_ASSERT_NE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_LE # define ASSERT_LE(val1, val2) GTEST_ASSERT_LE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_LT # define ASSERT_LT(val1, val2) GTEST_ASSERT_LT(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_GE # define ASSERT_GE(val1, val2) GTEST_ASSERT_GE(val1, val2) #endif #if !GTEST_DONT_DEFINE_ASSERT_GT # define ASSERT_GT(val1, val2) GTEST_ASSERT_GT(val1, val2) #endif // C-string Comparisons. All tests treat NULL and any non-NULL string // as different. Two NULLs are equal. 
// // * {ASSERT|EXPECT}_STREQ(s1, s2): Tests that s1 == s2 // * {ASSERT|EXPECT}_STRNE(s1, s2): Tests that s1 != s2 // * {ASSERT|EXPECT}_STRCASEEQ(s1, s2): Tests that s1 == s2, ignoring case // * {ASSERT|EXPECT}_STRCASENE(s1, s2): Tests that s1 != s2, ignoring case // // For wide or narrow string objects, you can use the // {ASSERT|EXPECT}_??() macros. // // Don't depend on the order in which the arguments are evaluated, // which is undefined. // // These macros evaluate their arguments exactly once. #define EXPECT_STREQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual) #define EXPECT_STRNE(s1, s2) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2) #define EXPECT_STRCASEEQ(expected, actual) \ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual) #define EXPECT_STRCASENE(s1, s2)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2) #define ASSERT_STREQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTREQ, expected, actual) #define ASSERT_STRNE(s1, s2) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRNE, s1, s2) #define ASSERT_STRCASEEQ(expected, actual) \ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASEEQ, expected, actual) #define ASSERT_STRCASENE(s1, s2)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperSTRCASENE, s1, s2) // Macros for comparing floating-point numbers. // // * {ASSERT|EXPECT}_FLOAT_EQ(expected, actual): // Tests that two float values are almost equal. // * {ASSERT|EXPECT}_DOUBLE_EQ(expected, actual): // Tests that two double values are almost equal. // * {ASSERT|EXPECT}_NEAR(v1, v2, abs_error): // Tests that v1 and v2 are within the given distance to each other. // // Google Test uses ULP-based comparison to automatically pick a default // error bound that is appropriate for the operands. See the // FloatingPoint template class in gtest-internal.h if you are // interested in the implementation details. #define EXPECT_FLOAT_EQ(expected, actual)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define EXPECT_DOUBLE_EQ(expected, actual)\ EXPECT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define ASSERT_FLOAT_EQ(expected, actual)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define ASSERT_DOUBLE_EQ(expected, actual)\ ASSERT_PRED_FORMAT2(::testing::internal::CmpHelperFloatingPointEQ, \ expected, actual) #define EXPECT_NEAR(val1, val2, abs_error)\ EXPECT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \ val1, val2, abs_error) #define ASSERT_NEAR(val1, val2, abs_error)\ ASSERT_PRED_FORMAT3(::testing::internal::DoubleNearPredFormat, \ val1, val2, abs_error) // These predicate format functions work on floating-point values, and // can be used in {ASSERT|EXPECT}_PRED_FORMAT2*(), e.g. // // EXPECT_PRED_FORMAT2(testing::DoubleLE, Foo(), 5.0); // Asserts that val1 is less than, or almost equal to, val2. Fails // otherwise. In particular, it fails if either val1 or val2 is NaN. GTEST_API_ AssertionResult FloatLE(const char* expr1, const char* expr2, float val1, float val2); GTEST_API_ AssertionResult DoubleLE(const char* expr1, const char* expr2, double val1, double val2); #if GTEST_OS_WINDOWS // Macros that test for HRESULT failure and success, these are only useful // on Windows, and rely on Windows SDK macros and APIs to compile. 
// // * {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED}(expr) // // When expr unexpectedly fails or succeeds, Google Test prints the // expected result and the actual result with both a human-readable // string representation of the error, if available, as well as the // hex result code. # define EXPECT_HRESULT_SUCCEEDED(expr) \ EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr)) # define ASSERT_HRESULT_SUCCEEDED(expr) \ ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTSuccess, (expr)) # define EXPECT_HRESULT_FAILED(expr) \ EXPECT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr)) # define ASSERT_HRESULT_FAILED(expr) \ ASSERT_PRED_FORMAT1(::testing::internal::IsHRESULTFailure, (expr)) #endif // GTEST_OS_WINDOWS // Macros that execute statement and check that it doesn't generate new fatal // failures in the current thread. // // * {ASSERT|EXPECT}_NO_FATAL_FAILURE(statement); // // Examples: // // EXPECT_NO_FATAL_FAILURE(Process()); // ASSERT_NO_FATAL_FAILURE(Process()) << "Process() failed"; // #define ASSERT_NO_FATAL_FAILURE(statement) \ GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_FATAL_FAILURE_) #define EXPECT_NO_FATAL_FAILURE(statement) \ GTEST_TEST_NO_FATAL_FAILURE_(statement, GTEST_NONFATAL_FAILURE_) // Causes a trace (including the source file path, the current line // number, and the given message) to be included in every test failure // message generated by code in the current scope. The effect is // undone when the control leaves the current scope. // // The message argument can be anything streamable to std::ostream. // // In the implementation, we include the current line number as part // of the dummy variable name, thus allowing multiple SCOPED_TRACE()s // to appear in the same block - as long as they are on different // lines. #define SCOPED_TRACE(message) \ ::testing::internal::ScopedTrace GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__)(\ __FILE__, __LINE__, ::testing::Message() << (message)) // Compile-time assertion for type equality. // StaticAssertTypeEq() compiles iff type1 and type2 are // the same type. The value it returns is not interesting. // // Instead of making StaticAssertTypeEq a class template, we make it a // function template that invokes a helper class template. This // prevents a user from misusing StaticAssertTypeEq by // defining objects of that type. // // CAVEAT: // // When used inside a method of a class template, // StaticAssertTypeEq() is effective ONLY IF the method is // instantiated. For example, given: // // template class Foo { // public: // void Bar() { testing::StaticAssertTypeEq(); } // }; // // the code: // // void Test1() { Foo foo; } // // will NOT generate a compiler error, as Foo::Bar() is never // actually instantiated. Instead, you need: // // void Test2() { Foo foo; foo.Bar(); } // // to cause a compiler error. template bool StaticAssertTypeEq() { (void)internal::StaticAssertTypeEqHelper(); return true; } // Defines a test. // // The first parameter is the name of the test case, and the second // parameter is the name of the test within the test case. // // The convention is to end the test case name with "Test". For // example, a test case for the Foo class can be named FooTest. // // The user should put his test code between braces after using this // macro. Example: // // TEST(FooTest, InitializesCorrectly) { // Foo foo; // EXPECT_TRUE(foo.StatusIsOK()); // } // Note that we call GetTestTypeId() instead of GetTypeId< // ::testing::Test>() here to get the type ID of testing::Test. 
This // is to work around a suspected linker bug when using Google Test as // a framework on Mac OS X. The bug causes GetTypeId< // ::testing::Test>() to return different values depending on whether // the call is from the Google Test framework itself or from user test // code. GetTestTypeId() is guaranteed to always return the same // value, as it always calls GetTypeId<>() from the Google Test // framework. #define GTEST_TEST(test_case_name, test_name)\ GTEST_TEST_(test_case_name, test_name, \ ::testing::Test, ::testing::internal::GetTestTypeId()) // Define this macro to 1 to omit the definition of TEST(), which // is a generic name and clashes with some other libraries. #if !GTEST_DONT_DEFINE_TEST # define TEST(test_case_name, test_name) GTEST_TEST(test_case_name, test_name) #endif // Defines a test that uses a test fixture. // // The first parameter is the name of the test fixture class, which // also doubles as the test case name. The second parameter is the // name of the test within the test case. // // A test fixture class must be declared earlier. The user should put // his test code between braces after using this macro. Example: // // class FooTest : public testing::Test { // protected: // virtual void SetUp() { b_.AddElement(3); } // // Foo a_; // Foo b_; // }; // // TEST_F(FooTest, InitializesCorrectly) { // EXPECT_TRUE(a_.StatusIsOK()); // } // // TEST_F(FooTest, ReturnsElementCountCorrectly) { // EXPECT_EQ(0, a_.size()); // EXPECT_EQ(1, b_.size()); // } #define TEST_F(test_fixture, test_name)\ GTEST_TEST_(test_fixture, test_name, test_fixture, \ ::testing::internal::GetTypeId()) } // namespace testing // Use this function in main() to run all tests. It returns 0 if all // tests are successful, or 1 otherwise. // // RUN_ALL_TESTS() should be invoked after the command line has been // parsed by InitGoogleTest(). // // This function was formerly a macro; thus, it is in the global // namespace and has an all-caps name. int RUN_ALL_TESTS() GTEST_MUST_USE_RESULT_; inline int RUN_ALL_TESTS() { return ::testing::UnitTest::GetInstance()->Run(); } #endif // GTEST_INCLUDE_GTEST_GTEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest_pred_impl.h000066400000000000000000000354511346026536000244060ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // This file is AUTOMATICALLY GENERATED on 10/31/2011 by command // 'gen_gtest_pred_impl.py 5'. DO NOT EDIT BY HAND! // // Implements a family of generic predicate assertion macros. #ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ #define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ // Makes sure this header is not included before gtest.h. #ifndef GTEST_INCLUDE_GTEST_GTEST_H_ # error Do not include gtest_pred_impl.h directly. Include gtest.h instead. #endif // GTEST_INCLUDE_GTEST_GTEST_H_ // This header implements a family of generic predicate assertion // macros: // // ASSERT_PRED_FORMAT1(pred_format, v1) // ASSERT_PRED_FORMAT2(pred_format, v1, v2) // ... // // where pred_format is a function or functor that takes n (in the // case of ASSERT_PRED_FORMATn) values and their source expression // text, and returns a testing::AssertionResult. See the definition // of ASSERT_EQ in gtest.h for an example. // // If you don't care about formatting, you can use the more // restrictive version: // // ASSERT_PRED1(pred, v1) // ASSERT_PRED2(pred, v1, v2) // ... // // where pred is an n-ary function or functor that returns bool, // and the values v1, v2, ..., must support the << operator for // streaming to std::ostream. // // We also define the EXPECT_* variations. // // For now we only support predicates whose arity is at most 5. // Please email googletestframework@googlegroups.com if you need // support for higher arities. // GTEST_ASSERT_ is the basic statement to which all of the assertions // in this file reduce. Don't use this in your code. #define GTEST_ASSERT_(expression, on_failure) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (const ::testing::AssertionResult gtest_ar = (expression)) \ ; \ else \ on_failure(gtest_ar.failure_message()) // Helper function for implementing {EXPECT|ASSERT}_PRED1. Don't use // this in your code. template AssertionResult AssertPred1Helper(const char* pred_text, const char* e1, Pred pred, const T1& v1) { if (pred(v1)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT1. // Don't use this in your code. #define GTEST_PRED_FORMAT1_(pred_format, v1, on_failure)\ GTEST_ASSERT_(pred_format(#v1, v1), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED1. Don't use // this in your code. #define GTEST_PRED1_(pred, v1, on_failure)\ GTEST_ASSERT_(::testing::AssertPred1Helper(#pred, \ #v1, \ pred, \ v1), on_failure) // Unary predicate assertion macros. #define EXPECT_PRED_FORMAT1(pred_format, v1) \ GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED1(pred, v1) \ GTEST_PRED1_(pred, v1, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT1(pred_format, v1) \ GTEST_PRED_FORMAT1_(pred_format, v1, GTEST_FATAL_FAILURE_) #define ASSERT_PRED1(pred, v1) \ GTEST_PRED1_(pred, v1, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED2. 
Don't use // this in your code. template AssertionResult AssertPred2Helper(const char* pred_text, const char* e1, const char* e2, Pred pred, const T1& v1, const T2& v2) { if (pred(v1, v2)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT2. // Don't use this in your code. #define GTEST_PRED_FORMAT2_(pred_format, v1, v2, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, v1, v2), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED2. Don't use // this in your code. #define GTEST_PRED2_(pred, v1, v2, on_failure)\ GTEST_ASSERT_(::testing::AssertPred2Helper(#pred, \ #v1, \ #v2, \ pred, \ v1, \ v2), on_failure) // Binary predicate assertion macros. #define EXPECT_PRED_FORMAT2(pred_format, v1, v2) \ GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED2(pred, v1, v2) \ GTEST_PRED2_(pred, v1, v2, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT2(pred_format, v1, v2) \ GTEST_PRED_FORMAT2_(pred_format, v1, v2, GTEST_FATAL_FAILURE_) #define ASSERT_PRED2(pred, v1, v2) \ GTEST_PRED2_(pred, v1, v2, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED3. Don't use // this in your code. template AssertionResult AssertPred3Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, Pred pred, const T1& v1, const T2& v2, const T3& v3) { if (pred(v1, v2, v3)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT3. // Don't use this in your code. #define GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, v1, v2, v3), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED3. Don't use // this in your code. #define GTEST_PRED3_(pred, v1, v2, v3, on_failure)\ GTEST_ASSERT_(::testing::AssertPred3Helper(#pred, \ #v1, \ #v2, \ #v3, \ pred, \ v1, \ v2, \ v3), on_failure) // Ternary predicate assertion macros. #define EXPECT_PRED_FORMAT3(pred_format, v1, v2, v3) \ GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED3(pred, v1, v2, v3) \ GTEST_PRED3_(pred, v1, v2, v3, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT3(pred_format, v1, v2, v3) \ GTEST_PRED_FORMAT3_(pred_format, v1, v2, v3, GTEST_FATAL_FAILURE_) #define ASSERT_PRED3(pred, v1, v2, v3) \ GTEST_PRED3_(pred, v1, v2, v3, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED4. Don't use // this in your code. template AssertionResult AssertPred4Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, const char* e4, Pred pred, const T1& v1, const T2& v2, const T3& v3, const T4& v4) { if (pred(v1, v2, v3, v4)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ", " << e4 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3 << "\n" << e4 << " evaluates to " << v4; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT4. // Don't use this in your code. 
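//
// Illustrative usage sketch (hypothetical names, not from the original
// sources) for the predicate assertions implemented in this file: a plain
// bool predicate works with {EXPECT|ASSERT}_PREDn, while a
// predicate-formatter returning ::testing::AssertionResult works with
// {EXPECT|ASSERT}_PRED_FORMATn and can build a richer failure message.
//
//   #include "gtest/gtest.h"
//
//   bool IsEven(int n) { return n % 2 == 0; }
//   bool InRange(int x, int upper) { return 0 <= x && x < upper; }
//
//   ::testing::AssertionResult IsDivisible(const char* m_expr,
//                                          const char* n_expr,
//                                          int m, int n) {
//     if (n != 0 && m % n == 0) return ::testing::AssertionSuccess();
//     return ::testing::AssertionFailure()
//         << m_expr << " is not divisible by " << n_expr
//         << " (" << m << " vs. " << n << ")";
//   }
//
//   TEST(PredicateSketchTest, Predicates) {
//     EXPECT_PRED1(IsEven, 4);
//     EXPECT_PRED2(InRange, 3, 10);
//     EXPECT_PRED_FORMAT2(IsDivisible, 12, 4);
//   }
//
// The 4-ary and 5-ary variants below follow the same pattern.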
#define GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, v1, v2, v3, v4), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED4. Don't use // this in your code. #define GTEST_PRED4_(pred, v1, v2, v3, v4, on_failure)\ GTEST_ASSERT_(::testing::AssertPred4Helper(#pred, \ #v1, \ #v2, \ #v3, \ #v4, \ pred, \ v1, \ v2, \ v3, \ v4), on_failure) // 4-ary predicate assertion macros. #define EXPECT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \ GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED4(pred, v1, v2, v3, v4) \ GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT4(pred_format, v1, v2, v3, v4) \ GTEST_PRED_FORMAT4_(pred_format, v1, v2, v3, v4, GTEST_FATAL_FAILURE_) #define ASSERT_PRED4(pred, v1, v2, v3, v4) \ GTEST_PRED4_(pred, v1, v2, v3, v4, GTEST_FATAL_FAILURE_) // Helper function for implementing {EXPECT|ASSERT}_PRED5. Don't use // this in your code. template AssertionResult AssertPred5Helper(const char* pred_text, const char* e1, const char* e2, const char* e3, const char* e4, const char* e5, Pred pred, const T1& v1, const T2& v2, const T3& v3, const T4& v4, const T5& v5) { if (pred(v1, v2, v3, v4, v5)) return AssertionSuccess(); return AssertionFailure() << pred_text << "(" << e1 << ", " << e2 << ", " << e3 << ", " << e4 << ", " << e5 << ") evaluates to false, where" << "\n" << e1 << " evaluates to " << v1 << "\n" << e2 << " evaluates to " << v2 << "\n" << e3 << " evaluates to " << v3 << "\n" << e4 << " evaluates to " << v4 << "\n" << e5 << " evaluates to " << v5; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT5. // Don't use this in your code. #define GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, on_failure)\ GTEST_ASSERT_(pred_format(#v1, #v2, #v3, #v4, #v5, v1, v2, v3, v4, v5), \ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED5. Don't use // this in your code. #define GTEST_PRED5_(pred, v1, v2, v3, v4, v5, on_failure)\ GTEST_ASSERT_(::testing::AssertPred5Helper(#pred, \ #v1, \ #v2, \ #v3, \ #v4, \ #v5, \ pred, \ v1, \ v2, \ v3, \ v4, \ v5), on_failure) // 5-ary predicate assertion macros. #define EXPECT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \ GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED5(pred, v1, v2, v3, v4, v5) \ GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT5(pred_format, v1, v2, v3, v4, v5) \ GTEST_PRED_FORMAT5_(pred_format, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_) #define ASSERT_PRED5(pred, v1, v2, v3, v4, v5) \ GTEST_PRED5_(pred, v1, v2, v3, v4, v5, GTEST_FATAL_FAILURE_) #endif // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/gtest_prod.h000066400000000000000000000044241346026536000233730ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Google C++ Testing Framework definitions useful in production code. #ifndef GTEST_INCLUDE_GTEST_GTEST_PROD_H_ #define GTEST_INCLUDE_GTEST_GTEST_PROD_H_ // When you need to test the private or protected members of a class, // use the FRIEND_TEST macro to declare your tests as friends of the // class. For example: // // class MyClass { // private: // void MyMethod(); // FRIEND_TEST(MyClassTest, MyMethod); // }; // // class MyClassTest : public testing::Test { // // ... // }; // // TEST_F(MyClassTest, MyMethod) { // // Can call MyClass::MyMethod() here. // } #define FRIEND_TEST(test_case_name, test_name)\ friend class test_case_name##_##test_name##_Test #endif // GTEST_INCLUDE_GTEST_GTEST_PROD_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/000077500000000000000000000000001346026536000226605ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-death-test-internal.h000066400000000000000000000321651346026536000300400ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
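//
// Illustrative sketch (hypothetical names, not from the original sources)
// for the FRIEND_TEST macro defined in gtest_prod.h above. The test case
// and test names in FRIEND_TEST must match the TEST/TEST_F definition
// exactly, and the test generally has to live in the same namespace as the
// befriending class for the friendship to take effect.
//
//   // production code (queue.h)
//   #include "gtest/gtest_prod.h"
//   class Queue {
//    public:
//     Queue() : size_(0) {}
//     void Enqueue() { ++size_; }
//    private:
//     FRIEND_TEST(QueueTest, SizeStartsAtZero);
//     int size_;
//   };
//
//   // test code (queue_test.cc)
//   #include "gtest/gtest.h"
//   TEST(QueueTest, SizeStartsAtZero) {
//     Queue q;
//     EXPECT_EQ(0, q.size_);  // private member reachable via FRIEND_TEST
//   }
//
// The death-test support header that begins above defines the internal
// machinery behind EXPECT_DEATH and friends; its details follow.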
// // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file defines internal utilities needed for implementing // death tests. They are subject to change without notice. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ #include "gtest/internal/gtest-internal.h" #include namespace testing { namespace internal { GTEST_DECLARE_string_(internal_run_death_test); // Names of the flags (needed for parsing Google Test flags). const char kDeathTestStyleFlag[] = "death_test_style"; const char kDeathTestUseFork[] = "death_test_use_fork"; const char kInternalRunDeathTestFlag[] = "internal_run_death_test"; #if GTEST_HAS_DEATH_TEST // DeathTest is a class that hides much of the complexity of the // GTEST_DEATH_TEST_ macro. It is abstract; its static Create method // returns a concrete class that depends on the prevailing death test // style, as defined by the --gtest_death_test_style and/or // --gtest_internal_run_death_test flags. // In describing the results of death tests, these terms are used with // the corresponding definitions: // // exit status: The integer exit information in the format specified // by wait(2) // exit code: The integer code passed to exit(3), _exit(2), or // returned from main() class GTEST_API_ DeathTest { public: // Create returns false if there was an error determining the // appropriate action to take for the current death test; for example, // if the gtest_death_test_style flag is set to an invalid value. // The LastMessage method will return a more detailed message in that // case. Otherwise, the DeathTest pointer pointed to by the "test" // argument is set. If the death test should be skipped, the pointer // is set to NULL; otherwise, it is set to the address of a new concrete // DeathTest object that controls the execution of the current test. static bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test); DeathTest(); virtual ~DeathTest() { } // A helper class that aborts a death test when it's deleted. class ReturnSentinel { public: explicit ReturnSentinel(DeathTest* test) : test_(test) { } ~ReturnSentinel() { test_->Abort(TEST_ENCOUNTERED_RETURN_STATEMENT); } private: DeathTest* const test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ReturnSentinel); } GTEST_ATTRIBUTE_UNUSED_; // An enumeration of possible roles that may be taken when a death // test is encountered. EXECUTE means that the death test logic should // be executed immediately. OVERSEE means that the program should prepare // the appropriate environment for a child process to execute the death // test, then wait for it to complete. enum TestRole { OVERSEE_TEST, EXECUTE_TEST }; // An enumeration of the three reasons that a test might be aborted. enum AbortReason { TEST_ENCOUNTERED_RETURN_STATEMENT, TEST_THREW_EXCEPTION, TEST_DID_NOT_DIE }; // Assumes one of the above roles. virtual TestRole AssumeRole() = 0; // Waits for the death test to finish and returns its status. virtual int Wait() = 0; // Returns true if the death test passed; that is, the test process // exited during the test, its exit status matches a user-supplied // predicate, and its stderr output matches a user-supplied regular // expression. // The user-supplied predicate may be a macro expression rather // than a function pointer or functor, or else Wait and Passed could // be combined. 
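//
// For reference, a sketch (hypothetical names, not from the original
// sources) of the public macros this machinery backs. The second argument
// of EXPECT_EXIT is exactly the user-supplied exit-status predicate
// mentioned above; ::testing::ExitedWithCode (and, on POSIX systems,
// ::testing::KilledBySignal) are the predicates shipped with Google Test.
//
//   #include <cstdio>
//   #include <cstdlib>
//   #include "gtest/gtest.h"
//
//   void Crash() { std::fprintf(stderr, "fatal: out of memory\n"); std::abort(); }
//   void Quit()  { std::fprintf(stderr, "done\n"); std::exit(2); }
//
//   TEST(DeathSketchTest, ExitPredicates) {
//     EXPECT_DEATH(Crash(), "out of memory");                     // any abnormal death
//     EXPECT_EXIT(Quit(), ::testing::ExitedWithCode(2), "done");  // specific exit code
//   }
//
// Passed() below receives the result of applying such a predicate to the
// exit status returned by Wait().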
virtual bool Passed(bool exit_status_ok) = 0; // Signals that the death test did not die as expected. virtual void Abort(AbortReason reason) = 0; // Returns a human-readable outcome message regarding the outcome of // the last death test. static const char* LastMessage(); static void set_last_death_test_message(const std::string& message); private: // A string containing a description of the outcome of the last death test. static std::string last_death_test_message_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DeathTest); }; // Factory interface for death tests. May be mocked out for testing. class DeathTestFactory { public: virtual ~DeathTestFactory() { } virtual bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) = 0; }; // A concrete DeathTestFactory implementation for normal use. class DefaultDeathTestFactory : public DeathTestFactory { public: virtual bool Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test); }; // Returns true if exit_status describes a process that was terminated // by a signal, or exited normally with a nonzero exit code. GTEST_API_ bool ExitedUnsuccessfully(int exit_status); // Traps C++ exceptions escaping statement and reports them as test // failures. Note that trapping SEH exceptions is not implemented here. # if GTEST_HAS_EXCEPTIONS # define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } catch (const ::std::exception& gtest_exception) { \ fprintf(\ stderr, \ "\n%s: Caught std::exception-derived exception escaping the " \ "death test statement. Exception message: %s\n", \ ::testing::internal::FormatFileLocation(__FILE__, __LINE__).c_str(), \ gtest_exception.what()); \ fflush(stderr); \ death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \ } catch (...) { \ death_test->Abort(::testing::internal::DeathTest::TEST_THREW_EXCEPTION); \ } # else # define GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, death_test) \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) # endif // This macro is for implementing ASSERT_DEATH*, EXPECT_DEATH*, // ASSERT_EXIT*, and EXPECT_EXIT*. # define GTEST_DEATH_TEST_(statement, predicate, regex, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ const ::testing::internal::RE& gtest_regex = (regex); \ ::testing::internal::DeathTest* gtest_dt; \ if (!::testing::internal::DeathTest::Create(#statement, >est_regex, \ __FILE__, __LINE__, >est_dt)) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \ } \ if (gtest_dt != NULL) { \ ::testing::internal::scoped_ptr< ::testing::internal::DeathTest> \ gtest_dt_ptr(gtest_dt); \ switch (gtest_dt->AssumeRole()) { \ case ::testing::internal::DeathTest::OVERSEE_TEST: \ if (!gtest_dt->Passed(predicate(gtest_dt->Wait()))) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__); \ } \ break; \ case ::testing::internal::DeathTest::EXECUTE_TEST: { \ ::testing::internal::DeathTest::ReturnSentinel \ gtest_sentinel(gtest_dt); \ GTEST_EXECUTE_DEATH_TEST_STATEMENT_(statement, gtest_dt); \ gtest_dt->Abort(::testing::internal::DeathTest::TEST_DID_NOT_DIE); \ break; \ } \ default: \ break; \ } \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_, __LINE__): \ fail(::testing::internal::DeathTest::LastMessage()) // The symbol "fail" here expands to something into which a message // can be streamed. // This macro is for implementing ASSERT/EXPECT_DEBUG_DEATH when compiled in // NDEBUG mode. 
In this case we need the statements to be executed, the regex is // ignored, and the macro must accept a streamed message even though the message // is never printed. # define GTEST_EXECUTE_STATEMENT_(statement, regex) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } else \ ::testing::Message() // A class representing the parsed contents of the // --gtest_internal_run_death_test flag, as it existed when // RUN_ALL_TESTS was called. class InternalRunDeathTestFlag { public: InternalRunDeathTestFlag(const std::string& a_file, int a_line, int an_index, int a_write_fd) : file_(a_file), line_(a_line), index_(an_index), write_fd_(a_write_fd) {} ~InternalRunDeathTestFlag() { if (write_fd_ >= 0) posix::Close(write_fd_); } const std::string& file() const { return file_; } int line() const { return line_; } int index() const { return index_; } int write_fd() const { return write_fd_; } private: std::string file_; int line_; int index_; int write_fd_; GTEST_DISALLOW_COPY_AND_ASSIGN_(InternalRunDeathTestFlag); }; // Returns a newly created InternalRunDeathTestFlag object with fields // initialized from the GTEST_FLAG(internal_run_death_test) flag if // the flag is specified; otherwise returns NULL. InternalRunDeathTestFlag* ParseInternalRunDeathTestFlag(); #else // GTEST_HAS_DEATH_TEST // This macro is used for implementing macros such as // EXPECT_DEATH_IF_SUPPORTED and ASSERT_DEATH_IF_SUPPORTED on systems where // death tests are not supported. Those macros must compile on such systems // iff EXPECT_DEATH and ASSERT_DEATH compile with the same parameters on // systems that support death tests. This allows one to write such a macro // on a system that does not support death tests and be sure that it will // compile on a death-test supporting system. // // Parameters: // statement - A statement that a macro such as EXPECT_DEATH would test // for program termination. This macro has to make sure this // statement is compiled but not executed, to ensure that // EXPECT_DEATH_IF_SUPPORTED compiles with a certain // parameter iff EXPECT_DEATH compiles with it. // regex - A regex that a macro such as EXPECT_DEATH would use to test // the output of statement. This parameter has to be // compiled but not evaluated by this macro, to ensure that // this macro only accepts expressions that a macro such as // EXPECT_DEATH would accept. // terminator - Must be an empty statement for EXPECT_DEATH_IF_SUPPORTED // and a return statement for ASSERT_DEATH_IF_SUPPORTED. // This ensures that ASSERT_DEATH_IF_SUPPORTED will not // compile inside functions where ASSERT_DEATH doesn't // compile. // // The branch that has an always false condition is used to ensure that // statement and regex are compiled (and thus syntactically correct) but // never executed. The unreachable code macro protects the terminator // statement from generating an 'unreachable code' warning in case // statement unconditionally returns or throws. The Message constructor at // the end allows the syntax of streaming additional messages into the // macro, for compilational compatibility with EXPECT_DEATH/ASSERT_DEATH. 
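//
// Illustrative sketch (hypothetical names, not from the original sources)
// of the portable wrappers these macros implement: EXPECT_DEBUG_DEATH only
// checks for death in debug builds (in NDEBUG builds the statement simply
// runs), and EXPECT_DEATH_IF_SUPPORTED degrades to a warning on platforms
// without death-test support. Both accept a streamed message, as noted
// above.
//
//   #include <cassert>
//   #include <cstdlib>
//   #include "gtest/gtest.h"
//
//   int DieInDebugOr12(int* state) {
//     assert(*state == 0 && "state must be zero");  // aborts in debug builds
//     return 12;                                    // reached in NDEBUG builds
//   }
//
//   TEST(DeathSketchTest, PortableVariants) {
//     int state = 1;
//     EXPECT_DEBUG_DEATH(DieInDebugOr12(&state), "state must be zero")
//         << "assert() should fire in debug builds";
//     EXPECT_DEATH_IF_SUPPORTED(std::abort(), "");
//   }
//
// The fallback macro used when death tests are unavailable follows.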
# define GTEST_UNSUPPORTED_DEATH_TEST_(statement, regex, terminator) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ GTEST_LOG_(WARNING) \ << "Death tests are not supported on this platform.\n" \ << "Statement '" #statement "' cannot be verified."; \ } else if (::testing::internal::AlwaysFalse()) { \ ::testing::internal::RE::PartialMatch(".*", (regex)); \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ terminator; \ } else \ ::testing::Message() #endif // GTEST_HAS_DEATH_TEST } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_DEATH_TEST_INTERNAL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-filepath.h000066400000000000000000000226031346026536000257540ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: keith.ray@gmail.com (Keith Ray) // // Google Test filepath utilities // // This header file declares classes and functions used internally by // Google Test. They are subject to change without notice. // // This file is #included in . // Do not include this header file separately! #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ #include "gtest/internal/gtest-string.h" namespace testing { namespace internal { // FilePath - a class for file and directory pathname manipulation which // handles platform-specific conventions (like the pathname separator). // Used for helper functions for naming files in a directory for xml output. // Except for Set methods, all methods are const or static, which provides an // "immutable value object" -- useful for peace of mind. // A FilePath with a value ending in a path separator ("like/this/") represents // a directory, otherwise it is assumed to represent a file. In either case, // it may or may not represent an actual file or directory in the file system. // Names are NOT checked for syntax correctness -- no checking for illegal // characters, malformed paths, etc. 
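//
// Hypothetical usage sketch (not from the original sources) of this
// internal class. Being in testing::internal it is subject to change and
// is not meant for user code; the sketch only shows how the methods
// declared below fit together.
//
//   #include "gtest/gtest.h"
//   using ::testing::internal::FilePath;
//
//   void Sketch() {
//     const FilePath dir = FilePath::GetCurrentDir();
//     const FilePath report =
//         FilePath::MakeFileName(dir, FilePath("results"), 0, "xml");
//     // e.g. ".../results.xml"; with a nonzero number, ".../results_1.xml"
//     if (report.IsAbsolutePath() &&
//         report.RemoveFileName().DirectoryExists()) {
//       // safe to write the report here
//     }
//   }
//
// The class declaration follows.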
class GTEST_API_ FilePath { public: FilePath() : pathname_("") { } FilePath(const FilePath& rhs) : pathname_(rhs.pathname_) { } explicit FilePath(const std::string& pathname) : pathname_(pathname) { Normalize(); } FilePath& operator=(const FilePath& rhs) { Set(rhs); return *this; } void Set(const FilePath& rhs) { pathname_ = rhs.pathname_; } const std::string& string() const { return pathname_; } const char* c_str() const { return pathname_.c_str(); } // Returns the current working directory, or "" if unsuccessful. static FilePath GetCurrentDir(); // Given directory = "dir", base_name = "test", number = 0, // extension = "xml", returns "dir/test.xml". If number is greater // than zero (e.g., 12), returns "dir/test_12.xml". // On Windows platform, uses \ as the separator rather than /. static FilePath MakeFileName(const FilePath& directory, const FilePath& base_name, int number, const char* extension); // Given directory = "dir", relative_path = "test.xml", // returns "dir/test.xml". // On Windows, uses \ as the separator rather than /. static FilePath ConcatPaths(const FilePath& directory, const FilePath& relative_path); // Returns a pathname for a file that does not currently exist. The pathname // will be directory/base_name.extension or // directory/base_name_.extension if directory/base_name.extension // already exists. The number will be incremented until a pathname is found // that does not already exist. // Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'. // There could be a race condition if two or more processes are calling this // function at the same time -- they could both pick the same filename. static FilePath GenerateUniqueFileName(const FilePath& directory, const FilePath& base_name, const char* extension); // Returns true iff the path is "". bool IsEmpty() const { return pathname_.empty(); } // If input name has a trailing separator character, removes it and returns // the name, otherwise return the name string unmodified. // On Windows platform, uses \ as the separator, other platforms use /. FilePath RemoveTrailingPathSeparator() const; // Returns a copy of the FilePath with the directory part removed. // Example: FilePath("path/to/file").RemoveDirectoryName() returns // FilePath("file"). If there is no directory part ("just_a_file"), it returns // the FilePath unmodified. If there is no file part ("just_a_dir/") it // returns an empty FilePath (""). // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath RemoveDirectoryName() const; // RemoveFileName returns the directory path with the filename removed. // Example: FilePath("path/to/file").RemoveFileName() returns "path/to/". // If the FilePath is "a_file" or "/a_file", RemoveFileName returns // FilePath("./") or, on Windows, FilePath(".\\"). If the filepath does // not have a file, like "just/a/dir/", it returns the FilePath unmodified. // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath RemoveFileName() const; // Returns a copy of the FilePath with the case-insensitive extension removed. // Example: FilePath("dir/file.exe").RemoveExtension("EXE") returns // FilePath("dir/file"). If a case-insensitive extension is not // found, returns a copy of the original FilePath. FilePath RemoveExtension(const char* extension) const; // Creates directories so that path exists. Returns true if successful or if // the directories already exist; returns false if unable to create // directories for any reason. 
Will also return false if the FilePath does // not represent a directory (that is, it doesn't end with a path separator). bool CreateDirectoriesRecursively() const; // Create the directory so that path exists. Returns true if successful or // if the directory already exists; returns false if unable to create the // directory for any reason, including if the parent directory does not // exist. Not named "CreateDirectory" because that's a macro on Windows. bool CreateFolder() const; // Returns true if FilePath describes something in the file-system, // either a file, directory, or whatever, and that something exists. bool FileOrDirectoryExists() const; // Returns true if pathname describes a directory in the file-system // that exists. bool DirectoryExists() const; // Returns true if FilePath ends with a path separator, which indicates that // it is intended to represent a directory. Returns false otherwise. // This does NOT check that a directory (or file) actually exists. bool IsDirectory() const; // Returns true if pathname describes a root directory. (Windows has one // root directory per disk drive.) bool IsRootDirectory() const; // Returns true if pathname describes an absolute path. bool IsAbsolutePath() const; private: // Replaces multiple consecutive separators with a single separator. // For example, "bar///foo" becomes "bar/foo". Does not eliminate other // redundancies that might be in a pathname involving "." or "..". // // A pathname with multiple consecutive separators may occur either through // user error or as a result of some scripts or APIs that generate a pathname // with a trailing separator. On other platforms the same API or script // may NOT generate a pathname with a trailing "/". Then elsewhere that // pathname may have another "/" and pathname components added to it, // without checking for the separator already being there. // The script language and operating system may allow paths like "foo//bar" // but some of the functions in FilePath will not handle that correctly. In // particular, RemoveTrailingPathSeparator() only removes one separator, and // it is called in CreateDirectoriesRecursively() assuming that it will change // a pathname from directory syntax (trailing separator) to filename syntax. // // On Windows this method also replaces the alternate path separator '/' with // the primary path separator '\\', so that for example "bar\\/\\foo" becomes // "bar\\foo". void Normalize(); // Returns a pointer to the last occurence of a valid path separator in // the FilePath. On Windows, for example, both '/' and '\' are valid path // separators. Returns NULL if no path separator was found. const char* FindLastPathSeparator() const; std::string pathname_; }; // class FilePath } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_FILEPATH_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-internal.h000066400000000000000000001261611346026536000260000ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file declares functions and macros used internally by // Google Test. They are subject to change without notice. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ #include "gtest/internal/gtest-port.h" #if GTEST_OS_LINUX # include # include # include # include #endif // GTEST_OS_LINUX #if GTEST_HAS_EXCEPTIONS # include #endif #include #include #include #include #include #include #include "gtest/gtest-message.h" #include "gtest/internal/gtest-string.h" #include "gtest/internal/gtest-filepath.h" #include "gtest/internal/gtest-type-util.h" // Due to C++ preprocessor weirdness, we need double indirection to // concatenate two tokens when one of them is __LINE__. Writing // // foo ## __LINE__ // // will result in the token foo__LINE__, instead of foo followed by // the current line number. For more details, see // http://www.parashift.com/c++-faq-lite/misc-technical-issues.html#faq-39.6 #define GTEST_CONCAT_TOKEN_(foo, bar) GTEST_CONCAT_TOKEN_IMPL_(foo, bar) #define GTEST_CONCAT_TOKEN_IMPL_(foo, bar) foo ## bar class ProtocolMessage; namespace proto2 { class Message; } namespace testing { // Forward declarations. class AssertionResult; // Result of an assertion. class Message; // Represents a failure message. class Test; // Represents a test. class TestInfo; // Information about a test. class TestPartResult; // Result of a test part. class UnitTest; // A collection of test cases. template ::std::string PrintToString(const T& value); namespace internal { struct TraceInfo; // Information about a trace point. class ScopedTrace; // Implements scoped trace. class TestInfoImpl; // Opaque implementation of TestInfo class UnitTestImpl; // Opaque implementation of UnitTest // How many times InitGoogleTest() has been called. GTEST_API_ extern int g_init_gtest_count; // The text used in failure messages to indicate the start of the // stack trace. GTEST_API_ extern const char kStackTraceMarker[]; // Two overloaded helpers for checking at compile time whether an // expression is a null pointer literal (i.e. NULL or any 0-valued // compile-time integral constant). 
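//
// A small sketch (editorial, not from the original sources) of why the
// double indirection in GTEST_CONCAT_TOKEN_ above is needed when one
// operand is __LINE__:
//
//   #define DIRECT_CONCAT(foo, bar) foo ## bar
//
//   // On line 42, DIRECT_CONCAT(gtest_trace_, __LINE__) yields the token
//   // gtest_trace___LINE__, because ## suppresses macro expansion of its
//   // operands. GTEST_CONCAT_TOKEN_(gtest_trace_, __LINE__) first expands
//   // __LINE__ while forwarding to the helper macro and only then pastes,
//   // giving gtest_trace_42.
//
// The description of the null-pointer-literal helpers continues below.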
Their return values have // different sizes, so we can use sizeof() to test which version is // picked by the compiler. These helpers have no implementations, as // we only need their signatures. // // Given IsNullLiteralHelper(x), the compiler will pick the first // version if x can be implicitly converted to Secret*, and pick the // second version otherwise. Since Secret is a secret and incomplete // type, the only expression a user can write that has type Secret* is // a null pointer literal. Therefore, we know that x is a null // pointer literal if and only if the first version is picked by the // compiler. char IsNullLiteralHelper(Secret* p); char (&IsNullLiteralHelper(...))[2]; // NOLINT // A compile-time bool constant that is true if and only if x is a // null pointer literal (i.e. NULL or any 0-valued compile-time // integral constant). #ifdef GTEST_ELLIPSIS_NEEDS_POD_ // We lose support for NULL detection where the compiler doesn't like // passing non-POD classes through ellipsis (...). # define GTEST_IS_NULL_LITERAL_(x) false #else # define GTEST_IS_NULL_LITERAL_(x) \ (sizeof(::testing::internal::IsNullLiteralHelper(x)) == 1) #endif // GTEST_ELLIPSIS_NEEDS_POD_ // Appends the user-supplied message to the Google-Test-generated message. GTEST_API_ std::string AppendUserMessage( const std::string& gtest_msg, const Message& user_msg); #if GTEST_HAS_EXCEPTIONS // This exception is thrown by (and only by) a failed Google Test // assertion when GTEST_FLAG(throw_on_failure) is true (if exceptions // are enabled). We derive it from std::runtime_error, which is for // errors presumably detectable only at run time. Since // std::runtime_error inherits from std::exception, many testing // frameworks know how to extract and print the message inside it. class GTEST_API_ GoogleTestFailureException : public ::std::runtime_error { public: explicit GoogleTestFailureException(const TestPartResult& failure); }; #endif // GTEST_HAS_EXCEPTIONS // A helper class for creating scoped traces in user programs. class GTEST_API_ ScopedTrace { public: // The c'tor pushes the given source file location and message onto // a trace stack maintained by Google Test. ScopedTrace(const char* file, int line, const Message& message); // The d'tor pops the info pushed by the c'tor. // // Note that the d'tor is not virtual in order to be efficient. // Don't inherit from ScopedTrace! ~ScopedTrace(); private: GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedTrace); } GTEST_ATTRIBUTE_UNUSED_; // A ScopedTrace object does its job in its // c'tor and d'tor. Therefore it doesn't // need to be used otherwise. // Constructs and returns the message for an equality assertion // (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure. // // The first four parameters are the expressions used in the assertion // and their values, as strings. For example, for ASSERT_EQ(foo, bar) // where foo is 5 and bar is 6, we have: // // expected_expression: "foo" // actual_expression: "bar" // expected_value: "5" // actual_value: "6" // // The ignoring_case parameter is true iff the assertion is a // *_STRCASEEQ*. When it's true, the string " (ignoring case)" will // be inserted into the message. GTEST_API_ AssertionResult EqFailure(const char* expected_expression, const char* actual_expression, const std::string& expected_value, const std::string& actual_value, bool ignoring_case); // Constructs a failure message for Boolean assertions such as EXPECT_TRUE. 
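//
// Illustrative sketch (hypothetical names, not from the original sources)
// of the SCOPED_TRACE macro built on the ScopedTrace class above: every
// failure raised inside the helper is annotated with the call site, which
// makes loop-driven helpers much easier to debug.
//
//   #include "gtest/gtest.h"
//
//   void CheckPositive(int n) { ASSERT_GT(n, 0); }
//
//   TEST(TraceSketchTest, AnnotatesFailures) {
//     for (int i = 1; i <= 3; ++i) {
//       SCOPED_TRACE(::testing::Message() << "iteration " << i);
//       EXPECT_NO_FATAL_FAILURE(CheckPositive(i));
//     }
//   }
//
// The helper that formats Boolean assertion failures is declared next.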
GTEST_API_ std::string GetBoolAssertionFailureMessage( const AssertionResult& assertion_result, const char* expression_text, const char* actual_predicate_value, const char* expected_predicate_value); // This template class represents an IEEE floating-point number // (either single-precision or double-precision, depending on the // template parameters). // // The purpose of this class is to do more sophisticated number // comparison. (Due to round-off error, etc, it's very unlikely that // two floating-points will be equal exactly. Hence a naive // comparison by the == operation often doesn't work.) // // Format of IEEE floating-point: // // The most-significant bit being the leftmost, an IEEE // floating-point looks like // // sign_bit exponent_bits fraction_bits // // Here, sign_bit is a single bit that designates the sign of the // number. // // For float, there are 8 exponent bits and 23 fraction bits. // // For double, there are 11 exponent bits and 52 fraction bits. // // More details can be found at // http://en.wikipedia.org/wiki/IEEE_floating-point_standard. // // Template parameter: // // RawType: the raw floating-point type (either float or double) template class FloatingPoint { public: // Defines the unsigned integer type that has the same size as the // floating point number. typedef typename TypeWithSize::UInt Bits; // Constants. // # of bits in a number. static const size_t kBitCount = 8*sizeof(RawType); // # of fraction bits in a number. static const size_t kFractionBitCount = std::numeric_limits::digits - 1; // # of exponent bits in a number. static const size_t kExponentBitCount = kBitCount - 1 - kFractionBitCount; // The mask for the sign bit. static const Bits kSignBitMask = static_cast(1) << (kBitCount - 1); // The mask for the fraction bits. static const Bits kFractionBitMask = ~static_cast(0) >> (kExponentBitCount + 1); // The mask for the exponent bits. static const Bits kExponentBitMask = ~(kSignBitMask | kFractionBitMask); // How many ULP's (Units in the Last Place) we want to tolerate when // comparing two numbers. The larger the value, the more error we // allow. A 0 value means that two numbers must be exactly the same // to be considered equal. // // The maximum error of a single floating-point operation is 0.5 // units in the last place. On Intel CPU's, all floating-point // calculations are done with 80-bit precision, while double has 64 // bits. Therefore, 4 should be enough for ordinary use. // // See the following article for more details on ULP: // http://randomascii.wordpress.com/2012/02/25/comparing-floating-point-numbers-2012-edition/ static const size_t kMaxUlps = 4; // Constructs a FloatingPoint from a raw floating-point number. // // On an Intel CPU, passing a non-normalized NAN (Not a Number) // around may change its bits, although the new value is guaranteed // to be also a NAN. Therefore, don't expect this constructor to // preserve the bits in x when x is a NAN. explicit FloatingPoint(const RawType& x) { u_.value_ = x; } // Static methods // Reinterprets a bit pattern as a floating-point number. // // This function is needed to test the AlmostEquals() method. static RawType ReinterpretBits(const Bits bits) { FloatingPoint fp(0); fp.u_.bits_ = bits; return fp.u_.value_; } // Returns the floating-point number that represent positive infinity. static RawType Infinity() { return ReinterpretBits(kExponentBitMask); } // Returns the maximum representable finite floating-point number. 
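//
// Worked example (editorial, not from the original sources) of what the
// default 4-ULP tolerance above means in practice for doubles:
//
//   0.1 + 0.2 evaluates to 0.3000000000000000444... as a double, while the
//   literal 0.3 is stored as 0.2999999999999999889...; the two are adjacent
//   representable doubles, i.e. 1 ULP apart, so EXPECT_DOUBLE_EQ(0.3, 0.1 + 0.2)
//   passes. By contrast, an absolute gap of 1e-10 near 1.0 is roughly
//   1e-10 / 2.2e-16, about 450000 ULPs, so EXPECT_DOUBLE_EQ(1.0, 1.0 + 1e-10)
//   fails; EXPECT_NEAR with an explicit bound is the right tool there.
//
// Max(), described just above, follows.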
static RawType Max(); // Non-static methods // Returns the bits that represents this number. const Bits &bits() const { return u_.bits_; } // Returns the exponent bits of this number. Bits exponent_bits() const { return kExponentBitMask & u_.bits_; } // Returns the fraction bits of this number. Bits fraction_bits() const { return kFractionBitMask & u_.bits_; } // Returns the sign bit of this number. Bits sign_bit() const { return kSignBitMask & u_.bits_; } // Returns true iff this is NAN (not a number). bool is_nan() const { // It's a NAN if the exponent bits are all ones and the fraction // bits are not entirely zeros. return (exponent_bits() == kExponentBitMask) && (fraction_bits() != 0); } // Returns true iff this number is at most kMaxUlps ULP's away from // rhs. In particular, this function: // // - returns false if either number is (or both are) NAN. // - treats really large numbers as almost equal to infinity. // - thinks +0.0 and -0.0 are 0 DLP's apart. bool AlmostEquals(const FloatingPoint& rhs) const { // The IEEE standard says that any comparison operation involving // a NAN must return false. if (is_nan() || rhs.is_nan()) return false; return DistanceBetweenSignAndMagnitudeNumbers(u_.bits_, rhs.u_.bits_) <= kMaxUlps; } private: // The data type used to store the actual floating-point number. union FloatingPointUnion { RawType value_; // The raw floating-point number. Bits bits_; // The bits that represent the number. }; // Converts an integer from the sign-and-magnitude representation to // the biased representation. More precisely, let N be 2 to the // power of (kBitCount - 1), an integer x is represented by the // unsigned number x + N. // // For instance, // // -N + 1 (the most negative number representable using // sign-and-magnitude) is represented by 1; // 0 is represented by N; and // N - 1 (the biggest number representable using // sign-and-magnitude) is represented by 2N - 1. // // Read http://en.wikipedia.org/wiki/Signed_number_representations // for more details on signed number representations. static Bits SignAndMagnitudeToBiased(const Bits &sam) { if (kSignBitMask & sam) { // sam represents a negative number. return ~sam + 1; } else { // sam represents a positive number. return kSignBitMask | sam; } } // Given two numbers in the sign-and-magnitude representation, // returns the distance between them as an unsigned number. static Bits DistanceBetweenSignAndMagnitudeNumbers(const Bits &sam1, const Bits &sam2) { const Bits biased1 = SignAndMagnitudeToBiased(sam1); const Bits biased2 = SignAndMagnitudeToBiased(sam2); return (biased1 >= biased2) ? (biased1 - biased2) : (biased2 - biased1); } FloatingPointUnion u_; }; // We cannot use std::numeric_limits::max() as it clashes with the max() // macro defined by . template <> inline float FloatingPoint::Max() { return FLT_MAX; } template <> inline double FloatingPoint::Max() { return DBL_MAX; } // Typedefs the instances of the FloatingPoint template class that we // care to use. typedef FloatingPoint Float; typedef FloatingPoint Double; // In order to catch the mistake of putting tests that use different // test fixture classes in the same test case, we need to assign // unique IDs to fixture classes and compare them. The TypeId type is // used to hold such IDs. The user should treat TypeId as an opaque // type: the only operation allowed on TypeId values is to compare // them for equality using the == operator. typedef const void* TypeId; template class TypeIdHelper { public: // dummy_ must not have a const type. 
Otherwise an overly eager // compiler (e.g. MSVC 7.1 & 8.0) may try to merge // TypeIdHelper::dummy_ for different Ts as an "optimization". static bool dummy_; }; template bool TypeIdHelper::dummy_ = false; // GetTypeId() returns the ID of type T. Different values will be // returned for different types. Calling the function twice with the // same type argument is guaranteed to return the same ID. template TypeId GetTypeId() { // The compiler is required to allocate a different // TypeIdHelper::dummy_ variable for each T used to instantiate // the template. Therefore, the address of dummy_ is guaranteed to // be unique. return &(TypeIdHelper::dummy_); } // Returns the type ID of ::testing::Test. Always call this instead // of GetTypeId< ::testing::Test>() to get the type ID of // ::testing::Test, as the latter may give the wrong result due to a // suspected linker bug when compiling Google Test as a Mac OS X // framework. GTEST_API_ TypeId GetTestTypeId(); // Defines the abstract factory interface that creates instances // of a Test object. class TestFactoryBase { public: virtual ~TestFactoryBase() {} // Creates a test instance to run. The instance is both created and destroyed // within TestInfoImpl::Run() virtual Test* CreateTest() = 0; protected: TestFactoryBase() {} private: GTEST_DISALLOW_COPY_AND_ASSIGN_(TestFactoryBase); }; // This class provides implementation of TeastFactoryBase interface. // It is used in TEST and TEST_F macros. template class TestFactoryImpl : public TestFactoryBase { public: virtual Test* CreateTest() { return new TestClass; } }; #if GTEST_OS_WINDOWS // Predicate-formatters for implementing the HRESULT checking macros // {ASSERT|EXPECT}_HRESULT_{SUCCEEDED|FAILED} // We pass a long instead of HRESULT to avoid causing an // include dependency for the HRESULT type. GTEST_API_ AssertionResult IsHRESULTSuccess(const char* expr, long hr); // NOLINT GTEST_API_ AssertionResult IsHRESULTFailure(const char* expr, long hr); // NOLINT #endif // GTEST_OS_WINDOWS // Types of SetUpTestCase() and TearDownTestCase() functions. typedef void (*SetUpTestCaseFunc)(); typedef void (*TearDownTestCaseFunc)(); // Creates a new TestInfo object and registers it with Google Test; // returns the created object. // // Arguments: // // test_case_name: name of the test case // name: name of the test // type_param the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // value_param text representation of the test's value parameter, // or NULL if this is not a type-parameterized test. // fixture_class_id: ID of the test fixture class // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // factory: pointer to the factory that creates a test object. // The newly created TestInfo instance will assume // ownership of the factory object. GTEST_API_ TestInfo* MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, TypeId fixture_class_id, SetUpTestCaseFunc set_up_tc, TearDownTestCaseFunc tear_down_tc, TestFactoryBase* factory); // If *pstr starts with the given prefix, modifies *pstr to be right // past the prefix and returns true; otherwise leaves *pstr unchanged // and returns false. None of pstr, *pstr, and prefix can be NULL. GTEST_API_ bool SkipPrefix(const char* prefix, const char** pstr); #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // State of the definition of a type-parameterized test case. 
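//
// Illustrative sketch (hypothetical names, not from the original sources)
// of the per-test-case hooks whose function types are declared above. The
// TEST_F macro registers each test through MakeAndRegisterTestInfo, passing
// the fixture's SetUpTestCase/TearDownTestCase as the set_up_tc/tear_down_tc
// arguments, so they run once per test case rather than once per test.
//
//   #include "gtest/gtest.h"
//
//   class DatabaseTest : public ::testing::Test {
//    protected:
//     static void SetUpTestCase()    { connection_count_ = 1; }
//     static void TearDownTestCase() { connection_count_ = 0; }
//     virtual void SetUp()           { queries_ = 0; }
//     static int connection_count_;
//     int queries_;
//   };
//   int DatabaseTest::connection_count_ = 0;
//
//   TEST_F(DatabaseTest, StartsWithNoQueries) {
//     EXPECT_EQ(1, connection_count_);
//     EXPECT_EQ(0, queries_);
//   }
//
//   int main(int argc, char** argv) {
//     ::testing::InitGoogleTest(&argc, argv);
//     return RUN_ALL_TESTS();
//   }
//
// The bookkeeping class for type-parameterized test cases follows.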
class GTEST_API_ TypedTestCasePState { public: TypedTestCasePState() : registered_(false) {} // Adds the given test name to defined_test_names_ and return true // if the test case hasn't been registered; otherwise aborts the // program. bool AddTestName(const char* file, int line, const char* case_name, const char* test_name) { if (registered_) { fprintf(stderr, "%s Test %s must be defined before " "REGISTER_TYPED_TEST_CASE_P(%s, ...).\n", FormatFileLocation(file, line).c_str(), test_name, case_name); fflush(stderr); posix::Abort(); } defined_test_names_.insert(test_name); return true; } // Verifies that registered_tests match the test names in // defined_test_names_; returns registered_tests if successful, or // aborts the program otherwise. const char* VerifyRegisteredTestNames( const char* file, int line, const char* registered_tests); private: bool registered_; ::std::set defined_test_names_; }; // Skips to the first non-space char after the first comma in 'str'; // returns NULL if no comma is found in 'str'. inline const char* SkipComma(const char* str) { const char* comma = strchr(str, ','); if (comma == NULL) { return NULL; } while (IsSpace(*(++comma))) {} return comma; } // Returns the prefix of 'str' before the first comma in it; returns // the entire string if it contains no comma. inline std::string GetPrefixUntilComma(const char* str) { const char* comma = strchr(str, ','); return comma == NULL ? str : std::string(str, comma); } // TypeParameterizedTest::Register() // registers a list of type-parameterized tests with Google Test. The // return value is insignificant - we just need to return something // such that we can call this function in a namespace scope. // // Implementation note: The GTEST_TEMPLATE_ macro declares a template // template parameter. It's defined in gtest-type-util.h. template class TypeParameterizedTest { public: // 'index' is the index of the test in the type list 'Types' // specified in INSTANTIATE_TYPED_TEST_CASE_P(Prefix, TestCase, // Types). Valid values for 'index' are [0, N - 1] where N is the // length of Types. static bool Register(const char* prefix, const char* case_name, const char* test_names, int index) { typedef typename Types::Head Type; typedef Fixture FixtureClass; typedef typename GTEST_BIND_(TestSel, Type) TestClass; // First, registers the first type-parameterized test in the type // list. MakeAndRegisterTestInfo( (std::string(prefix) + (prefix[0] == '\0' ? "" : "/") + case_name + "/" + StreamableToString(index)).c_str(), GetPrefixUntilComma(test_names).c_str(), GetTypeName().c_str(), NULL, // No value parameter. GetTypeId(), TestClass::SetUpTestCase, TestClass::TearDownTestCase, new TestFactoryImpl); // Next, recurses (at compile time) with the tail of the type list. return TypeParameterizedTest ::Register(prefix, case_name, test_names, index + 1); } }; // The base case for the compile time recursion. template class TypeParameterizedTest { public: static bool Register(const char* /*prefix*/, const char* /*case_name*/, const char* /*test_names*/, int /*index*/) { return true; } }; // TypeParameterizedTestCase::Register() // registers *all combinations* of 'Tests' and 'Types' with Google // Test. The return value is insignificant - we just need to return // something such that we can call this function in a namespace scope. 
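
// A small worked example of the two comma helpers above, assuming a
// hypothetical name list as produced by REGISTER_TYPED_TEST_CASE_P
// (which stringizes its arguments):
//
//   const char* names = "DefaultConstructor, CopyConstructor, Assignment";
//
//   GetPrefixUntilComma(names);  // -> std::string("DefaultConstructor")
//   SkipComma(names);            // -> points at "CopyConstructor, Assignment"
//   SkipComma("Assignment");     // -> NULL (no comma left)
//
// TypeParameterizedTestCase::Register() below peels off one test name per
// recursion step this way, while TypeParameterizedTest::Register() above
// reuses the first name in the list for every type it registers.
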
template class TypeParameterizedTestCase { public: static bool Register(const char* prefix, const char* case_name, const char* test_names) { typedef typename Tests::Head Head; // First, register the first test in 'Test' for each type in 'Types'. TypeParameterizedTest::Register( prefix, case_name, test_names, 0); // Next, recurses (at compile time) with the tail of the test list. return TypeParameterizedTestCase ::Register(prefix, case_name, SkipComma(test_names)); } }; // The base case for the compile time recursion. template class TypeParameterizedTestCase { public: static bool Register(const char* /*prefix*/, const char* /*case_name*/, const char* /*test_names*/) { return true; } }; #endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in // the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't. GTEST_API_ std::string GetCurrentOsStackTraceExceptTop( UnitTest* unit_test, int skip_count); // Helpers for suppressing warnings on unreachable code or constant // condition. // Always returns true. GTEST_API_ bool AlwaysTrue(); // Always returns false. inline bool AlwaysFalse() { return !AlwaysTrue(); } // Helper for suppressing false warning from Clang on a const char* // variable declared in a conditional expression always being NULL in // the else branch. struct GTEST_API_ ConstCharPtr { ConstCharPtr(const char* str) : value(str) {} operator bool() const { return true; } const char* value; }; // A simple Linear Congruential Generator for generating random // numbers with a uniform distribution. Unlike rand() and srand(), it // doesn't use global state (and therefore can't interfere with user // code). Unlike rand_r(), it's portable. An LCG isn't very random, // but it's good enough for our purposes. class GTEST_API_ Random { public: static const UInt32 kMaxRange = 1u << 31; explicit Random(UInt32 seed) : state_(seed) {} void Reseed(UInt32 seed) { state_ = seed; } // Generates a random number from [0, range). Crashes if 'range' is // 0 or greater than kMaxRange. UInt32 Generate(UInt32 range); private: UInt32 state_; GTEST_DISALLOW_COPY_AND_ASSIGN_(Random); }; // Defining a variable of type CompileAssertTypesEqual will cause a // compiler error iff T1 and T2 are different types. template struct CompileAssertTypesEqual; template struct CompileAssertTypesEqual { }; // Removes the reference from a type if it is a reference type, // otherwise leaves it unchanged. This is the same as // tr1::remove_reference, which is not widely available yet. template struct RemoveReference { typedef T type; }; // NOLINT template struct RemoveReference { typedef T type; }; // NOLINT // A handy wrapper around RemoveReference that works when the argument // T depends on template parameters. #define GTEST_REMOVE_REFERENCE_(T) \ typename ::testing::internal::RemoveReference::type // Removes const from a type if it is a const type, otherwise leaves // it unchanged. This is the same as tr1::remove_const, which is not // widely available yet. 
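
// What the RemoveReference trait above yields for a few representative types
// (a minimal sketch; ValueOf is a hypothetical wrapper, not part of gtest):
//
//   RemoveReference<int>::type         // -> int
//   RemoveReference<int&>::type        // -> int
//   RemoveReference<const int&>::type  // -> const int  (the const is kept;
//                                      //    the RemoveConst trait defined
//                                      //    next strips it)
//
//   // GTEST_REMOVE_REFERENCE_(T) is the same computation spelled with
//   // 'typename' so it also works when T depends on template parameters:
//   template <typename T>
//   struct ValueOf { typedef GTEST_REMOVE_REFERENCE_(T) type; };
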
template struct RemoveConst { typedef T type; }; // NOLINT template struct RemoveConst { typedef T type; }; // NOLINT // MSVC 8.0, Sun C++, and IBM XL C++ have a bug which causes the above // definition to fail to remove the const in 'const int[3]' and 'const // char[3][4]'. The following specialization works around the bug. template struct RemoveConst { typedef typename RemoveConst::type type[N]; }; #if defined(_MSC_VER) && _MSC_VER < 1400 // This is the only specialization that allows VC++ 7.1 to remove const in // 'const int[3] and 'const int[3][4]'. However, it causes trouble with GCC // and thus needs to be conditionally compiled. template struct RemoveConst { typedef typename RemoveConst::type type[N]; }; #endif // A handy wrapper around RemoveConst that works when the argument // T depends on template parameters. #define GTEST_REMOVE_CONST_(T) \ typename ::testing::internal::RemoveConst::type // Turns const U&, U&, const U, and U all into U. #define GTEST_REMOVE_REFERENCE_AND_CONST_(T) \ GTEST_REMOVE_CONST_(GTEST_REMOVE_REFERENCE_(T)) // Adds reference to a type if it is not a reference type, // otherwise leaves it unchanged. This is the same as // tr1::add_reference, which is not widely available yet. template struct AddReference { typedef T& type; }; // NOLINT template struct AddReference { typedef T& type; }; // NOLINT // A handy wrapper around AddReference that works when the argument T // depends on template parameters. #define GTEST_ADD_REFERENCE_(T) \ typename ::testing::internal::AddReference::type // Adds a reference to const on top of T as necessary. For example, // it transforms // // char ==> const char& // const char ==> const char& // char& ==> const char& // const char& ==> const char& // // The argument T must depend on some template parameters. #define GTEST_REFERENCE_TO_CONST_(T) \ GTEST_ADD_REFERENCE_(const GTEST_REMOVE_REFERENCE_(T)) // ImplicitlyConvertible::value is a compile-time bool // constant that's true iff type From can be implicitly converted to // type To. template class ImplicitlyConvertible { private: // We need the following helper functions only for their types. // They have no implementations. // MakeFrom() is an expression whose type is From. We cannot simply // use From(), as the type From may not have a public default // constructor. static From MakeFrom(); // These two functions are overloaded. Given an expression // Helper(x), the compiler will pick the first version if x can be // implicitly converted to type To; otherwise it will pick the // second version. // // The first version returns a value of size 1, and the second // version returns a value of size 2. Therefore, by checking the // size of Helper(x), which can be done at compile time, we can tell // which version of Helper() is used, and hence whether x can be // implicitly converted to type To. static char Helper(To); static char (&Helper(...))[2]; // NOLINT // We have to put the 'public' section after the 'private' section, // or MSVC refuses to compile the code. public: // MSVC warns about implicitly converting from double to int for // possible loss of data, so we need to temporarily disable the // warning. #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4244) // Temporarily disables warning 4244. static const bool value = sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1; # pragma warning(pop) // Restores the warning state. 
#elif defined(__BORLANDC__) // C++Builder cannot use member overload resolution during template // instantiation. The simplest workaround is to use its C++0x type traits // functions (C++Builder 2009 and above only). static const bool value = __is_convertible(From, To); #else static const bool value = sizeof(Helper(ImplicitlyConvertible::MakeFrom())) == 1; #endif // _MSV_VER }; template const bool ImplicitlyConvertible::value; // IsAProtocolMessage::value is a compile-time bool constant that's // true iff T is type ProtocolMessage, proto2::Message, or a subclass // of those. template struct IsAProtocolMessage : public bool_constant< ImplicitlyConvertible::value || ImplicitlyConvertible::value> { }; // When the compiler sees expression IsContainerTest(0), if C is an // STL-style container class, the first overload of IsContainerTest // will be viable (since both C::iterator* and C::const_iterator* are // valid types and NULL can be implicitly converted to them). It will // be picked over the second overload as 'int' is a perfect match for // the type of argument 0. If C::iterator or C::const_iterator is not // a valid type, the first overload is not viable, and the second // overload will be picked. Therefore, we can determine whether C is // a container class by checking the type of IsContainerTest(0). // The value of the expression is insignificant. // // Note that we look for both C::iterator and C::const_iterator. The // reason is that C++ injects the name of a class as a member of the // class itself (e.g. you can refer to class iterator as either // 'iterator' or 'iterator::iterator'). If we look for C::iterator // only, for example, we would mistakenly think that a class named // iterator is an STL container. // // Also note that the simpler approach of overloading // IsContainerTest(typename C::const_iterator*) and // IsContainerTest(...) doesn't work with Visual Age C++ and Sun C++. typedef int IsContainer; template IsContainer IsContainerTest(int /* dummy */, typename C::iterator* /* it */ = NULL, typename C::const_iterator* /* const_it */ = NULL) { return 0; } typedef char IsNotContainer; template IsNotContainer IsContainerTest(long /* dummy */) { return '\0'; } // EnableIf::type is void when 'Cond' is true, and // undefined when 'Cond' is false. To use SFINAE to make a function // overload only apply when a particular expression is true, add // "typename EnableIf::type* = 0" as the last parameter. template struct EnableIf; template<> struct EnableIf { typedef void type; }; // NOLINT // Utilities for native arrays. // ArrayEq() compares two k-dimensional native arrays using the // elements' operator==, where k can be any integer >= 0. When k is // 0, ArrayEq() degenerates into comparing a single pair of values. template bool ArrayEq(const T* lhs, size_t size, const U* rhs); // This generic version is used when k is 0. template inline bool ArrayEq(const T& lhs, const U& rhs) { return lhs == rhs; } // This overload is used when k >= 1. template inline bool ArrayEq(const T(&lhs)[N], const U(&rhs)[N]) { return internal::ArrayEq(lhs, N, rhs); } // This helper reduces code bloat. If we instead put its logic inside // the previous ArrayEq() function, arrays with different sizes would // lead to different copies of the template code. template bool ArrayEq(const T* lhs, size_t size, const U* rhs) { for (size_t i = 0; i != size; i++) { if (!internal::ArrayEq(lhs[i], rhs[i])) return false; } return true; } // Finds the first element in the iterator range [begin, end) that // equals elem. 
Element may be a native array type itself. template Iter ArrayAwareFind(Iter begin, Iter end, const Element& elem) { for (Iter it = begin; it != end; ++it) { if (internal::ArrayEq(*it, elem)) return it; } return end; } // CopyArray() copies a k-dimensional native array using the elements' // operator=, where k can be any integer >= 0. When k is 0, // CopyArray() degenerates into copying a single value. template void CopyArray(const T* from, size_t size, U* to); // This generic version is used when k is 0. template inline void CopyArray(const T& from, U* to) { *to = from; } // This overload is used when k >= 1. template inline void CopyArray(const T(&from)[N], U(*to)[N]) { internal::CopyArray(from, N, *to); } // This helper reduces code bloat. If we instead put its logic inside // the previous CopyArray() function, arrays with different sizes // would lead to different copies of the template code. template void CopyArray(const T* from, size_t size, U* to) { for (size_t i = 0; i != size; i++) { internal::CopyArray(from[i], to + i); } } // The relation between an NativeArray object (see below) and the // native array it represents. enum RelationToSource { kReference, // The NativeArray references the native array. kCopy // The NativeArray makes a copy of the native array and // owns the copy. }; // Adapts a native array to a read-only STL-style container. Instead // of the complete STL container concept, this adaptor only implements // members useful for Google Mock's container matchers. New members // should be added as needed. To simplify the implementation, we only // support Element being a raw type (i.e. having no top-level const or // reference modifier). It's the client's responsibility to satisfy // this requirement. Element can be an array type itself (hence // multi-dimensional arrays are supported). template class NativeArray { public: // STL-style container typedefs. typedef Element value_type; typedef Element* iterator; typedef const Element* const_iterator; // Constructs from a native array. NativeArray(const Element* array, size_t count, RelationToSource relation) { Init(array, count, relation); } // Copy constructor. NativeArray(const NativeArray& rhs) { Init(rhs.array_, rhs.size_, rhs.relation_to_source_); } ~NativeArray() { // Ensures that the user doesn't instantiate NativeArray with a // const or reference type. static_cast(StaticAssertTypeEqHelper()); if (relation_to_source_ == kCopy) delete[] array_; } // STL-style container methods. size_t size() const { return size_; } const_iterator begin() const { return array_; } const_iterator end() const { return array_ + size_; } bool operator==(const NativeArray& rhs) const { return size() == rhs.size() && ArrayEq(begin(), size(), rhs.begin()); } private: // Initializes this object; makes a copy of the input array if // 'relation' is kCopy. 
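
// A minimal sketch of the per-dimension recursion implemented by ArrayEq() and
// CopyArray() above, assuming hypothetical local arrays a, b and c:
//
//   int a[2][3] = { {1, 2, 3}, {4, 5, 6} };
//   int b[2][3] = { {1, 2, 3}, {4, 5, 6} };
//
//   ArrayEq(a, b);     // true: compares 2 rows, each row via ArrayEq on int[3]
//   b[1][2] = 7;
//   ArrayEq(a, b);     // false: the last element now differs
//
//   int c[2][3];
//   CopyArray(a, &c);  // copies row by row, then element by element
//
// The surrounding NativeArray class uses ArrayEq() in operator== above and
// CopyArray() in Init() below when the relation is kCopy.
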
void Init(const Element* array, size_t a_size, RelationToSource relation) { if (relation == kReference) { array_ = array; } else { Element* const copy = new Element[a_size]; CopyArray(array, a_size, copy); array_ = copy; } size_ = a_size; relation_to_source_ = relation; } const Element* array_; size_t size_; RelationToSource relation_to_source_; GTEST_DISALLOW_ASSIGN_(NativeArray); }; } // namespace internal } // namespace testing #define GTEST_MESSAGE_AT_(file, line, message, result_type) \ ::testing::internal::AssertHelper(result_type, file, line, message) \ = ::testing::Message() #define GTEST_MESSAGE_(message, result_type) \ GTEST_MESSAGE_AT_(__FILE__, __LINE__, message, result_type) #define GTEST_FATAL_FAILURE_(message) \ return GTEST_MESSAGE_(message, ::testing::TestPartResult::kFatalFailure) #define GTEST_NONFATAL_FAILURE_(message) \ GTEST_MESSAGE_(message, ::testing::TestPartResult::kNonFatalFailure) #define GTEST_SUCCESS_(message) \ GTEST_MESSAGE_(message, ::testing::TestPartResult::kSuccess) // Suppresses MSVC warnings 4072 (unreachable code) for the code following // statement if it returns or throws (or doesn't return or throw in some // situations). #define GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement) \ if (::testing::internal::AlwaysTrue()) { statement; } #define GTEST_TEST_THROW_(statement, expected_exception, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::ConstCharPtr gtest_msg = "") { \ bool gtest_caught_expected = false; \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (expected_exception const&) { \ gtest_caught_expected = true; \ } \ catch (...) { \ gtest_msg.value = \ "Expected: " #statement " throws an exception of type " \ #expected_exception ".\n Actual: it throws a different type."; \ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \ } \ if (!gtest_caught_expected) { \ gtest_msg.value = \ "Expected: " #statement " throws an exception of type " \ #expected_exception ".\n Actual: it throws nothing."; \ goto GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testthrow_, __LINE__): \ fail(gtest_msg.value) #define GTEST_TEST_NO_THROW_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (...) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testnothrow_, __LINE__): \ fail("Expected: " #statement " doesn't throw an exception.\n" \ " Actual: it throws.") #define GTEST_TEST_ANY_THROW_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ bool gtest_caught_any = false; \ try { \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ } \ catch (...) { \ gtest_caught_any = true; \ } \ if (!gtest_caught_any) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testanythrow_, __LINE__): \ fail("Expected: " #statement " throws an exception.\n" \ " Actual: it doesn't.") // Implements Boolean test assertions such as EXPECT_TRUE. expression can be // either a boolean expression or an AssertionResult. text is a textual // represenation of expression as it was passed into the EXPECT_TRUE. 
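
// A minimal sketch of why the throw-related macros above expand to a fully
// matched if/else with a goto into the else branch, assuming a hypothetical
// function Foo() and the public EXPECT_NO_THROW wrapper (defined in gtest.h in
// terms of GTEST_TEST_NO_THROW_):
//
//   if (should_check)
//     EXPECT_NO_THROW(Foo());   // expands (roughly) to
//   else                        //   "if (AlwaysTrue()) { try {...} } else
//     Foo();                    //    gtest_label_N: fail(...)"
//
// Because the macro's own if already carries an else, the user's else above
// still binds to "if (should_check)" and the whole assertion behaves as a
// single statement.  The goto is what routes a caught exception from inside
// the try block to the fail(...) call that sits after it, where a message
// streamed by the caller with << can still be attached.
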
#define GTEST_TEST_BOOLEAN_(expression, text, actual, expected, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (const ::testing::AssertionResult gtest_ar_ = \ ::testing::AssertionResult(expression)) \ ; \ else \ fail(::testing::internal::GetBoolAssertionFailureMessage(\ gtest_ar_, text, #actual, #expected).c_str()) #define GTEST_TEST_NO_FATAL_FAILURE_(statement, fail) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::AlwaysTrue()) { \ ::testing::internal::HasNewFatalFailureHelper gtest_fatal_failure_checker; \ GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_(statement); \ if (gtest_fatal_failure_checker.has_new_fatal_failure()) { \ goto GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__); \ } \ } else \ GTEST_CONCAT_TOKEN_(gtest_label_testnofatal_, __LINE__): \ fail("Expected: " #statement " doesn't generate new fatal " \ "failures in the current thread.\n" \ " Actual: it does.") // Expands to the name of the class that implements the given test. #define GTEST_TEST_CLASS_NAME_(test_case_name, test_name) \ test_case_name##_##test_name##_Test // Helper macro for defining tests. #define GTEST_TEST_(test_case_name, test_name, parent_class, parent_id)\ class GTEST_TEST_CLASS_NAME_(test_case_name, test_name) : public parent_class {\ public:\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)() {}\ private:\ virtual void TestBody();\ static ::testing::TestInfo* const test_info_ GTEST_ATTRIBUTE_UNUSED_;\ GTEST_DISALLOW_COPY_AND_ASSIGN_(\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name));\ };\ \ ::testing::TestInfo* const GTEST_TEST_CLASS_NAME_(test_case_name, test_name)\ ::test_info_ =\ ::testing::internal::MakeAndRegisterTestInfo(\ #test_case_name, #test_name, NULL, NULL, \ (parent_id), \ parent_class::SetUpTestCase, \ parent_class::TearDownTestCase, \ new ::testing::internal::TestFactoryImpl<\ GTEST_TEST_CLASS_NAME_(test_case_name, test_name)>);\ void GTEST_TEST_CLASS_NAME_(test_case_name, test_name)::TestBody() #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_INTERNAL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-linked_ptr.h000066400000000000000000000176451346026536000263250ustar00rootroot00000000000000// Copyright 2003 Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: Dan Egnor (egnor@google.com) // // A "smart" pointer type with reference tracking. Every pointer to a // particular object is kept on a circular linked list. When the last pointer // to an object is destroyed or reassigned, the object is deleted. // // Used properly, this deletes the object when the last reference goes away. // There are several caveats: // - Like all reference counting schemes, cycles lead to leaks. // - Each smart pointer is actually two pointers (8 bytes instead of 4). // - Every time a pointer is assigned, the entire list of pointers to that // object is traversed. This class is therefore NOT SUITABLE when there // will often be more than two or three pointers to a particular object. // - References are only tracked as long as linked_ptr<> objects are copied. // If a linked_ptr<> is converted to a raw pointer and back, BAD THINGS // will happen (double deletion). // // A good use of this class is storing object references in STL containers. // You can safely put linked_ptr<> in a vector<>. // Other uses may not be as good. // // Note: If you use an incomplete type with linked_ptr<>, the class // *containing* linked_ptr<> must have a constructor and destructor (even // if they do nothing!). // // Bill Gibbons suggested we use something like this. // // Thread Safety: // Unlike other linked_ptr implementations, in this implementation // a linked_ptr object is thread-safe in the sense that: // - it's safe to copy linked_ptr objects concurrently, // - it's safe to copy *from* a linked_ptr and read its underlying // raw pointer (e.g. via get()) concurrently, and // - it's safe to write to two linked_ptrs that point to the same // shared object concurrently. // TODO(wan@google.com): rename this to safe_linked_ptr to avoid // confusion with normal linked_ptr. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ #include #include #include "gtest/internal/gtest-port.h" namespace testing { namespace internal { // Protects copying of all linked_ptr objects. GTEST_API_ GTEST_DECLARE_STATIC_MUTEX_(g_linked_ptr_mutex); // This is used internally by all instances of linked_ptr<>. It needs to be // a non-template class because different types of linked_ptr<> can refer to // the same object (linked_ptr(obj) vs linked_ptr(obj)). // So, it needs to be possible for different types of linked_ptr to participate // in the same circular linked list, so we need a single class type here. // // DO NOT USE THIS CLASS DIRECTLY YOURSELF. Use linked_ptr. class linked_ptr_internal { public: // Create a new circle that includes only this instance. void join_new() { next_ = this; } // Many linked_ptr operations may change p.link_ for some linked_ptr // variable p in the same circle as this object. Therefore we need // to prevent two such operations from occurring concurrently. // // Note that different types of linked_ptr objects can coexist in a // circle (e.g. 
linked_ptr, linked_ptr, and // linked_ptr). Therefore we must use a single mutex to // protect all linked_ptr objects. This can create serious // contention in production code, but is acceptable in a testing // framework. // Join an existing circle. void join(linked_ptr_internal const* ptr) GTEST_LOCK_EXCLUDED_(g_linked_ptr_mutex) { MutexLock lock(&g_linked_ptr_mutex); linked_ptr_internal const* p = ptr; while (p->next_ != ptr) p = p->next_; p->next_ = this; next_ = ptr; } // Leave whatever circle we're part of. Returns true if we were the // last member of the circle. Once this is done, you can join() another. bool depart() GTEST_LOCK_EXCLUDED_(g_linked_ptr_mutex) { MutexLock lock(&g_linked_ptr_mutex); if (next_ == this) return true; linked_ptr_internal const* p = next_; while (p->next_ != this) p = p->next_; p->next_ = next_; return false; } private: mutable linked_ptr_internal const* next_; }; template class linked_ptr { public: typedef T element_type; // Take over ownership of a raw pointer. This should happen as soon as // possible after the object is created. explicit linked_ptr(T* ptr = NULL) { capture(ptr); } ~linked_ptr() { depart(); } // Copy an existing linked_ptr<>, adding ourselves to the list of references. template linked_ptr(linked_ptr const& ptr) { copy(&ptr); } linked_ptr(linked_ptr const& ptr) { // NOLINT assert(&ptr != this); copy(&ptr); } // Assignment releases the old value and acquires the new. template linked_ptr& operator=(linked_ptr const& ptr) { depart(); copy(&ptr); return *this; } linked_ptr& operator=(linked_ptr const& ptr) { if (&ptr != this) { depart(); copy(&ptr); } return *this; } // Smart pointer members. void reset(T* ptr = NULL) { depart(); capture(ptr); } T* get() const { return value_; } T* operator->() const { return value_; } T& operator*() const { return *value_; } bool operator==(T* p) const { return value_ == p; } bool operator!=(T* p) const { return value_ != p; } template bool operator==(linked_ptr const& ptr) const { return value_ == ptr.get(); } template bool operator!=(linked_ptr const& ptr) const { return value_ != ptr.get(); } private: template friend class linked_ptr; T* value_; linked_ptr_internal link_; void depart() { if (link_.depart()) delete value_; } void capture(T* ptr) { value_ = ptr; link_.join_new(); } template void copy(linked_ptr const* ptr) { value_ = ptr->get(); if (value_) link_.join(&ptr->link_); else link_.join_new(); } }; template inline bool operator==(T* ptr, const linked_ptr& x) { return ptr == x.get(); } template inline bool operator!=(T* ptr, const linked_ptr& x) { return ptr != x.get(); } // A function to convert T* into linked_ptr // Doing e.g. make_linked_ptr(new FooBarBaz(arg)) is a shorter notation // for linked_ptr >(new FooBarBaz(arg)) template linked_ptr make_linked_ptr(T* ptr) { return linked_ptr(ptr); } } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_LINKED_PTR_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-param-util-generated.h000066400000000000000000005672601346026536000302040ustar00rootroot00000000000000// This file was GENERATED by command: // pump.py gtest-param-util-generated.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // Type and function utilities for implementing parameterized tests. // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently Google Test supports at most 50 arguments in Values, // and at most 10 arguments in Combine. Please contact // googletestframework@googlegroups.com if you need more. // Please note that the number of arguments to Combine is limited // by the maximum arity of the implementation of tr1::tuple which is // currently set at 10. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #include "gtest/internal/gtest-param-util.h" #include "gtest/internal/gtest-port.h" #if GTEST_HAS_PARAM_TEST namespace testing { // Forward declarations of ValuesIn(), which is implemented in // include/gtest/gtest-param-test.h. template internal::ParamGenerator< typename ::testing::internal::IteratorTraits::value_type> ValuesIn(ForwardIterator begin, ForwardIterator end); template internal::ParamGenerator ValuesIn(const T (&array)[N]); template internal::ParamGenerator ValuesIn( const Container& container); namespace internal { // Used in the Values() function to provide polymorphic capabilities. template class ValueArray1 { public: explicit ValueArray1(T1 v1) : v1_(v1) {} template operator ParamGenerator() const { return ValuesIn(&v1_, &v1_ + 1); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray1& other); const T1 v1_; }; template class ValueArray2 { public: ValueArray2(T1 v1, T2 v2) : v1_(v1), v2_(v2) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
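
// A minimal sketch of how these generated ValueArrayN helpers are reached from
// user code, assuming a hypothetical fixture FooTest parameterized over int:
//
//   class FooTest : public ::testing::TestWithParam<int> {};
//
//   TEST_P(FooTest, HandlesValue) { /* GetParam() is 3, 5 or 8 */ }
//
//   // Values(3, 5, 8) returns a ValueArray3<int, int, int>; its templated
//   // conversion operator copies the values into a T array and forwards it to
//   // ValuesIn(), with T taken from the fixture's parameter type (int here):
//   INSTANTIATE_TEST_CASE_P(SmallValues, FooTest, ::testing::Values(3, 5, 8));
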
void operator=(const ValueArray2& other); const T1 v1_; const T2 v2_; }; template class ValueArray3 { public: ValueArray3(T1 v1, T2 v2, T3 v3) : v1_(v1), v2_(v2), v3_(v3) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray3& other); const T1 v1_; const T2 v2_; const T3 v3_; }; template class ValueArray4 { public: ValueArray4(T1 v1, T2 v2, T3 v3, T4 v4) : v1_(v1), v2_(v2), v3_(v3), v4_(v4) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray4& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; }; template class ValueArray5 { public: ValueArray5(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray5& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; }; template class ValueArray6 { public: ValueArray6(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray6& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; }; template class ValueArray7 { public: ValueArray7(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray7& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; }; template class ValueArray8 { public: ValueArray8(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray8& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; }; template class ValueArray9 { public: ValueArray9(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray9& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; }; template class ValueArray10 { public: ValueArray10(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray10& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; }; template class ValueArray11 { public: ValueArray11(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray11& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; }; template class ValueArray12 { public: ValueArray12(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray12& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; }; template class ValueArray13 { public: ValueArray13(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray13& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; }; template class ValueArray14 { public: ValueArray14(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray14& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; }; template class ValueArray15 { public: ValueArray15(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray15& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; }; template class ValueArray16 { public: ValueArray16(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray16& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; }; template class ValueArray17 { public: ValueArray17(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray17& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; }; template class ValueArray18 { public: ValueArray18(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray18& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; }; template class ValueArray19 { public: ValueArray19(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray19& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; }; template class ValueArray20 { public: ValueArray20(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray20& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; }; template class ValueArray21 { public: ValueArray21(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray21& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; }; template class ValueArray22 { public: ValueArray22(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray22& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; }; template class ValueArray23 { public: ValueArray23(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray23& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; }; template class ValueArray24 { public: ValueArray24(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray24& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; }; template class ValueArray25 { public: ValueArray25(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray25& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; }; template class ValueArray26 { public: ValueArray26(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray26& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; }; template class ValueArray27 { public: ValueArray27(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray27& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; }; template class ValueArray28 { public: ValueArray28(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray28& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; }; template class ValueArray29 { public: ValueArray29(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray29& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; }; template class ValueArray30 { public: ValueArray30(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_)}; return ValuesIn(array); } private: // No 
implementation - assignment is unsupported. void operator=(const ValueArray30& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; }; template class ValueArray31 { public: ValueArray31(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray31& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; }; template class ValueArray32 { public: ValueArray32(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray32& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; }; template class ValueArray33 { public: ValueArray33(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray33& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; }; template class ValueArray34 { public: ValueArray34(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray34& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; }; template class ValueArray35 { public: ValueArray35(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray35& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; }; template class ValueArray36 { public: ValueArray36(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray36& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; }; template class ValueArray37 { public: ValueArray37(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray37& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; }; template class ValueArray38 { public: ValueArray38(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray38& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; }; template class ValueArray39 { public: ValueArray39(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray39& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; }; template class ValueArray40 { public: ValueArray40(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray40& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; }; template class ValueArray41 { public: ValueArray41(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray41& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; }; template class ValueArray42 { public: ValueArray42(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray42& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; }; template class ValueArray43 { public: ValueArray43(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray43& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; }; template class ValueArray44 { public: ValueArray44(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray44& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; }; template class ValueArray45 { public: ValueArray45(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray45& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; }; template class ValueArray46 { public: ValueArray46(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray46& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; }; template class ValueArray47 { public: ValueArray47(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray47& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; }; template class ValueArray48 { public: ValueArray48(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray48& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; const T48 v48_; }; template class ValueArray49 { public: ValueArray49(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_), static_cast(v49_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray49& other); const T1 v1_; const T2 v2_; const T3 v3_; const T4 v4_; const T5 v5_; const T6 v6_; const T7 v7_; const T8 v8_; const T9 v9_; const T10 v10_; const T11 v11_; const T12 v12_; const T13 v13_; const T14 v14_; const T15 v15_; const T16 v16_; const T17 v17_; const T18 v18_; const T19 v19_; const T20 v20_; const T21 v21_; const T22 v22_; const T23 v23_; const T24 v24_; const T25 v25_; const T26 v26_; const T27 v27_; const T28 v28_; const T29 v29_; const T30 v30_; const T31 v31_; const T32 v32_; const T33 v33_; const T34 v34_; const T35 v35_; const T36 v36_; const T37 v37_; const T38 v38_; const T39 v39_; const T40 v40_; const T41 v41_; const T42 v42_; const T43 v43_; const T44 v44_; const T45 v45_; const T46 v46_; const T47 v47_; const T48 v48_; const T49 v49_; }; template class ValueArray50 { public: ValueArray50(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5, T6 v6, T7 v7, T8 v8, T9 v9, T10 v10, T11 v11, T12 v12, T13 v13, T14 v14, T15 v15, T16 v16, T17 v17, T18 v18, T19 v19, T20 v20, T21 v21, T22 v22, T23 v23, T24 v24, T25 v25, T26 v26, T27 v27, T28 v28, T29 v29, T30 v30, T31 v31, T32 v32, T33 v33, T34 v34, T35 v35, T36 v36, T37 v37, T38 v38, T39 v39, T40 v40, T41 v41, T42 v42, T43 v43, T44 v44, T45 v45, T46 v46, T47 v47, T48 v48, T49 v49, T50 v50) : v1_(v1), v2_(v2), v3_(v3), v4_(v4), v5_(v5), v6_(v6), v7_(v7), v8_(v8), v9_(v9), v10_(v10), v11_(v11), v12_(v12), v13_(v13), v14_(v14), v15_(v15), v16_(v16), v17_(v17), v18_(v18), v19_(v19), v20_(v20), v21_(v21), v22_(v22), v23_(v23), v24_(v24), v25_(v25), v26_(v26), v27_(v27), v28_(v28), v29_(v29), v30_(v30), v31_(v31), v32_(v32), v33_(v33), v34_(v34), v35_(v35), v36_(v36), v37_(v37), v38_(v38), v39_(v39), v40_(v40), v41_(v41), v42_(v42), v43_(v43), v44_(v44), v45_(v45), v46_(v46), v47_(v47), v48_(v48), v49_(v49), v50_(v50) {} template operator ParamGenerator() const { const T array[] = {static_cast(v1_), static_cast(v2_), static_cast(v3_), static_cast(v4_), static_cast(v5_), static_cast(v6_), static_cast(v7_), static_cast(v8_), static_cast(v9_), static_cast(v10_), static_cast(v11_), static_cast(v12_), static_cast(v13_), static_cast(v14_), static_cast(v15_), static_cast(v16_), static_cast(v17_), static_cast(v18_), static_cast(v19_), static_cast(v20_), static_cast(v21_), static_cast(v22_), static_cast(v23_), static_cast(v24_), static_cast(v25_), static_cast(v26_), static_cast(v27_), static_cast(v28_), static_cast(v29_), static_cast(v30_), static_cast(v31_), static_cast(v32_), static_cast(v33_), static_cast(v34_), static_cast(v35_), static_cast(v36_), static_cast(v37_), static_cast(v38_), static_cast(v39_), static_cast(v40_), static_cast(v41_), static_cast(v42_), static_cast(v43_), static_cast(v44_), static_cast(v45_), static_cast(v46_), static_cast(v47_), static_cast(v48_), static_cast(v49_), static_cast(v50_)}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. 
void operator=(const ValueArray50& other);

  const T1 v1_;
  const T2 v2_;
  const T3 v3_;
  const T4 v4_;
  const T5 v5_;
  const T6 v6_;
  const T7 v7_;
  const T8 v8_;
  const T9 v9_;
  const T10 v10_;
  const T11 v11_;
  const T12 v12_;
  const T13 v13_;
  const T14 v14_;
  const T15 v15_;
  const T16 v16_;
  const T17 v17_;
  const T18 v18_;
  const T19 v19_;
  const T20 v20_;
  const T21 v21_;
  const T22 v22_;
  const T23 v23_;
  const T24 v24_;
  const T25 v25_;
  const T26 v26_;
  const T27 v27_;
  const T28 v28_;
  const T29 v29_;
  const T30 v30_;
  const T31 v31_;
  const T32 v32_;
  const T33 v33_;
  const T34 v34_;
  const T35 v35_;
  const T36 v36_;
  const T37 v37_;
  const T38 v38_;
  const T39 v39_;
  const T40 v40_;
  const T41 v41_;
  const T42 v42_;
  const T43 v43_;
  const T44 v44_;
  const T45 v45_;
  const T46 v46_;
  const T47 v47_;
  const T48 v48_;
  const T49 v49_;
  const T50 v50_;
};

# if GTEST_HAS_COMBINE
// INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE.
//
// Generates values from the Cartesian product of values produced
// by the argument generators.
//
template <typename T1, typename T2>
class CartesianProductGenerator2
    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2> > {
 public:
  typedef ::std::tr1::tuple<T1, T2> ParamType;

  CartesianProductGenerator2(const ParamGenerator<T1>& g1,
      const ParamGenerator<T2>& g2)
      : g1_(g1), g2_(g2) {}
  virtual ~CartesianProductGenerator2() {}

  virtual ParamIteratorInterface<ParamType>* Begin() const {
    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin());
  }
  virtual ParamIteratorInterface<ParamType>* End() const {
    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end());
  }

 private:
  class Iterator : public ParamIteratorInterface<ParamType> {
   public:
    Iterator(const ParamGeneratorInterface<ParamType>* base,
      const ParamGenerator<T1>& g1,
      const typename ParamGenerator<T1>::iterator& current1,
      const ParamGenerator<T2>& g2,
      const typename ParamGenerator<T2>::iterator& current2)
        : base_(base),
          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),
          begin2_(g2.begin()), end2_(g2.end()), current2_(current2) {
      ComputeCurrentValue();
    }
    virtual ~Iterator() {}

    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {
      return base_;
    }
    // Advance should not be called on beyond-of-range iterators
    // so no component iterators must be beyond end of range, either.
    virtual void Advance() {
      assert(!AtEnd());
      ++current2_;
      if (current2_ == end2_) {
        current2_ = begin2_;
        ++current1_;
      }
      ComputeCurrentValue();
    }
    virtual ParamIteratorInterface<ParamType>* Clone() const {
      return new Iterator(*this);
    }
    virtual const ParamType* Current() const { return &current_value_; }
    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
      // Having the same base generator guarantees that the other
      // iterator is of the same type and we can downcast.
      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())
          << "The program attempted to compare iterators "
          << "from different generators." << std::endl;
      const Iterator* typed_other =
          CheckedDowncastToActualType<const Iterator>(&other);
      // We must report iterators equal if they both point beyond their
      // respective ranges. That can happen in a variety of fashions,
      // so we have to consult AtEnd().
      return (AtEnd() && typed_other->AtEnd()) ||
         (
          current1_ == typed_other->current1_ &&
          current2_ == typed_other->current2_);
    }

   private:
    Iterator(const Iterator& other)
        : base_(other.base_),
        begin1_(other.begin1_),
        end1_(other.end1_),
        current1_(other.current1_),
        begin2_(other.begin2_),
        end2_(other.end2_),
        current2_(other.current2_) {
      ComputeCurrentValue();
    }

    void ComputeCurrentValue() {
      if (!AtEnd())
        current_value_ = ParamType(*current1_, *current2_);
    }
    bool AtEnd() const {
      // We must report iterator past the end of the range when either of the
      // component iterators has reached the end of its range.
      return
          current1_ == end1_ ||
          current2_ == end2_;
    }

    // No implementation - assignment is unsupported.
    void operator=(const Iterator& other);

    const ParamGeneratorInterface<ParamType>* const base_;
    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.
    // current[i]_ is the actual traversing iterator.
    const typename ParamGenerator<T1>::iterator begin1_;
    const typename ParamGenerator<T1>::iterator end1_;
    typename ParamGenerator<T1>::iterator current1_;
    const typename ParamGenerator<T2>::iterator begin2_;
    const typename ParamGenerator<T2>::iterator end2_;
    typename ParamGenerator<T2>::iterator current2_;
    ParamType current_value_;
  };  // class CartesianProductGenerator2::Iterator

  // No implementation - assignment is unsupported.
  void operator=(const CartesianProductGenerator2& other);

  const ParamGenerator<T1> g1_;
  const ParamGenerator<T2> g2_;
};  // class CartesianProductGenerator2

template <typename T1, typename T2, typename T3>
class CartesianProductGenerator3
    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3> > {
 public:
  typedef ::std::tr1::tuple<T1, T2, T3> ParamType;

  CartesianProductGenerator3(const ParamGenerator<T1>& g1,
      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3)
      : g1_(g1), g2_(g2), g3_(g3) {}
  virtual ~CartesianProductGenerator3() {}

  virtual ParamIteratorInterface<ParamType>* Begin() const {
    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,
        g3_.begin());
  }
  virtual ParamIteratorInterface<ParamType>* End() const {
    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end());
  }

 private:
  class Iterator : public ParamIteratorInterface<ParamType> {
   public:
    Iterator(const ParamGeneratorInterface<ParamType>* base,
      const ParamGenerator<T1>& g1,
      const typename ParamGenerator<T1>::iterator& current1,
      const ParamGenerator<T2>& g2,
      const typename ParamGenerator<T2>::iterator& current2,
      const ParamGenerator<T3>& g3,
      const typename ParamGenerator<T3>::iterator& current3)
        : base_(base),
          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),
          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),
          begin3_(g3.begin()), end3_(g3.end()), current3_(current3) {
      ComputeCurrentValue();
    }
    virtual ~Iterator() {}

    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {
      return base_;
    }
    // Advance should not be called on beyond-of-range iterators
    // so no component iterators must be beyond end of range, either.
    virtual void Advance() {
      assert(!AtEnd());
      ++current3_;
      if (current3_ == end3_) {
        current3_ = begin3_;
        ++current2_;
      }
      if (current2_ == end2_) {
        current2_ = begin2_;
        ++current1_;
      }
      ComputeCurrentValue();
    }
    virtual ParamIteratorInterface<ParamType>* Clone() const {
      return new Iterator(*this);
    }
    virtual const ParamType* Current() const { return &current_value_; }
    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
      // Having the same base generator guarantees that the other
      // iterator is of the same type and we can downcast.
      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())
          << "The program attempted to compare iterators "
          << "from different generators." << std::endl;
      const Iterator* typed_other =
          CheckedDowncastToActualType<const Iterator>(&other);
      // We must report iterators equal if they both point beyond their
      // respective ranges. That can happen in a variety of fashions,
      // so we have to consult AtEnd().
      return (AtEnd() && typed_other->AtEnd()) ||
         (
          current1_ == typed_other->current1_ &&
          current2_ == typed_other->current2_ &&
          current3_ == typed_other->current3_);
    }

   private:
    Iterator(const Iterator& other)
        : base_(other.base_),
        begin1_(other.begin1_),
        end1_(other.end1_),
        current1_(other.current1_),
        begin2_(other.begin2_),
        end2_(other.end2_),
        current2_(other.current2_),
        begin3_(other.begin3_),
        end3_(other.end3_),
        current3_(other.current3_) {
      ComputeCurrentValue();
    }

    void ComputeCurrentValue() {
      if (!AtEnd())
        current_value_ = ParamType(*current1_, *current2_, *current3_);
    }
    bool AtEnd() const {
      // We must report iterator past the end of the range when either of the
      // component iterators has reached the end of its range.
      return
          current1_ == end1_ ||
          current2_ == end2_ ||
          current3_ == end3_;
    }

    // No implementation - assignment is unsupported.
    void operator=(const Iterator& other);

    const ParamGeneratorInterface<ParamType>* const base_;
    // begin[i]_ and end[i]_ define the i-th range that Iterator traverses.
    // current[i]_ is the actual traversing iterator.
    const typename ParamGenerator<T1>::iterator begin1_;
    const typename ParamGenerator<T1>::iterator end1_;
    typename ParamGenerator<T1>::iterator current1_;
    const typename ParamGenerator<T2>::iterator begin2_;
    const typename ParamGenerator<T2>::iterator end2_;
    typename ParamGenerator<T2>::iterator current2_;
    const typename ParamGenerator<T3>::iterator begin3_;
    const typename ParamGenerator<T3>::iterator end3_;
    typename ParamGenerator<T3>::iterator current3_;
    ParamType current_value_;
  };  // class CartesianProductGenerator3::Iterator

  // No implementation - assignment is unsupported.
  void operator=(const CartesianProductGenerator3& other);

  const ParamGenerator<T1> g1_;
  const ParamGenerator<T2> g2_;
  const ParamGenerator<T3> g3_;
};  // class CartesianProductGenerator3

template <typename T1, typename T2, typename T3, typename T4>
class CartesianProductGenerator4
    : public ParamGeneratorInterface< ::std::tr1::tuple<T1, T2, T3, T4> > {
 public:
  typedef ::std::tr1::tuple<T1, T2, T3, T4> ParamType;

  CartesianProductGenerator4(const ParamGenerator<T1>& g1,
      const ParamGenerator<T2>& g2, const ParamGenerator<T3>& g3,
      const ParamGenerator<T4>& g4)
      : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {}
  virtual ~CartesianProductGenerator4() {}

  virtual ParamIteratorInterface<ParamType>* Begin() const {
    return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_,
        g3_.begin(), g4_, g4_.begin());
  }
  virtual ParamIteratorInterface<ParamType>* End() const {
    return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(),
        g4_, g4_.end());
  }

 private:
  class Iterator : public ParamIteratorInterface<ParamType> {
   public:
    Iterator(const ParamGeneratorInterface<ParamType>* base,
      const ParamGenerator<T1>& g1,
      const typename ParamGenerator<T1>::iterator& current1,
      const ParamGenerator<T2>& g2,
      const typename ParamGenerator<T2>::iterator& current2,
      const ParamGenerator<T3>& g3,
      const typename ParamGenerator<T3>::iterator& current3,
      const ParamGenerator<T4>& g4,
      const typename ParamGenerator<T4>::iterator& current4)
        : base_(base),
          begin1_(g1.begin()), end1_(g1.end()), current1_(current1),
          begin2_(g2.begin()), end2_(g2.end()), current2_(current2),
          begin3_(g3.begin()), end3_(g3.end()), current3_(current3),
          begin4_(g4.begin()), end4_(g4.end()), current4_(current4) {
      ComputeCurrentValue();
    }
    virtual ~Iterator() {}

    virtual const ParamGeneratorInterface<ParamType>* BaseGenerator() const {
      return base_;
    }
    // Advance should not be called on beyond-of-range iterators
    // so no component iterators must be beyond end of range, either.
    virtual void Advance() {
      assert(!AtEnd());
      ++current4_;
      if (current4_ == end4_) {
        current4_ = begin4_;
        ++current3_;
      }
      if (current3_ == end3_) {
        current3_ = begin3_;
        ++current2_;
      }
      if (current2_ == end2_) {
        current2_ = begin2_;
        ++current1_;
      }
      ComputeCurrentValue();
    }
    virtual ParamIteratorInterface<ParamType>* Clone() const {
      return new Iterator(*this);
    }
    virtual const ParamType* Current() const { return &current_value_; }
    virtual bool Equals(const ParamIteratorInterface<ParamType>& other) const {
      // Having the same base generator guarantees that the other
      // iterator is of the same type and we can downcast.
      GTEST_CHECK_(BaseGenerator() == other.BaseGenerator())
          << "The program attempted to compare iterators "
          << "from different generators." << std::endl;
      const Iterator* typed_other =
          CheckedDowncastToActualType<const Iterator>(&other);
      // We must report iterators equal if they both point beyond their
      // respective ranges. That can happen in a variety of fashions,
      // so we have to consult AtEnd().
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; ParamType current_value_; }; // class CartesianProductGenerator4::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator4& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; }; // class CartesianProductGenerator4 template class CartesianProductGenerator5 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator5(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {} virtual ~CartesianProductGenerator5() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current5_; if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; ParamType current_value_; }; // class CartesianProductGenerator5::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator5& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; }; // class CartesianProductGenerator5 template class CartesianProductGenerator6 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator6(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {} virtual ~CartesianProductGenerator6() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current6_; if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; ParamType current_value_; }; // class CartesianProductGenerator6::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator6& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; }; // class CartesianProductGenerator6 template class CartesianProductGenerator7 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator7(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {} virtual ~CartesianProductGenerator7() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current7_; if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. 
That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; ParamType current_value_; }; // class CartesianProductGenerator7::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator7& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; }; // class CartesianProductGenerator7 template class CartesianProductGenerator8 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator8(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8) {} virtual ~CartesianProductGenerator8() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current8_; if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. 
GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; ParamType current_value_; }; // class CartesianProductGenerator8::Iterator // No implementation - assignment is unsupported. 
void operator=(const CartesianProductGenerator8& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; }; // class CartesianProductGenerator8 template class CartesianProductGenerator9 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator9(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8, const ParamGenerator& g9) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9) {} virtual ~CartesianProductGenerator9() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end(), g9_, g9_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8, const ParamGenerator& g9, const typename ParamGenerator::iterator& current9) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8), begin9_(g9.begin()), end9_(g9.end()), current9_(current9) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. 
virtual void Advance() { assert(!AtEnd()); ++current9_; if (current9_ == end9_) { current9_ = begin9_; ++current8_; } if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_ && current9_ == typed_other->current9_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_), begin9_(other.begin9_), end9_(other.end9_), current9_(other.current9_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_, *current9_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_ || current9_ == end9_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. 
const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; const typename ParamGenerator::iterator begin9_; const typename ParamGenerator::iterator end9_; typename ParamGenerator::iterator current9_; ParamType current_value_; }; // class CartesianProductGenerator9::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator9& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; const ParamGenerator g9_; }; // class CartesianProductGenerator9 template class CartesianProductGenerator10 : public ParamGeneratorInterface< ::std::tr1::tuple > { public: typedef ::std::tr1::tuple ParamType; CartesianProductGenerator10(const ParamGenerator& g1, const ParamGenerator& g2, const ParamGenerator& g3, const ParamGenerator& g4, const ParamGenerator& g5, const ParamGenerator& g6, const ParamGenerator& g7, const ParamGenerator& g8, const ParamGenerator& g9, const ParamGenerator& g10) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9), g10_(g10) {} virtual ~CartesianProductGenerator10() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, g1_, g1_.begin(), g2_, g2_.begin(), g3_, g3_.begin(), g4_, g4_.begin(), g5_, g5_.begin(), g6_, g6_.begin(), g7_, g7_.begin(), g8_, g8_.begin(), g9_, g9_.begin(), g10_, g10_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, g1_, g1_.end(), g2_, g2_.end(), g3_, g3_.end(), g4_, g4_.end(), g5_, g5_.end(), g6_, g6_.end(), g7_, g7_.end(), g8_, g8_.end(), g9_, g9_.end(), g10_, g10_.end()); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, const ParamGenerator& g1, const typename ParamGenerator::iterator& current1, const ParamGenerator& g2, const typename ParamGenerator::iterator& current2, const ParamGenerator& g3, const typename ParamGenerator::iterator& current3, const ParamGenerator& g4, const typename ParamGenerator::iterator& current4, const ParamGenerator& g5, const typename ParamGenerator::iterator& current5, const ParamGenerator& g6, const typename ParamGenerator::iterator& current6, const ParamGenerator& g7, const typename ParamGenerator::iterator& current7, const ParamGenerator& g8, const typename ParamGenerator::iterator& current8, const ParamGenerator& g9, const 
typename ParamGenerator::iterator& current9, const ParamGenerator& g10, const typename ParamGenerator::iterator& current10) : base_(base), begin1_(g1.begin()), end1_(g1.end()), current1_(current1), begin2_(g2.begin()), end2_(g2.end()), current2_(current2), begin3_(g3.begin()), end3_(g3.end()), current3_(current3), begin4_(g4.begin()), end4_(g4.end()), current4_(current4), begin5_(g5.begin()), end5_(g5.end()), current5_(current5), begin6_(g6.begin()), end6_(g6.end()), current6_(current6), begin7_(g7.begin()), end7_(g7.end()), current7_(current7), begin8_(g8.begin()), end8_(g8.end()), current8_(current8), begin9_(g9.begin()), end9_(g9.end()), current9_(current9), begin10_(g10.begin()), end10_(g10.end()), current10_(current10) { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current10_; if (current10_ == end10_) { current10_ = begin10_; ++current9_; } if (current9_ == end9_) { current9_ = begin9_; ++current8_; } if (current8_ == end8_) { current8_ = begin8_; ++current7_; } if (current7_ == end7_) { current7_ = begin7_; ++current6_; } if (current6_ == end6_) { current6_ = begin6_; ++current5_; } if (current5_ == end5_) { current5_ = begin5_; ++current4_; } if (current4_ == end4_) { current4_ = begin4_; ++current3_; } if (current3_ == end3_) { current3_ = begin3_; ++current2_; } if (current2_ == end2_) { current2_ = begin2_; ++current1_; } ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ( current1_ == typed_other->current1_ && current2_ == typed_other->current2_ && current3_ == typed_other->current3_ && current4_ == typed_other->current4_ && current5_ == typed_other->current5_ && current6_ == typed_other->current6_ && current7_ == typed_other->current7_ && current8_ == typed_other->current8_ && current9_ == typed_other->current9_ && current10_ == typed_other->current10_); } private: Iterator(const Iterator& other) : base_(other.base_), begin1_(other.begin1_), end1_(other.end1_), current1_(other.current1_), begin2_(other.begin2_), end2_(other.end2_), current2_(other.current2_), begin3_(other.begin3_), end3_(other.end3_), current3_(other.current3_), begin4_(other.begin4_), end4_(other.end4_), current4_(other.current4_), begin5_(other.begin5_), end5_(other.end5_), current5_(other.current5_), begin6_(other.begin6_), end6_(other.end6_), current6_(other.current6_), begin7_(other.begin7_), end7_(other.end7_), current7_(other.current7_), begin8_(other.begin8_), end8_(other.end8_), current8_(other.current8_), begin9_(other.begin9_), end9_(other.end9_), current9_(other.current9_), begin10_(other.begin10_), end10_(other.end10_), current10_(other.current10_) { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType(*current1_, *current2_, *current3_, *current4_, *current5_, *current6_, *current7_, *current8_, *current9_, *current10_); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return current1_ == end1_ || current2_ == end2_ || current3_ == end3_ || current4_ == end4_ || current5_ == end5_ || current6_ == end6_ || current7_ == end7_ || current8_ == end8_ || current9_ == end9_ || current10_ == end10_; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. 
const typename ParamGenerator::iterator begin1_; const typename ParamGenerator::iterator end1_; typename ParamGenerator::iterator current1_; const typename ParamGenerator::iterator begin2_; const typename ParamGenerator::iterator end2_; typename ParamGenerator::iterator current2_; const typename ParamGenerator::iterator begin3_; const typename ParamGenerator::iterator end3_; typename ParamGenerator::iterator current3_; const typename ParamGenerator::iterator begin4_; const typename ParamGenerator::iterator end4_; typename ParamGenerator::iterator current4_; const typename ParamGenerator::iterator begin5_; const typename ParamGenerator::iterator end5_; typename ParamGenerator::iterator current5_; const typename ParamGenerator::iterator begin6_; const typename ParamGenerator::iterator end6_; typename ParamGenerator::iterator current6_; const typename ParamGenerator::iterator begin7_; const typename ParamGenerator::iterator end7_; typename ParamGenerator::iterator current7_; const typename ParamGenerator::iterator begin8_; const typename ParamGenerator::iterator end8_; typename ParamGenerator::iterator current8_; const typename ParamGenerator::iterator begin9_; const typename ParamGenerator::iterator end9_; typename ParamGenerator::iterator current9_; const typename ParamGenerator::iterator begin10_; const typename ParamGenerator::iterator end10_; typename ParamGenerator::iterator current10_; ParamType current_value_; }; // class CartesianProductGenerator10::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator10& other); const ParamGenerator g1_; const ParamGenerator g2_; const ParamGenerator g3_; const ParamGenerator g4_; const ParamGenerator g5_; const ParamGenerator g6_; const ParamGenerator g7_; const ParamGenerator g8_; const ParamGenerator g9_; const ParamGenerator g10_; }; // class CartesianProductGenerator10 // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Helper classes providing Combine() with polymorphic features. They allow // casting CartesianProductGeneratorN to ParamGenerator if T is // convertible to U. // template class CartesianProductHolder2 { public: CartesianProductHolder2(const Generator1& g1, const Generator2& g2) : g1_(g1), g2_(g2) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator2( static_cast >(g1_), static_cast >(g2_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder2& other); const Generator1 g1_; const Generator2 g2_; }; // class CartesianProductHolder2 template class CartesianProductHolder3 { public: CartesianProductHolder3(const Generator1& g1, const Generator2& g2, const Generator3& g3) : g1_(g1), g2_(g2), g3_(g3) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator3( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder3& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; }; // class CartesianProductHolder3 template class CartesianProductHolder4 { public: CartesianProductHolder4(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4) : g1_(g1), g2_(g2), g3_(g3), g4_(g4) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator4( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder4& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; }; // class CartesianProductHolder4 template class CartesianProductHolder5 { public: CartesianProductHolder5(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator5( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder5& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; }; // class CartesianProductHolder5 template class CartesianProductHolder6 { public: CartesianProductHolder6(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator6( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder6& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; }; // class CartesianProductHolder6 template class CartesianProductHolder7 { public: CartesianProductHolder7(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator7( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder7& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; }; // class CartesianProductHolder7 template class CartesianProductHolder8 { public: CartesianProductHolder8(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator8( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder8& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; }; // class CartesianProductHolder8 template class CartesianProductHolder9 { public: CartesianProductHolder9(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator9( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_), static_cast >(g9_))); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder9& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; const Generator9 g9_; }; // class CartesianProductHolder9 template class CartesianProductHolder10 { public: CartesianProductHolder10(const Generator1& g1, const Generator2& g2, const Generator3& g3, const Generator4& g4, const Generator5& g5, const Generator6& g6, const Generator7& g7, const Generator8& g8, const Generator9& g9, const Generator10& g10) : g1_(g1), g2_(g2), g3_(g3), g4_(g4), g5_(g5), g6_(g6), g7_(g7), g8_(g8), g9_(g9), g10_(g10) {} template operator ParamGenerator< ::std::tr1::tuple >() const { return ParamGenerator< ::std::tr1::tuple >( new CartesianProductGenerator10( static_cast >(g1_), static_cast >(g2_), static_cast >(g3_), static_cast >(g4_), static_cast >(g5_), static_cast >(g6_), static_cast >(g7_), static_cast >(g8_), static_cast >(g9_), static_cast >(g10_))); } private: // No implementation - assignment is unsupported. 
void operator=(const CartesianProductHolder10& other); const Generator1 g1_; const Generator2 g2_; const Generator3 g3_; const Generator4 g4_; const Generator5 g5_; const Generator6 g6_; const Generator7 g7_; const Generator8 g8_; const Generator9 g9_; const Generator10 g10_; }; // class CartesianProductHolder10 # endif // GTEST_HAS_COMBINE } // namespace internal } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-param-util-generated.h.pump000066400000000000000000000223101346026536000311420ustar00rootroot00000000000000$$ -*- mode: c++; -*- $var n = 50 $$ Maximum length of Values arguments we want to support. $var maxtuple = 10 $$ Maximum number of Combine arguments we want to support. // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // Type and function utilities for implementing parameterized tests. // This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently Google Test supports at most $n arguments in Values, // and at most $maxtuple arguments in Combine. Please contact // googletestframework@googlegroups.com if you need more. // Please note that the number of arguments to Combine is limited // by the maximum arity of the implementation of tr1::tuple which is // currently set at $maxtuple. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #include "gtest/internal/gtest-param-util.h" #include "gtest/internal/gtest-port.h" #if GTEST_HAS_PARAM_TEST namespace testing { // Forward declarations of ValuesIn(), which is implemented in // include/gtest/gtest-param-test.h. 
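// NOTE (editorial addition, not part of the original Google Test sources):
// the generators and ValueArray/CartesianProduct helpers in this header back
// the user-facing Values(), ValuesIn() and Combine() helpers declared in
// gtest-param-test.h. As an illustrative sketch only -- the test case names
// and parameter values below are hypothetical, not taken from gtest -- a
// typical value-parameterized test built on top of this machinery looks like:
//
//   #include "gtest/gtest.h"
//
//   // A value-parameterized test over single ints.
//   class MyParamTest : public ::testing::TestWithParam<int> {};
//
//   TEST_P(MyParamTest, IsNonNegative) {
//     EXPECT_GE(GetParam(), 0);  // GetParam() yields the current parameter.
//   }
//
//   // Values() builds a ValueArrayN, which converts itself to a
//   // ParamGenerator<int> by calling ValuesIn() on a temporary array.
//   INSTANTIATE_TEST_CASE_P(Small, MyParamTest, ::testing::Values(0, 1, 2));
//
//   // Combine() (available when GTEST_HAS_COMBINE is set) builds the
//   // Cartesian product of several generators and yields tuple parameters.
//   class MyComboTest
//       : public ::testing::TestWithParam< ::std::tr1::tuple<int, bool> > {};
//
//   TEST_P(MyComboTest, AcceptsEveryCombination) {
//     const int i = ::std::tr1::get<0>(GetParam());
//     const bool flag = ::std::tr1::get<1>(GetParam());
//     EXPECT_TRUE(flag || i >= 0);
//   }
//
//   INSTANTIATE_TEST_CASE_P(Grid, MyComboTest,
//                           ::testing::Combine(::testing::Range(0, 3),
//                                              ::testing::Bool()));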
template internal::ParamGenerator< typename ::testing::internal::IteratorTraits::value_type> ValuesIn(ForwardIterator begin, ForwardIterator end); template internal::ParamGenerator ValuesIn(const T (&array)[N]); template internal::ParamGenerator ValuesIn( const Container& container); namespace internal { // Used in the Values() function to provide polymorphic capabilities. template class ValueArray1 { public: explicit ValueArray1(T1 v1) : v1_(v1) {} template operator ParamGenerator() const { return ValuesIn(&v1_, &v1_ + 1); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray1& other); const T1 v1_; }; $range i 2..n $for i [[ $range j 1..i template <$for j, [[typename T$j]]> class ValueArray$i { public: ValueArray$i($for j, [[T$j v$j]]) : $for j, [[v$(j)_(v$j)]] {} template operator ParamGenerator() const { const T array[] = {$for j, [[static_cast(v$(j)_)]]}; return ValuesIn(array); } private: // No implementation - assignment is unsupported. void operator=(const ValueArray$i& other); $for j [[ const T$j v$(j)_; ]] }; ]] # if GTEST_HAS_COMBINE // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Generates values from the Cartesian product of values produced // by the argument generators. // $range i 2..maxtuple $for i [[ $range j 1..i $range k 2..i template <$for j, [[typename T$j]]> class CartesianProductGenerator$i : public ParamGeneratorInterface< ::std::tr1::tuple<$for j, [[T$j]]> > { public: typedef ::std::tr1::tuple<$for j, [[T$j]]> ParamType; CartesianProductGenerator$i($for j, [[const ParamGenerator& g$j]]) : $for j, [[g$(j)_(g$j)]] {} virtual ~CartesianProductGenerator$i() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, $for j, [[g$(j)_, g$(j)_.begin()]]); } virtual ParamIteratorInterface* End() const { return new Iterator(this, $for j, [[g$(j)_, g$(j)_.end()]]); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, $for j, [[ const ParamGenerator& g$j, const typename ParamGenerator::iterator& current$(j)]]) : base_(base), $for j, [[ begin$(j)_(g$j.begin()), end$(j)_(g$j.end()), current$(j)_(current$j) ]] { ComputeCurrentValue(); } virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } // Advance should not be called on beyond-of-range iterators // so no component iterators must be beyond end of range, either. virtual void Advance() { assert(!AtEnd()); ++current$(i)_; $for k [[ if (current$(i+2-k)_ == end$(i+2-k)_) { current$(i+2-k)_ = begin$(i+2-k)_; ++current$(i+2-k-1)_; } ]] ComputeCurrentValue(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const ParamType* Current() const { return ¤t_value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; const Iterator* typed_other = CheckedDowncastToActualType(&other); // We must report iterators equal if they both point beyond their // respective ranges. That can happen in a variety of fashions, // so we have to consult AtEnd(). 
return (AtEnd() && typed_other->AtEnd()) || ($for j && [[ current$(j)_ == typed_other->current$(j)_ ]]); } private: Iterator(const Iterator& other) : base_(other.base_), $for j, [[ begin$(j)_(other.begin$(j)_), end$(j)_(other.end$(j)_), current$(j)_(other.current$(j)_) ]] { ComputeCurrentValue(); } void ComputeCurrentValue() { if (!AtEnd()) current_value_ = ParamType($for j, [[*current$(j)_]]); } bool AtEnd() const { // We must report iterator past the end of the range when either of the // component iterators has reached the end of its range. return $for j || [[ current$(j)_ == end$(j)_ ]]; } // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; // begin[i]_ and end[i]_ define the i-th range that Iterator traverses. // current[i]_ is the actual traversing iterator. $for j [[ const typename ParamGenerator::iterator begin$(j)_; const typename ParamGenerator::iterator end$(j)_; typename ParamGenerator::iterator current$(j)_; ]] ParamType current_value_; }; // class CartesianProductGenerator$i::Iterator // No implementation - assignment is unsupported. void operator=(const CartesianProductGenerator$i& other); $for j [[ const ParamGenerator g$(j)_; ]] }; // class CartesianProductGenerator$i ]] // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Helper classes providing Combine() with polymorphic features. They allow // casting CartesianProductGeneratorN to ParamGenerator if T is // convertible to U. // $range i 2..maxtuple $for i [[ $range j 1..i template <$for j, [[class Generator$j]]> class CartesianProductHolder$i { public: CartesianProductHolder$i($for j, [[const Generator$j& g$j]]) : $for j, [[g$(j)_(g$j)]] {} template <$for j, [[typename T$j]]> operator ParamGenerator< ::std::tr1::tuple<$for j, [[T$j]]> >() const { return ParamGenerator< ::std::tr1::tuple<$for j, [[T$j]]> >( new CartesianProductGenerator$i<$for j, [[T$j]]>( $for j,[[ static_cast >(g$(j)_) ]])); } private: // No implementation - assignment is unsupported. void operator=(const CartesianProductHolder$i& other); $for j [[ const Generator$j g$(j)_; ]] }; // class CartesianProductHolder$i ]] # endif // GTEST_HAS_COMBINE } // namespace internal } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_GENERATED_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-param-util.h000066400000000000000000000571771346026536000262510ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // Type and function utilities for implementing parameterized tests. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ #include #include #include // scripts/fuse_gtest.py depends on gtest's own header being #included // *unconditionally*. Therefore these #includes cannot be moved // inside #if GTEST_HAS_PARAM_TEST. #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-linked_ptr.h" #include "gtest/internal/gtest-port.h" #include "gtest/gtest-printers.h" #if GTEST_HAS_PARAM_TEST namespace testing { namespace internal { // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Outputs a message explaining invalid registration of different // fixture class for the same test case. This may happen when // TEST_P macro is used to define two tests with the same name // but in different namespaces. GTEST_API_ void ReportInvalidTestCaseType(const char* test_case_name, const char* file, int line); template class ParamGeneratorInterface; template class ParamGenerator; // Interface for iterating over elements provided by an implementation // of ParamGeneratorInterface. template class ParamIteratorInterface { public: virtual ~ParamIteratorInterface() {} // A pointer to the base generator instance. // Used only for the purposes of iterator comparison // to make sure that two iterators belong to the same generator. virtual const ParamGeneratorInterface* BaseGenerator() const = 0; // Advances iterator to point to the next element // provided by the generator. The caller is responsible // for not calling Advance() on an iterator equal to // BaseGenerator()->End(). virtual void Advance() = 0; // Clones the iterator object. Used for implementing copy semantics // of ParamIterator. virtual ParamIteratorInterface* Clone() const = 0; // Dereferences the current iterator and provides (read-only) access // to the pointed value. It is the caller's responsibility not to call // Current() on an iterator equal to BaseGenerator()->End(). // Used for implementing ParamGenerator::operator*(). virtual const T* Current() const = 0; // Determines whether the given iterator and other point to the same // element in the sequence generated by the generator. // Used for implementing ParamGenerator::operator==(). virtual bool Equals(const ParamIteratorInterface& other) const = 0; }; // Class iterating over elements provided by an implementation of // ParamGeneratorInterface. It wraps ParamIteratorInterface // and implements the const forward iterator concept. template class ParamIterator { public: typedef T value_type; typedef const T& reference; typedef ptrdiff_t difference_type; // ParamIterator assumes ownership of the impl_ pointer. 
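// Editor's note: a small sketch (not part of Google Test) of the const
// forward-iterator contract documented above: a ParamGenerator<T> can be
// walked with begin()/end() like an STL range.  The function name is
// hypothetical, Values() comes from gtest-param-test.h, and the #if 0 guard
// keeps this illustration out of the surrounding class body and the build.
#if 0
inline int SumOfGeneratedInts() {
  const ::testing::internal::ParamGenerator<int> gen =
      ::testing::Values(1, 2, 3);
  int sum = 0;
  for (::testing::internal::ParamGenerator<int>::iterator it = gen.begin();
       it != gen.end(); ++it) {
    sum += *it;  // operator*() returns the element cached by the iterator.
  }
  return sum;  // 1 + 2 + 3 == 6
}
#endif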
ParamIterator(const ParamIterator& other) : impl_(other.impl_->Clone()) {} ParamIterator& operator=(const ParamIterator& other) { if (this != &other) impl_.reset(other.impl_->Clone()); return *this; } const T& operator*() const { return *impl_->Current(); } const T* operator->() const { return impl_->Current(); } // Prefix version of operator++. ParamIterator& operator++() { impl_->Advance(); return *this; } // Postfix version of operator++. ParamIterator operator++(int /*unused*/) { ParamIteratorInterface* clone = impl_->Clone(); impl_->Advance(); return ParamIterator(clone); } bool operator==(const ParamIterator& other) const { return impl_.get() == other.impl_.get() || impl_->Equals(*other.impl_); } bool operator!=(const ParamIterator& other) const { return !(*this == other); } private: friend class ParamGenerator; explicit ParamIterator(ParamIteratorInterface* impl) : impl_(impl) {} scoped_ptr > impl_; }; // ParamGeneratorInterface is the binary interface to access generators // defined in other translation units. template class ParamGeneratorInterface { public: typedef T ParamType; virtual ~ParamGeneratorInterface() {} // Generator interface definition virtual ParamIteratorInterface* Begin() const = 0; virtual ParamIteratorInterface* End() const = 0; }; // Wraps ParamGeneratorInterface and provides general generator syntax // compatible with the STL Container concept. // This class implements copy initialization semantics and the contained // ParamGeneratorInterface instance is shared among all copies // of the original object. This is possible because that instance is immutable. template class ParamGenerator { public: typedef ParamIterator iterator; explicit ParamGenerator(ParamGeneratorInterface* impl) : impl_(impl) {} ParamGenerator(const ParamGenerator& other) : impl_(other.impl_) {} ParamGenerator& operator=(const ParamGenerator& other) { impl_ = other.impl_; return *this; } iterator begin() const { return iterator(impl_->Begin()); } iterator end() const { return iterator(impl_->End()); } private: linked_ptr > impl_; }; // Generates values from a range of two comparable values. Can be used to // generate sequences of user-defined types that implement operator+() and // operator<(). // This class is used in the Range() function. template class RangeGenerator : public ParamGeneratorInterface { public: RangeGenerator(T begin, T end, IncrementT step) : begin_(begin), end_(end), step_(step), end_index_(CalculateEndIndex(begin, end, step)) {} virtual ~RangeGenerator() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, begin_, 0, step_); } virtual ParamIteratorInterface* End() const { return new Iterator(this, end_, end_index_, step_); } private: class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, T value, int index, IncrementT step) : base_(base), value_(value), index_(index), step_(step) {} virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } virtual void Advance() { value_ = value_ + step_; index_++; } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } virtual const T* Current() const { return &value_; } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." 
<< std::endl; const int other_index = CheckedDowncastToActualType(&other)->index_; return index_ == other_index; } private: Iterator(const Iterator& other) : ParamIteratorInterface(), base_(other.base_), value_(other.value_), index_(other.index_), step_(other.step_) {} // No implementation - assignment is unsupported. void operator=(const Iterator& other); const ParamGeneratorInterface* const base_; T value_; int index_; const IncrementT step_; }; // class RangeGenerator::Iterator static int CalculateEndIndex(const T& begin, const T& end, const IncrementT& step) { int end_index = 0; for (T i = begin; i < end; i = i + step) end_index++; return end_index; } // No implementation - assignment is unsupported. void operator=(const RangeGenerator& other); const T begin_; const T end_; const IncrementT step_; // The index for the end() iterator. All the elements in the generated // sequence are indexed (0-based) to aid iterator comparison. const int end_index_; }; // class RangeGenerator // Generates values from a pair of STL-style iterators. Used in the // ValuesIn() function. The elements are copied from the source range // since the source can be located on the stack, and the generator // is likely to persist beyond that stack frame. template class ValuesInIteratorRangeGenerator : public ParamGeneratorInterface { public: template ValuesInIteratorRangeGenerator(ForwardIterator begin, ForwardIterator end) : container_(begin, end) {} virtual ~ValuesInIteratorRangeGenerator() {} virtual ParamIteratorInterface* Begin() const { return new Iterator(this, container_.begin()); } virtual ParamIteratorInterface* End() const { return new Iterator(this, container_.end()); } private: typedef typename ::std::vector ContainerType; class Iterator : public ParamIteratorInterface { public: Iterator(const ParamGeneratorInterface* base, typename ContainerType::const_iterator iterator) : base_(base), iterator_(iterator) {} virtual ~Iterator() {} virtual const ParamGeneratorInterface* BaseGenerator() const { return base_; } virtual void Advance() { ++iterator_; value_.reset(); } virtual ParamIteratorInterface* Clone() const { return new Iterator(*this); } // We need to use cached value referenced by iterator_ because *iterator_ // can return a temporary object (and of type other then T), so just // having "return &*iterator_;" doesn't work. // value_ is updated here and not in Advance() because Advance() // can advance iterator_ beyond the end of the range, and we cannot // detect that fact. The client code, on the other hand, is // responsible for not calling Current() on an out-of-range iterator. virtual const T* Current() const { if (value_.get() == NULL) value_.reset(new T(*iterator_)); return value_.get(); } virtual bool Equals(const ParamIteratorInterface& other) const { // Having the same base generator guarantees that the other // iterator is of the same type and we can downcast. GTEST_CHECK_(BaseGenerator() == other.BaseGenerator()) << "The program attempted to compare iterators " << "from different generators." << std::endl; return iterator_ == CheckedDowncastToActualType(&other)->iterator_; } private: Iterator(const Iterator& other) // The explicit constructor call suppresses a false warning // emitted by gcc when supplied with the -Wextra option. : ParamIteratorInterface(), base_(other.base_), iterator_(other.iterator_) {} const ParamGeneratorInterface* const base_; typename ContainerType::const_iterator iterator_; // A cached value of *iterator_. 
We keep it here to allow access by // pointer in the wrapping iterator's operator->(). // value_ needs to be mutable to be accessed in Current(). // Use of scoped_ptr helps manage cached value's lifetime, // which is bound by the lifespan of the iterator itself. mutable scoped_ptr value_; }; // class ValuesInIteratorRangeGenerator::Iterator // No implementation - assignment is unsupported. void operator=(const ValuesInIteratorRangeGenerator& other); const ContainerType container_; }; // class ValuesInIteratorRangeGenerator // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Stores a parameter value and later creates tests parameterized with that // value. template class ParameterizedTestFactory : public TestFactoryBase { public: typedef typename TestClass::ParamType ParamType; explicit ParameterizedTestFactory(ParamType parameter) : parameter_(parameter) {} virtual Test* CreateTest() { TestClass::SetParam(¶meter_); return new TestClass(); } private: const ParamType parameter_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestFactory); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // TestMetaFactoryBase is a base class for meta-factories that create // test factories for passing into MakeAndRegisterTestInfo function. template class TestMetaFactoryBase { public: virtual ~TestMetaFactoryBase() {} virtual TestFactoryBase* CreateTestFactory(ParamType parameter) = 0; }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // TestMetaFactory creates test factories for passing into // MakeAndRegisterTestInfo function. Since MakeAndRegisterTestInfo receives // ownership of test factory pointer, same factory object cannot be passed // into that method twice. But ParameterizedTestCaseInfo is going to call // it for each Test/Parameter value combination. Thus it needs meta factory // creator class. template class TestMetaFactory : public TestMetaFactoryBase { public: typedef typename TestCase::ParamType ParamType; TestMetaFactory() {} virtual TestFactoryBase* CreateTestFactory(ParamType parameter) { return new ParameterizedTestFactory(parameter); } private: GTEST_DISALLOW_COPY_AND_ASSIGN_(TestMetaFactory); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // ParameterizedTestCaseInfoBase is a generic interface // to ParameterizedTestCaseInfo classes. ParameterizedTestCaseInfoBase // accumulates test information provided by TEST_P macro invocations // and generators provided by INSTANTIATE_TEST_CASE_P macro invocations // and uses that information to register all resulting test instances // in RegisterTests method. The ParameterizeTestCaseRegistry class holds // a collection of pointers to the ParameterizedTestCaseInfo objects // and calls RegisterTests() on each of them when asked. class ParameterizedTestCaseInfoBase { public: virtual ~ParameterizedTestCaseInfoBase() {} // Base part of test case name for display purposes. virtual const string& GetTestCaseName() const = 0; // Test case id to verify identity. virtual TypeId GetTestCaseTypeId() const = 0; // UnitTest class invokes this method to register tests in this // test case right before running them in RUN_ALL_TESTS macro. // This method should not be called more then once on any single // instance of a ParameterizedTestCaseInfoBase derived class. virtual void RegisterTests() = 0; protected: ParameterizedTestCaseInfoBase() {} private: GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfoBase); }; // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. 
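// Editor's note: an illustration (not part of Google Test) of the naming
// scheme that ParameterizedTestCaseInfo::RegisterTests() below produces,
// e.g. "SequenceA/FooTest.DoBar/1" from instantiation name "SequenceA",
// test case base name "FooTest", test base name "DoBar" and parameter
// index 1.  The helper name is hypothetical; #if 0 keeps it out of the build.
#if 0
#include <sstream>
#include <string>

inline std::string ComposeParamTestName(const std::string& instantiation_name,
                                        const std::string& test_case_base_name,
                                        const std::string& test_base_name,
                                        int param_index) {
  std::string test_case_name;
  if (!instantiation_name.empty())
    test_case_name = instantiation_name + "/";
  test_case_name += test_case_base_name;               // "SequenceA/FooTest"
  std::ostringstream test_name;
  test_name << test_base_name << "/" << param_index;   // "DoBar/1"
  return test_case_name + "." + test_name.str();
}
#endif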
// // ParameterizedTestCaseInfo accumulates tests obtained from TEST_P // macro invocations for a particular test case and generators // obtained from INSTANTIATE_TEST_CASE_P macro invocations for that // test case. It registers tests with all values generated by all // generators when asked. template class ParameterizedTestCaseInfo : public ParameterizedTestCaseInfoBase { public: // ParamType and GeneratorCreationFunc are private types but are required // for declarations of public methods AddTestPattern() and // AddTestCaseInstantiation(). typedef typename TestCase::ParamType ParamType; // A function that returns an instance of appropriate generator type. typedef ParamGenerator(GeneratorCreationFunc)(); explicit ParameterizedTestCaseInfo(const char* name) : test_case_name_(name) {} // Test case base name for display purposes. virtual const string& GetTestCaseName() const { return test_case_name_; } // Test case id to verify identity. virtual TypeId GetTestCaseTypeId() const { return GetTypeId(); } // TEST_P macro uses AddTestPattern() to record information // about a single test in a LocalTestInfo structure. // test_case_name is the base name of the test case (without invocation // prefix). test_base_name is the name of an individual test without // parameter index. For the test SequenceA/FooTest.DoBar/1 FooTest is // test case base name and DoBar is test base name. void AddTestPattern(const char* test_case_name, const char* test_base_name, TestMetaFactoryBase* meta_factory) { tests_.push_back(linked_ptr(new TestInfo(test_case_name, test_base_name, meta_factory))); } // INSTANTIATE_TEST_CASE_P macro uses AddGenerator() to record information // about a generator. int AddTestCaseInstantiation(const string& instantiation_name, GeneratorCreationFunc* func, const char* /* file */, int /* line */) { instantiations_.push_back(::std::make_pair(instantiation_name, func)); return 0; // Return value used only to run this method in namespace scope. } // UnitTest class invokes this method to register tests in this test case // test cases right before running tests in RUN_ALL_TESTS macro. // This method should not be called more then once on any single // instance of a ParameterizedTestCaseInfoBase derived class. // UnitTest has a guard to prevent from calling this method more then once. virtual void RegisterTests() { for (typename TestInfoContainer::iterator test_it = tests_.begin(); test_it != tests_.end(); ++test_it) { linked_ptr test_info = *test_it; for (typename InstantiationContainer::iterator gen_it = instantiations_.begin(); gen_it != instantiations_.end(); ++gen_it) { const string& instantiation_name = gen_it->first; ParamGenerator generator((*gen_it->second)()); string test_case_name; if ( !instantiation_name.empty() ) test_case_name = instantiation_name + "/"; test_case_name += test_info->test_case_base_name; int i = 0; for (typename ParamGenerator::iterator param_it = generator.begin(); param_it != generator.end(); ++param_it, ++i) { Message test_name_stream; test_name_stream << test_info->test_base_name << "/" << i; MakeAndRegisterTestInfo( test_case_name.c_str(), test_name_stream.GetString().c_str(), NULL, // No type parameter. PrintToString(*param_it).c_str(), GetTestCaseTypeId(), TestCase::SetUpTestCase, TestCase::TearDownTestCase, test_info->test_meta_factory->CreateTestFactory(*param_it)); } // for param_it } // for gen_it } // for test_it } // RegisterTests private: // LocalTestInfo structure keeps information about a single test registered // with TEST_P macro. 
struct TestInfo { TestInfo(const char* a_test_case_base_name, const char* a_test_base_name, TestMetaFactoryBase* a_test_meta_factory) : test_case_base_name(a_test_case_base_name), test_base_name(a_test_base_name), test_meta_factory(a_test_meta_factory) {} const string test_case_base_name; const string test_base_name; const scoped_ptr > test_meta_factory; }; typedef ::std::vector > TestInfoContainer; // Keeps pairs of // received from INSTANTIATE_TEST_CASE_P macros. typedef ::std::vector > InstantiationContainer; const string test_case_name_; TestInfoContainer tests_; InstantiationContainer instantiations_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseInfo); }; // class ParameterizedTestCaseInfo // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // ParameterizedTestCaseRegistry contains a map of ParameterizedTestCaseInfoBase // classes accessed by test case names. TEST_P and INSTANTIATE_TEST_CASE_P // macros use it to locate their corresponding ParameterizedTestCaseInfo // descriptors. class ParameterizedTestCaseRegistry { public: ParameterizedTestCaseRegistry() {} ~ParameterizedTestCaseRegistry() { for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { delete *it; } } // Looks up or creates and returns a structure containing information about // tests and instantiations of a particular test case. template ParameterizedTestCaseInfo* GetTestCasePatternHolder( const char* test_case_name, const char* file, int line) { ParameterizedTestCaseInfo* typed_test_info = NULL; for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { if ((*it)->GetTestCaseName() == test_case_name) { if ((*it)->GetTestCaseTypeId() != GetTypeId()) { // Complain about incorrect usage of Google Test facilities // and terminate the program since we cannot guaranty correct // test case setup and tear-down in this case. ReportInvalidTestCaseType(test_case_name, file, line); posix::Abort(); } else { // At this point we are sure that the object we found is of the same // type we are looking for, so we downcast it to that type // without further checks. typed_test_info = CheckedDowncastToActualType< ParameterizedTestCaseInfo >(*it); } break; } } if (typed_test_info == NULL) { typed_test_info = new ParameterizedTestCaseInfo(test_case_name); test_case_infos_.push_back(typed_test_info); } return typed_test_info; } void RegisterTests() { for (TestCaseInfoContainer::iterator it = test_case_infos_.begin(); it != test_case_infos_.end(); ++it) { (*it)->RegisterTests(); } } private: typedef ::std::vector TestCaseInfoContainer; TestCaseInfoContainer test_case_infos_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ParameterizedTestCaseRegistry); }; } // namespace internal } // namespace testing #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PARAM_UTIL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-port.h000066400000000000000000002062741346026536000251540ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan) // // Low-level types and utilities for porting Google Test to various // platforms. They are subject to change without notice. DO NOT USE // THEM IN USER CODE. // // This file is fundamental to Google Test. All other Google Test source // files are expected to #include this. Therefore, it cannot #include // any other Google Test header. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ // The user can define the following macros in the build script to // control Google Test's behavior. If the user doesn't define a macro // in this list, Google Test will define it. // // GTEST_HAS_CLONE - Define it to 1/0 to indicate that clone(2) // is/isn't available. // GTEST_HAS_EXCEPTIONS - Define it to 1/0 to indicate that exceptions // are enabled. // GTEST_HAS_GLOBAL_STRING - Define it to 1/0 to indicate that ::string // is/isn't available (some systems define // ::string, which is different to std::string). // GTEST_HAS_GLOBAL_WSTRING - Define it to 1/0 to indicate that ::string // is/isn't available (some systems define // ::wstring, which is different to std::wstring). // GTEST_HAS_POSIX_RE - Define it to 1/0 to indicate that POSIX regular // expressions are/aren't available. // GTEST_HAS_PTHREAD - Define it to 1/0 to indicate that // is/isn't available. // GTEST_HAS_RTTI - Define it to 1/0 to indicate that RTTI is/isn't // enabled. // GTEST_HAS_STD_WSTRING - Define it to 1/0 to indicate that // std::wstring does/doesn't work (Google Test can // be used where std::wstring is unavailable). // GTEST_HAS_TR1_TUPLE - Define it to 1/0 to indicate tr1::tuple // is/isn't available. // GTEST_HAS_SEH - Define it to 1/0 to indicate whether the // compiler supports Microsoft's "Structured // Exception Handling". // GTEST_HAS_STREAM_REDIRECTION // - Define it to 1/0 to indicate whether the // platform supports I/O stream redirection using // dup() and dup2(). // GTEST_USE_OWN_TR1_TUPLE - Define it to 1/0 to indicate whether Google // Test's own tr1 tuple implementation should be // used. Unused when the user sets // GTEST_HAS_TR1_TUPLE to 0. // GTEST_LANG_CXX11 - Define it to 1/0 to indicate that Google Test // is building in C++11/C++98 mode. 
// GTEST_LINKED_AS_SHARED_LIBRARY // - Define to 1 when compiling tests that use // Google Test as a shared library (known as // DLL on Windows). // GTEST_CREATE_SHARED_LIBRARY // - Define to 1 when compiling Google Test itself // as a shared library. // This header defines the following utilities: // // Macros indicating the current platform (defined to 1 if compiled on // the given platform; otherwise undefined): // GTEST_OS_AIX - IBM AIX // GTEST_OS_CYGWIN - Cygwin // GTEST_OS_HPUX - HP-UX // GTEST_OS_LINUX - Linux // GTEST_OS_LINUX_ANDROID - Google Android // GTEST_OS_MAC - Mac OS X // GTEST_OS_IOS - iOS // GTEST_OS_IOS_SIMULATOR - iOS simulator // GTEST_OS_NACL - Google Native Client (NaCl) // GTEST_OS_OPENBSD - OpenBSD // GTEST_OS_QNX - QNX // GTEST_OS_SOLARIS - Sun Solaris // GTEST_OS_SYMBIAN - Symbian // GTEST_OS_WINDOWS - Windows (Desktop, MinGW, or Mobile) // GTEST_OS_WINDOWS_DESKTOP - Windows Desktop // GTEST_OS_WINDOWS_MINGW - MinGW // GTEST_OS_WINDOWS_MOBILE - Windows Mobile // GTEST_OS_ZOS - z/OS // // Among the platforms, Cygwin, Linux, Max OS X, and Windows have the // most stable support. Since core members of the Google Test project // don't have access to other platforms, support for them may be less // stable. If you notice any problems on your platform, please notify // googletestframework@googlegroups.com (patches for fixing them are // even more welcome!). // // Note that it is possible that none of the GTEST_OS_* macros are defined. // // Macros indicating available Google Test features (defined to 1 if // the corresponding feature is supported; otherwise undefined): // GTEST_HAS_COMBINE - the Combine() function (for value-parameterized // tests) // GTEST_HAS_DEATH_TEST - death tests // GTEST_HAS_PARAM_TEST - value-parameterized tests // GTEST_HAS_TYPED_TEST - typed tests // GTEST_HAS_TYPED_TEST_P - type-parameterized tests // GTEST_USES_POSIX_RE - enhanced POSIX regex is used. Do not confuse with // GTEST_HAS_POSIX_RE (see above) which users can // define themselves. // GTEST_USES_SIMPLE_RE - our own simple regex is used; // the above two are mutually exclusive. // GTEST_CAN_COMPARE_NULL - accepts untyped NULL in EXPECT_EQ(). // // Macros for basic C++ coding: // GTEST_AMBIGUOUS_ELSE_BLOCKER_ - for disabling a gcc warning. // GTEST_ATTRIBUTE_UNUSED_ - declares that a class' instances or a // variable don't have to be used. // GTEST_DISALLOW_ASSIGN_ - disables operator=. // GTEST_DISALLOW_COPY_AND_ASSIGN_ - disables copy ctor and operator=. // GTEST_MUST_USE_RESULT_ - declares that a function's result must be used. // // Synchronization: // Mutex, MutexLock, ThreadLocal, GetThreadCount() // - synchronization primitives. // GTEST_IS_THREADSAFE - defined to 1 to indicate that the above // synchronization primitives have real implementations // and Google Test is thread-safe; or 0 otherwise. // // Template meta programming: // is_pointer - as in TR1; needed on Symbian and IBM XL C/C++ only. // IteratorTraits - partial implementation of std::iterator_traits, which // is not available in libCstd when compiled with Sun C++. // // Smart pointers: // scoped_ptr - as in TR2. // // Regular expressions: // RE - a simple regular expression class using the POSIX // Extended Regular Expression syntax on UNIX-like // platforms, or a reduced regular exception syntax on // other platforms, including Windows. // // Logging: // GTEST_LOG_() - logs messages at the specified severity level. // LogToStderr() - directs all log messages to stderr. 
// FlushInfoLog() - flushes informational log messages. // // Stdout and stderr capturing: // CaptureStdout() - starts capturing stdout. // GetCapturedStdout() - stops capturing stdout and returns the captured // string. // CaptureStderr() - starts capturing stderr. // GetCapturedStderr() - stops capturing stderr and returns the captured // string. // // Integer types: // TypeWithSize - maps an integer to a int type. // Int32, UInt32, Int64, UInt64, TimeInMillis // - integers of known sizes. // BiggestInt - the biggest signed integer type. // // Command-line utilities: // GTEST_FLAG() - references a flag. // GTEST_DECLARE_*() - declares a flag. // GTEST_DEFINE_*() - defines a flag. // GetInjectableArgvs() - returns the command line as a vector of strings. // // Environment variable utilities: // GetEnv() - gets the value of an environment variable. // BoolFromGTestEnv() - parses a bool environment variable. // Int32FromGTestEnv() - parses an Int32 environment variable. // StringFromGTestEnv() - parses a string environment variable. #include // for isspace, etc #include // for ptrdiff_t #include #include #include #ifndef _WIN32_WCE # include # include #endif // !_WIN32_WCE #if defined __APPLE__ # include # include #endif #include // NOLINT #include // NOLINT #include // NOLINT #define GTEST_DEV_EMAIL_ "googletestframework@@googlegroups.com" #define GTEST_FLAG_PREFIX_ "gtest_" #define GTEST_FLAG_PREFIX_DASH_ "gtest-" #define GTEST_FLAG_PREFIX_UPPER_ "GTEST_" #define GTEST_NAME_ "Google Test" #define GTEST_PROJECT_URL_ "http://code.google.com/p/googletest/" // Determines the version of gcc that is used to compile this. #ifdef __GNUC__ // 40302 means version 4.3.2. # define GTEST_GCC_VER_ \ (__GNUC__*10000 + __GNUC_MINOR__*100 + __GNUC_PATCHLEVEL__) #endif // __GNUC__ // Determines the platform on which Google Test is compiled. #ifdef __CYGWIN__ # define GTEST_OS_CYGWIN 1 #elif defined __SYMBIAN32__ # define GTEST_OS_SYMBIAN 1 #elif defined _WIN32 # define GTEST_OS_WINDOWS 1 # ifdef _WIN32_WCE # define GTEST_OS_WINDOWS_MOBILE 1 # elif defined(__MINGW__) || defined(__MINGW32__) # define GTEST_OS_WINDOWS_MINGW 1 # else # define GTEST_OS_WINDOWS_DESKTOP 1 # endif // _WIN32_WCE #elif defined __APPLE__ # define GTEST_OS_MAC 1 # if TARGET_OS_IPHONE # define GTEST_OS_IOS 1 # if TARGET_IPHONE_SIMULATOR # define GTEST_OS_IOS_SIMULATOR 1 # endif # endif #elif defined __linux__ # define GTEST_OS_LINUX 1 # if defined __ANDROID__ # define GTEST_OS_LINUX_ANDROID 1 # endif #elif defined __MVS__ # define GTEST_OS_ZOS 1 #elif defined(__sun) && defined(__SVR4) # define GTEST_OS_SOLARIS 1 #elif defined(_AIX) # define GTEST_OS_AIX 1 #elif defined(__hpux) # define GTEST_OS_HPUX 1 #elif defined __native_client__ # define GTEST_OS_NACL 1 #elif defined __OpenBSD__ # define GTEST_OS_OPENBSD 1 #elif defined __QNX__ # define GTEST_OS_QNX 1 #endif // __CYGWIN__ #ifndef GTEST_LANG_CXX11 // gcc and clang define __GXX_EXPERIMENTAL_CXX0X__ when // -std={c,gnu}++{0x,11} is passed. The C++11 standard specifies a // value for __cplusplus, and recent versions of clang, gcc, and // probably other compilers set that too in C++11 mode. # if __GXX_EXPERIMENTAL_CXX0X__ || __cplusplus >= 201103L // Compiling in at least C++11 mode. # define GTEST_LANG_CXX11 1 # else # define GTEST_LANG_CXX11 0 # endif #endif // Brings in definitions for functions used in the testing::internal::posix // namespace (read, write, close, chdir, isatty, stat). We do not currently // use them on Windows Mobile. 
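// Editor's note: a hedged sketch (not part of Google Test) of how the flag
// and environment-variable helpers documented above are typically combined.
// The flag name "my_feature" is hypothetical; GTEST_DECLARE_*/GTEST_DEFINE_*/
// GTEST_FLAG and BoolFromGTestEnv() are provided further down in the Google
// Test sources, so the sketch is guarded with #if 0.
#if 0
// In a header:
GTEST_DECLARE_bool_(my_feature);   // declares the flag GTEST_FLAG(my_feature)

// In a .cc file:
GTEST_DEFINE_bool_(
    my_feature,
    ::testing::internal::BoolFromGTestEnv("my_feature", false),
    "Enables a hypothetical feature; seeded from the GTEST_MY_FEATURE "
    "environment variable.");

// At a use site:
//   if (GTEST_FLAG(my_feature)) { ... }
#endif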
#if !GTEST_OS_WINDOWS // This assumes that non-Windows OSes provide unistd.h. For OSes where this // is not the case, we need to include headers that provide the functions // mentioned above. # include # include #elif !GTEST_OS_WINDOWS_MOBILE # include # include #endif #if GTEST_OS_LINUX_ANDROID // Used to define __ANDROID_API__ matching the target NDK API level. # include // NOLINT #endif // Defines this to true iff Google Test can use POSIX regular expressions. #ifndef GTEST_HAS_POSIX_RE # if GTEST_OS_LINUX_ANDROID // On Android, is only available starting with Gingerbread. # define GTEST_HAS_POSIX_RE (__ANDROID_API__ >= 9) # else # define GTEST_HAS_POSIX_RE (!GTEST_OS_WINDOWS) # endif #endif #if GTEST_HAS_POSIX_RE // On some platforms, needs someone to define size_t, and // won't compile otherwise. We can #include it here as we already // included , which is guaranteed to define size_t through // . # include // NOLINT # define GTEST_USES_POSIX_RE 1 #elif GTEST_OS_WINDOWS // is not available on Windows. Use our own simple regex // implementation instead. # define GTEST_USES_SIMPLE_RE 1 #else // may not be available on this platform. Use our own // simple regex implementation instead. # define GTEST_USES_SIMPLE_RE 1 #endif // GTEST_HAS_POSIX_RE #ifndef GTEST_HAS_EXCEPTIONS // The user didn't tell us whether exceptions are enabled, so we need // to figure it out. # if defined(_MSC_VER) || defined(__BORLANDC__) // MSVC's and C++Builder's implementations of the STL use the _HAS_EXCEPTIONS // macro to enable exceptions, so we'll do the same. // Assumes that exceptions are enabled by default. # ifndef _HAS_EXCEPTIONS # define _HAS_EXCEPTIONS 1 # endif // _HAS_EXCEPTIONS # define GTEST_HAS_EXCEPTIONS _HAS_EXCEPTIONS # elif defined(__GNUC__) && __EXCEPTIONS // gcc defines __EXCEPTIONS to 1 iff exceptions are enabled. # define GTEST_HAS_EXCEPTIONS 1 # elif defined(__SUNPRO_CC) // Sun Pro CC supports exceptions. However, there is no compile-time way of // detecting whether they are enabled or not. Therefore, we assume that // they are enabled unless the user tells us otherwise. # define GTEST_HAS_EXCEPTIONS 1 # elif defined(__IBMCPP__) && __EXCEPTIONS // xlC defines __EXCEPTIONS to 1 iff exceptions are enabled. # define GTEST_HAS_EXCEPTIONS 1 # elif defined(__HP_aCC) // Exception handling is in effect by default in HP aCC compiler. It has to // be turned of by +noeh compiler option if desired. # define GTEST_HAS_EXCEPTIONS 1 # else // For other compilers, we assume exceptions are disabled to be // conservative. # define GTEST_HAS_EXCEPTIONS 0 # endif // defined(_MSC_VER) || defined(__BORLANDC__) #endif // GTEST_HAS_EXCEPTIONS #if !defined(GTEST_HAS_STD_STRING) // Even though we don't use this macro any longer, we keep it in case // some clients still depend on it. # define GTEST_HAS_STD_STRING 1 #elif !GTEST_HAS_STD_STRING // The user told us that ::std::string isn't available. # error "Google Test cannot be used where ::std::string isn't available." #endif // !defined(GTEST_HAS_STD_STRING) #ifndef GTEST_HAS_GLOBAL_STRING // The user didn't tell us whether ::string is available, so we need // to figure it out. # define GTEST_HAS_GLOBAL_STRING 0 #endif // GTEST_HAS_GLOBAL_STRING #ifndef GTEST_HAS_STD_WSTRING // The user didn't tell us whether ::std::wstring is available, so we need // to figure it out. // TODO(wan@google.com): uses autoconf to detect whether ::std::wstring // is available. // Cygwin 1.7 and below doesn't support ::std::wstring. // Solaris' libc++ doesn't support it either. 
Android has // no support for it at least as recent as Froyo (2.2). # define GTEST_HAS_STD_WSTRING \ (!(GTEST_OS_LINUX_ANDROID || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS)) #endif // GTEST_HAS_STD_WSTRING #ifndef GTEST_HAS_GLOBAL_WSTRING // The user didn't tell us whether ::wstring is available, so we need // to figure it out. # define GTEST_HAS_GLOBAL_WSTRING \ (GTEST_HAS_STD_WSTRING && GTEST_HAS_GLOBAL_STRING) #endif // GTEST_HAS_GLOBAL_WSTRING // Determines whether RTTI is available. #ifndef GTEST_HAS_RTTI // The user didn't tell us whether RTTI is enabled, so we need to // figure it out. # ifdef _MSC_VER # ifdef _CPPRTTI // MSVC defines this macro iff RTTI is enabled. # define GTEST_HAS_RTTI 1 # else # define GTEST_HAS_RTTI 0 # endif // Starting with version 4.3.2, gcc defines __GXX_RTTI iff RTTI is enabled. # elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40302) # ifdef __GXX_RTTI // When building against STLport with the Android NDK and with // -frtti -fno-exceptions, the build fails at link time with undefined // references to __cxa_bad_typeid. Note sure if STL or toolchain bug, // so disable RTTI when detected. # if GTEST_OS_LINUX_ANDROID && defined(_STLPORT_MAJOR) && \ !defined(__EXCEPTIONS) # define GTEST_HAS_RTTI 0 # else # define GTEST_HAS_RTTI 1 # endif // GTEST_OS_LINUX_ANDROID && __STLPORT_MAJOR && !__EXCEPTIONS # else # define GTEST_HAS_RTTI 0 # endif // __GXX_RTTI // Clang defines __GXX_RTTI starting with version 3.0, but its manual recommends // using has_feature instead. has_feature(cxx_rtti) is supported since 2.7, the // first version with C++ support. # elif defined(__clang__) # define GTEST_HAS_RTTI __has_feature(cxx_rtti) // Starting with version 9.0 IBM Visual Age defines __RTTI_ALL__ to 1 if // both the typeid and dynamic_cast features are present. # elif defined(__IBMCPP__) && (__IBMCPP__ >= 900) # ifdef __RTTI_ALL__ # define GTEST_HAS_RTTI 1 # else # define GTEST_HAS_RTTI 0 # endif # else // For all other compilers, we assume RTTI is enabled. # define GTEST_HAS_RTTI 1 # endif // _MSC_VER #endif // GTEST_HAS_RTTI // It's this header's responsibility to #include when RTTI // is enabled. #if GTEST_HAS_RTTI # include #endif // Determines whether Google Test can use the pthreads library. #ifndef GTEST_HAS_PTHREAD // The user didn't tell us explicitly, so we assume pthreads support is // available on Linux and Mac. // // To disable threading support in Google Test, add -DGTEST_HAS_PTHREAD=0 // to your compiler flags. # define GTEST_HAS_PTHREAD (GTEST_OS_LINUX || GTEST_OS_MAC || GTEST_OS_HPUX \ || GTEST_OS_QNX) #endif // GTEST_HAS_PTHREAD #if GTEST_HAS_PTHREAD // gtest-port.h guarantees to #include when GTEST_HAS_PTHREAD is // true. # include // NOLINT // For timespec and nanosleep, used below. # include // NOLINT #endif // Determines whether Google Test can use tr1/tuple. You can define // this macro to 0 to prevent Google Test from using tuple (any // feature depending on tuple with be disabled in this mode). #ifndef GTEST_HAS_TR1_TUPLE # if GTEST_OS_LINUX_ANDROID && defined(_STLPORT_MAJOR) // STLport, provided with the Android NDK, has neither or . # define GTEST_HAS_TR1_TUPLE 0 # else // The user didn't tell us not to do it, so we assume it's OK. # define GTEST_HAS_TR1_TUPLE 1 # endif #endif // GTEST_HAS_TR1_TUPLE // Determines whether Google Test's own tr1 tuple implementation // should be used. #ifndef GTEST_USE_OWN_TR1_TUPLE // The user didn't tell us, so we need to figure it out. 
// We use our own TR1 tuple if we aren't sure the user has an // implementation of it already. At this time, libstdc++ 4.0.0+ and // MSVC 2010 are the only mainstream standard libraries that come // with a TR1 tuple implementation. NVIDIA's CUDA NVCC compiler // pretends to be GCC by defining __GNUC__ and friends, but cannot // compile GCC's tuple implementation. MSVC 2008 (9.0) provides TR1 // tuple in a 323 MB Feature Pack download, which we cannot assume the // user has. QNX's QCC compiler is a modified GCC but it doesn't // support TR1 tuple. libc++ only provides std::tuple, in C++11 mode, // and it can be used with some compilers that define __GNUC__. # if (defined(__GNUC__) && !defined(__CUDACC__) && (GTEST_GCC_VER_ >= 40000) \ && !GTEST_OS_QNX && !defined(_LIBCPP_VERSION)) || _MSC_VER >= 1600 # define GTEST_ENV_HAS_TR1_TUPLE_ 1 # endif // C++11 specifies that provides std::tuple. Use that if gtest is used // in C++11 mode and libstdc++ isn't very old (binaries targeting OS X 10.6 // can build with clang but need to use gcc4.2's libstdc++). # if GTEST_LANG_CXX11 && (!defined(__GLIBCXX__) || __GLIBCXX__ > 20110325) # define GTEST_ENV_HAS_STD_TUPLE_ 1 # endif # if GTEST_ENV_HAS_TR1_TUPLE_ || GTEST_ENV_HAS_STD_TUPLE_ # define GTEST_USE_OWN_TR1_TUPLE 0 # else # define GTEST_USE_OWN_TR1_TUPLE 1 # endif #endif // GTEST_USE_OWN_TR1_TUPLE // To avoid conditional compilation everywhere, we make it // gtest-port.h's responsibility to #include the header implementing // tr1/tuple. #if GTEST_HAS_TR1_TUPLE # if GTEST_USE_OWN_TR1_TUPLE # include "gtest/internal/gtest-tuple.h" # elif GTEST_ENV_HAS_STD_TUPLE_ # include // C++11 puts its tuple into the ::std namespace rather than // ::std::tr1. gtest expects tuple to live in ::std::tr1, so put it there. // This causes undefined behavior, but supported compilers react in // the way we intend. namespace std { namespace tr1 { using ::std::get; using ::std::make_tuple; using ::std::tuple; using ::std::tuple_element; using ::std::tuple_size; } } # elif GTEST_OS_SYMBIAN // On Symbian, BOOST_HAS_TR1_TUPLE causes Boost's TR1 tuple library to // use STLport's tuple implementation, which unfortunately doesn't // work as the copy of STLport distributed with Symbian is incomplete. // By making sure BOOST_HAS_TR1_TUPLE is undefined, we force Boost to // use its own tuple implementation. # ifdef BOOST_HAS_TR1_TUPLE # undef BOOST_HAS_TR1_TUPLE # endif // BOOST_HAS_TR1_TUPLE // This prevents , which defines // BOOST_HAS_TR1_TUPLE, from being #included by Boost's . # define BOOST_TR1_DETAIL_CONFIG_HPP_INCLUDED # include # elif defined(__GNUC__) && (GTEST_GCC_VER_ >= 40000) // GCC 4.0+ implements tr1/tuple in the header. This does // not conform to the TR1 spec, which requires the header to be . # if !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302 // Until version 4.3.2, gcc has a bug that causes , // which is #included by , to not compile when RTTI is // disabled. _TR1_FUNCTIONAL is the header guard for // . Hence the following #define is a hack to prevent // from being included. # define _TR1_FUNCTIONAL 1 # include # undef _TR1_FUNCTIONAL // Allows the user to #include // if he chooses to. # else # include // NOLINT # endif // !GTEST_HAS_RTTI && GTEST_GCC_VER_ < 40302 # else // If the compiler is not GCC 4.0+, we assume the user is using a // spec-conforming TR1 implementation. # include // NOLINT # endif // GTEST_USE_OWN_TR1_TUPLE #endif // GTEST_HAS_TR1_TUPLE // Determines whether clone(2) is supported. 
// Usually it will only be available on Linux, excluding // Linux on the Itanium architecture. // Also see http://linux.die.net/man/2/clone. #ifndef GTEST_HAS_CLONE // The user didn't tell us, so we need to figure it out. # if GTEST_OS_LINUX && !defined(__ia64__) # if GTEST_OS_LINUX_ANDROID // On Android, clone() is only available on ARM starting with Gingerbread. # if defined(__arm__) && __ANDROID_API__ >= 9 # define GTEST_HAS_CLONE 1 # else # define GTEST_HAS_CLONE 0 # endif # else # define GTEST_HAS_CLONE 1 # endif # else # define GTEST_HAS_CLONE 0 # endif // GTEST_OS_LINUX && !defined(__ia64__) #endif // GTEST_HAS_CLONE // Determines whether to support stream redirection. This is used to test // output correctness and to implement death tests. #ifndef GTEST_HAS_STREAM_REDIRECTION // By default, we assume that stream redirection is supported on all // platforms except known mobile ones. # if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN # define GTEST_HAS_STREAM_REDIRECTION 0 # else # define GTEST_HAS_STREAM_REDIRECTION 1 # endif // !GTEST_OS_WINDOWS_MOBILE && !GTEST_OS_SYMBIAN #endif // GTEST_HAS_STREAM_REDIRECTION // Determines whether to support death tests. // Google Test does not support death tests for VC 7.1 and earlier as // abort() in a VC 7.1 application compiled as GUI in debug config // pops up a dialog window that cannot be suppressed programmatically. #if (GTEST_OS_LINUX || GTEST_OS_CYGWIN || GTEST_OS_SOLARIS || \ (GTEST_OS_MAC && !GTEST_OS_IOS) || GTEST_OS_IOS_SIMULATOR || \ (GTEST_OS_WINDOWS_DESKTOP && _MSC_VER >= 1400) || \ GTEST_OS_WINDOWS_MINGW || GTEST_OS_AIX || GTEST_OS_HPUX || \ GTEST_OS_OPENBSD || GTEST_OS_QNX) # define GTEST_HAS_DEATH_TEST 1 # include // NOLINT #endif // We don't support MSVC 7.1 with exceptions disabled now. Therefore // all the compilers we care about are adequate for supporting // value-parameterized tests. #define GTEST_HAS_PARAM_TEST 1 // Determines whether to support type-driven tests. // Typed tests need and variadic macros, which GCC, VC++ 8.0, // Sun Pro CC, IBM Visual Age, and HP aCC support. #if defined(__GNUC__) || (_MSC_VER >= 1400) || defined(__SUNPRO_CC) || \ defined(__IBMCPP__) || defined(__HP_aCC) # define GTEST_HAS_TYPED_TEST 1 # define GTEST_HAS_TYPED_TEST_P 1 #endif // Determines whether to support Combine(). This only makes sense when // value-parameterized tests are enabled. The implementation doesn't // work on Sun Studio since it doesn't understand templated conversion // operators. #if GTEST_HAS_PARAM_TEST && GTEST_HAS_TR1_TUPLE && !defined(__SUNPRO_CC) # define GTEST_HAS_COMBINE 1 #endif // Determines whether the system compiler uses UTF-16 for encoding wide strings. #define GTEST_WIDE_STRING_USES_UTF16_ \ (GTEST_OS_WINDOWS || GTEST_OS_CYGWIN || GTEST_OS_SYMBIAN || GTEST_OS_AIX) // Determines whether test results can be streamed to a socket. #if GTEST_OS_LINUX # define GTEST_CAN_STREAM_RESULTS_ 1 #endif // Defines some utility macros. // The GNU compiler emits a warning if nested "if" statements are followed by // an "else" statement and braces are not used to explicitly disambiguate the // "else" binding. This leads to problems with code like: // // if (gate) // ASSERT_*(condition) << "Some message"; // // The "switch (0) case 0:" idiom is used to suppress this. 
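// Editor's note: an illustration (not part of Google Test) of the problem the
// "switch (0) case 0: default:" blocker defined just below solves.  The
// MY_CHECK_ macro and the ReportFailure() helper are hypothetical; #if 0
// keeps this out of the build.
#if 0
#define MY_CHECK_(condition) \
  GTEST_AMBIGUOUS_ELSE_BLOCKER_ \
  if (condition) \
    ; \
  else \
    ReportFailure(#condition)

void Example(bool gate, int value) {
  // Without the blocker, the macro's trailing "else" nested under the naked
  // "if (gate)" triggers gcc's ambiguous-else warning; with it, the expansion
  // is a single switch statement and the binding is unambiguous.
  if (gate)
    MY_CHECK_(value > 0);
}
#endif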
#ifdef __INTEL_COMPILER # define GTEST_AMBIGUOUS_ELSE_BLOCKER_ #else # define GTEST_AMBIGUOUS_ELSE_BLOCKER_ switch (0) case 0: default: // NOLINT #endif // Use this annotation at the end of a struct/class definition to // prevent the compiler from optimizing away instances that are never // used. This is useful when all interesting logic happens inside the // c'tor and / or d'tor. Example: // // struct Foo { // Foo() { ... } // } GTEST_ATTRIBUTE_UNUSED_; // // Also use it after a variable or parameter declaration to tell the // compiler the variable/parameter does not have to be used. #if defined(__GNUC__) && !defined(COMPILER_ICC) # define GTEST_ATTRIBUTE_UNUSED_ __attribute__ ((unused)) #else # define GTEST_ATTRIBUTE_UNUSED_ #endif // A macro to disallow operator= // This should be used in the private: declarations for a class. #define GTEST_DISALLOW_ASSIGN_(type)\ void operator=(type const &) // A macro to disallow copy constructor and operator= // This should be used in the private: declarations for a class. #define GTEST_DISALLOW_COPY_AND_ASSIGN_(type)\ type(type const &);\ GTEST_DISALLOW_ASSIGN_(type) // Tell the compiler to warn about unused return values for functions declared // with this macro. The macro should be used on function declarations // following the argument list: // // Sprocket* AllocateSprocket() GTEST_MUST_USE_RESULT_; #if defined(__GNUC__) && (GTEST_GCC_VER_ >= 30400) && !defined(COMPILER_ICC) # define GTEST_MUST_USE_RESULT_ __attribute__ ((warn_unused_result)) #else # define GTEST_MUST_USE_RESULT_ #endif // __GNUC__ && (GTEST_GCC_VER_ >= 30400) && !COMPILER_ICC // Determine whether the compiler supports Microsoft's Structured Exception // Handling. This is supported by several Windows compilers but generally // does not exist on any other system. #ifndef GTEST_HAS_SEH // The user didn't tell us, so we need to figure it out. # if defined(_MSC_VER) || defined(__BORLANDC__) // These two compilers are known to support SEH. # define GTEST_HAS_SEH 1 # else // Assume no SEH. # define GTEST_HAS_SEH 0 # endif #endif // GTEST_HAS_SEH #ifdef _MSC_VER # if GTEST_LINKED_AS_SHARED_LIBRARY # define GTEST_API_ __declspec(dllimport) # elif GTEST_CREATE_SHARED_LIBRARY # define GTEST_API_ __declspec(dllexport) # endif #endif // _MSC_VER #ifndef GTEST_API_ # define GTEST_API_ #endif #ifdef __GNUC__ // Ask the compiler to never inline a given function. # define GTEST_NO_INLINE_ __attribute__((noinline)) #else # define GTEST_NO_INLINE_ #endif // _LIBCPP_VERSION is defined by the libc++ library from the LLVM project. #if defined(__GLIBCXX__) || defined(_LIBCPP_VERSION) # define GTEST_HAS_CXXABI_H_ 1 #else # define GTEST_HAS_CXXABI_H_ 0 #endif namespace testing { class Message; namespace internal { // A secret type that Google Test users don't know about. It has no // definition on purpose. Therefore it's impossible to create a // Secret object, which is what we want. class Secret; // The GTEST_COMPILE_ASSERT_ macro can be used to verify that a compile time // expression is true. For example, you could use it to verify the // size of a static array: // // GTEST_COMPILE_ASSERT_(ARRAYSIZE(content_type_names) == CONTENT_NUM_TYPES, // content_type_names_incorrect_size); // // or to make sure a struct is smaller than a certain size: // // GTEST_COMPILE_ASSERT_(sizeof(foo) < 128, foo_too_large); // // The second argument to the macro is the name of the variable. If // the expression is false, most compilers will issue a warning/error // containing the name of the variable. 
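// Editor's note: a short sketch (not part of Google Test) showing the intended
// placement of the helper macros defined above.  The Widget class is
// hypothetical; #if 0 keeps the sketch out of the build.
#if 0
class Widget {
 public:
  Widget() : state_(0) {}

  // Callers must not silently drop the result.
  int Validate() GTEST_MUST_USE_RESULT_;

 private:
  int state_;

  // Goes in the private: section, as documented above; disables copying.
  GTEST_DISALLOW_COPY_AND_ASSIGN_(Widget);
};
#endif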
template struct CompileAssert { }; #define GTEST_COMPILE_ASSERT_(expr, msg) \ typedef ::testing::internal::CompileAssert<(static_cast(expr))> \ msg[static_cast(expr) ? 1 : -1] GTEST_ATTRIBUTE_UNUSED_ // Implementation details of GTEST_COMPILE_ASSERT_: // // - GTEST_COMPILE_ASSERT_ works by defining an array type that has -1 // elements (and thus is invalid) when the expression is false. // // - The simpler definition // // #define GTEST_COMPILE_ASSERT_(expr, msg) typedef char msg[(expr) ? 1 : -1] // // does not work, as gcc supports variable-length arrays whose sizes // are determined at run-time (this is gcc's extension and not part // of the C++ standard). As a result, gcc fails to reject the // following code with the simple definition: // // int foo; // GTEST_COMPILE_ASSERT_(foo, msg); // not supposed to compile as foo is // // not a compile-time constant. // // - By using the type CompileAssert<(bool(expr))>, we ensures that // expr is a compile-time constant. (Template arguments must be // determined at compile-time.) // // - The outter parentheses in CompileAssert<(bool(expr))> are necessary // to work around a bug in gcc 3.4.4 and 4.0.1. If we had written // // CompileAssert // // instead, these compilers will refuse to compile // // GTEST_COMPILE_ASSERT_(5 > 0, some_message); // // (They seem to think the ">" in "5 > 0" marks the end of the // template argument list.) // // - The array size is (bool(expr) ? 1 : -1), instead of simply // // ((expr) ? 1 : -1). // // This is to avoid running into a bug in MS VC 7.1, which // causes ((0.0) ? 1 : -1) to incorrectly evaluate to 1. // StaticAssertTypeEqHelper is used by StaticAssertTypeEq defined in gtest.h. // // This template is declared, but intentionally undefined. template struct StaticAssertTypeEqHelper; template struct StaticAssertTypeEqHelper {}; #if GTEST_HAS_GLOBAL_STRING typedef ::string string; #else typedef ::std::string string; #endif // GTEST_HAS_GLOBAL_STRING #if GTEST_HAS_GLOBAL_WSTRING typedef ::wstring wstring; #elif GTEST_HAS_STD_WSTRING typedef ::std::wstring wstring; #endif // GTEST_HAS_GLOBAL_WSTRING // A helper for suppressing warnings on constant condition. It just // returns 'condition'. GTEST_API_ bool IsTrue(bool condition); // Defines scoped_ptr. // This implementation of scoped_ptr is PARTIAL - it only contains // enough stuff to satisfy Google Test's need. template class scoped_ptr { public: typedef T element_type; explicit scoped_ptr(T* p = NULL) : ptr_(p) {} ~scoped_ptr() { reset(); } T& operator*() const { return *ptr_; } T* operator->() const { return ptr_; } T* get() const { return ptr_; } T* release() { T* const ptr = ptr_; ptr_ = NULL; return ptr; } void reset(T* p = NULL) { if (p != ptr_) { if (IsTrue(sizeof(T) > 0)) { // Makes sure T is a complete type. delete ptr_; } ptr_ = p; } } private: T* ptr_; GTEST_DISALLOW_COPY_AND_ASSIGN_(scoped_ptr); }; // Defines RE. // A simple C++ wrapper for . It uses the POSIX Extended // Regular Expression syntax. class GTEST_API_ RE { public: // A copy constructor is required by the Standard to initialize object // references from r-values. RE(const RE& other) { Init(other.pattern()); } // Constructs an RE from a string. RE(const ::std::string& regex) { Init(regex.c_str()); } // NOLINT #if GTEST_HAS_GLOBAL_STRING RE(const ::string& regex) { Init(regex.c_str()); } // NOLINT #endif // GTEST_HAS_GLOBAL_STRING RE(const char* regex) { Init(regex); } // NOLINT ~RE(); // Returns the string representation of the regex. 
const char* pattern() const { return pattern_; } // FullMatch(str, re) returns true iff regular expression re matches // the entire str. // PartialMatch(str, re) returns true iff regular expression re // matches a substring of str (including str itself). // // TODO(wan@google.com): make FullMatch() and PartialMatch() work // when str contains NUL characters. static bool FullMatch(const ::std::string& str, const RE& re) { return FullMatch(str.c_str(), re); } static bool PartialMatch(const ::std::string& str, const RE& re) { return PartialMatch(str.c_str(), re); } #if GTEST_HAS_GLOBAL_STRING static bool FullMatch(const ::string& str, const RE& re) { return FullMatch(str.c_str(), re); } static bool PartialMatch(const ::string& str, const RE& re) { return PartialMatch(str.c_str(), re); } #endif // GTEST_HAS_GLOBAL_STRING static bool FullMatch(const char* str, const RE& re); static bool PartialMatch(const char* str, const RE& re); private: void Init(const char* regex); // We use a const char* instead of an std::string, as Google Test used to be // used where std::string is not available. TODO(wan@google.com): change to // std::string. const char* pattern_; bool is_valid_; #if GTEST_USES_POSIX_RE regex_t full_regex_; // For FullMatch(). regex_t partial_regex_; // For PartialMatch(). #else // GTEST_USES_SIMPLE_RE const char* full_pattern_; // For FullMatch(); #endif GTEST_DISALLOW_ASSIGN_(RE); }; // Formats a source file path and a line number as they would appear // in an error message from the compiler used to compile this code. GTEST_API_ ::std::string FormatFileLocation(const char* file, int line); // Formats a file location for compiler-independent XML output. // Although this function is not platform dependent, we put it next to // FormatFileLocation in order to contrast the two functions. GTEST_API_ ::std::string FormatCompilerIndependentFileLocation(const char* file, int line); // Defines logging utilities: // GTEST_LOG_(severity) - logs messages at the specified severity level. The // message itself is streamed into the macro. // LogToStderr() - directs all log messages to stderr. // FlushInfoLog() - flushes informational log messages. enum GTestLogSeverity { GTEST_INFO, GTEST_WARNING, GTEST_ERROR, GTEST_FATAL }; // Formats log entry severity, provides a stream object for streaming the // log message, and terminates the message with a newline when going out of // scope. class GTEST_API_ GTestLog { public: GTestLog(GTestLogSeverity severity, const char* file, int line); // Flushes the buffers and, if severity is GTEST_FATAL, aborts the program. ~GTestLog(); ::std::ostream& GetStream() { return ::std::cerr; } private: const GTestLogSeverity severity_; GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestLog); }; #define GTEST_LOG_(severity) \ ::testing::internal::GTestLog(::testing::internal::GTEST_##severity, \ __FILE__, __LINE__).GetStream() inline void LogToStderr() {} inline void FlushInfoLog() { fflush(NULL); } // INTERNAL IMPLEMENTATION - DO NOT USE. // // GTEST_CHECK_ is an all-mode assert. It aborts the program if the condition // is not satisfied. // Synopsys: // GTEST_CHECK_(boolean_condition); // or // GTEST_CHECK_(boolean_condition) << "Additional message"; // // This checks the condition and if the condition is not satisfied // it prints message about the condition violation, including the // condition itself, plus additional message streamed into it, if any, // and then it aborts the program. It aborts the program irrespective of // whether it is built in the debug mode or not. 
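// Illustrative usage sketch for the logging utilities declared above (not
// from the upstream gtest sources; WarnAboutConfig is a hypothetical helper
// and the block is disabled).  GTEST_CHECK_, documented just above, is
// defined immediately below.
#if 0
inline void WarnAboutConfig() {
  GTEST_LOG_(WARNING) << "Falling back to the default configuration at "
                      << FormatFileLocation(__FILE__, __LINE__);
}
#endif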
#define GTEST_CHECK_(condition) \ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \ if (::testing::internal::IsTrue(condition)) \ ; \ else \ GTEST_LOG_(FATAL) << "Condition " #condition " failed. " // An all-mode assert to verify that the given POSIX-style function // call returns 0 (indicating success). Known limitation: this // doesn't expand to a balanced 'if' statement, so enclose the macro // in {} if you need to use it as the only statement in an 'if' // branch. #define GTEST_CHECK_POSIX_SUCCESS_(posix_call) \ if (const int gtest_error = (posix_call)) \ GTEST_LOG_(FATAL) << #posix_call << "failed with error " \ << gtest_error // INTERNAL IMPLEMENTATION - DO NOT USE IN USER CODE. // // Use ImplicitCast_ as a safe version of static_cast for upcasting in // the type hierarchy (e.g. casting a Foo* to a SuperclassOfFoo* or a // const Foo*). When you use ImplicitCast_, the compiler checks that // the cast is safe. Such explicit ImplicitCast_s are necessary in // surprisingly many situations where C++ demands an exact type match // instead of an argument type convertable to a target type. // // The syntax for using ImplicitCast_ is the same as for static_cast: // // ImplicitCast_(expr) // // ImplicitCast_ would have been part of the C++ standard library, // but the proposal was submitted too late. It will probably make // its way into the language in the future. // // This relatively ugly name is intentional. It prevents clashes with // similar functions users may have (e.g., implicit_cast). The internal // namespace alone is not enough because the function can be found by ADL. template inline To ImplicitCast_(To x) { return x; } // When you upcast (that is, cast a pointer from type Foo to type // SuperclassOfFoo), it's fine to use ImplicitCast_<>, since upcasts // always succeed. When you downcast (that is, cast a pointer from // type Foo to type SubclassOfFoo), static_cast<> isn't safe, because // how do you know the pointer is really of type SubclassOfFoo? It // could be a bare Foo, or of type DifferentSubclassOfFoo. Thus, // when you downcast, you should use this macro. In debug mode, we // use dynamic_cast<> to double-check the downcast is legal (we die // if it's not). In normal mode, we do the efficient static_cast<> // instead. Thus, it's important to test in debug mode to make sure // the cast is legal! // This is the only place in the code we should use dynamic_cast<>. // In particular, you SHOULDN'T be using dynamic_cast<> in order to // do RTTI (eg code like this: // if (dynamic_cast(foo)) HandleASubclass1Object(foo); // if (dynamic_cast(foo)) HandleASubclass2Object(foo); // You should design the code some other way not to need this. // // This relatively ugly name is intentional. It prevents clashes with // similar functions users may have (e.g., down_cast). The internal // namespace alone is not enough because the function can be found by ADL. template // use like this: DownCast_(foo); inline To DownCast_(From* f) { // so we only accept pointers // Ensures that To is a sub-type of From *. This test is here only // for compile-time type checking, and has no overhead in an // optimized build at run-time, as it will be optimized away // completely. if (false) { const To to = NULL; ::testing::internal::ImplicitCast_(to); } #if GTEST_HAS_RTTI // RTTI: debug mode only! GTEST_CHECK_(f == NULL || dynamic_cast(f) != NULL); #endif return static_cast(f); } // Downcasts the pointer of type Base to Derived. // Derived must be a subclass of Base. 
The parameter MUST // point to a class of type Derived, not any subclass of it. // When RTTI is available, the function performs a runtime // check to enforce this. template Derived* CheckedDowncastToActualType(Base* base) { #if GTEST_HAS_RTTI GTEST_CHECK_(typeid(*base) == typeid(Derived)); return dynamic_cast(base); // NOLINT #else return static_cast(base); // Poor man's downcast. #endif } #if GTEST_HAS_STREAM_REDIRECTION // Defines the stderr capturer: // CaptureStdout - starts capturing stdout. // GetCapturedStdout - stops capturing stdout and returns the captured string. // CaptureStderr - starts capturing stderr. // GetCapturedStderr - stops capturing stderr and returns the captured string. // GTEST_API_ void CaptureStdout(); GTEST_API_ std::string GetCapturedStdout(); GTEST_API_ void CaptureStderr(); GTEST_API_ std::string GetCapturedStderr(); #endif // GTEST_HAS_STREAM_REDIRECTION #if GTEST_HAS_DEATH_TEST const ::std::vector& GetInjectableArgvs(); void SetInjectableArgvs(const ::std::vector* new_argvs); // A copy of all command line arguments. Set by InitGoogleTest(). extern ::std::vector g_argvs; #endif // GTEST_HAS_DEATH_TEST // Defines synchronization primitives. #if GTEST_HAS_PTHREAD // Sleeps for (roughly) n milli-seconds. This function is only for // testing Google Test's own constructs. Don't use it in user tests, // either directly or indirectly. inline void SleepMilliseconds(int n) { const timespec time = { 0, // 0 seconds. n * 1000L * 1000L, // And n ms. }; nanosleep(&time, NULL); } // Allows a controller thread to pause execution of newly created // threads until notified. Instances of this class must be created // and destroyed in the controller thread. // // This class is only for testing Google Test's own constructs. Do not // use it in user tests, either directly or indirectly. class Notification { public: Notification() : notified_(false) { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL)); } ~Notification() { pthread_mutex_destroy(&mutex_); } // Notifies all threads created with this notification to start. Must // be called from the controller thread. void Notify() { pthread_mutex_lock(&mutex_); notified_ = true; pthread_mutex_unlock(&mutex_); } // Blocks until the controller thread notifies. Must be called from a test // thread. void WaitForNotification() { for (;;) { pthread_mutex_lock(&mutex_); const bool notified = notified_; pthread_mutex_unlock(&mutex_); if (notified) break; SleepMilliseconds(10); } } private: pthread_mutex_t mutex_; bool notified_; GTEST_DISALLOW_COPY_AND_ASSIGN_(Notification); }; // As a C-function, ThreadFuncWithCLinkage cannot be templated itself. // Consequently, it cannot select a correct instantiation of ThreadWithParam // in order to call its Run(). Introducing ThreadWithParamBase as a // non-templated base class for ThreadWithParam allows us to bypass this // problem. class ThreadWithParamBase { public: virtual ~ThreadWithParamBase() {} virtual void Run() = 0; }; // pthread_create() accepts a pointer to a function type with the C linkage. // According to the Standard (7.5/1), function types with different linkages // are different even if they are otherwise identical. Some compilers (for // example, SunStudio) treat them as different types. Since class methods // cannot be defined with C-linkage we need to define a free C-function to // pass into pthread_create(). 
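// Illustrative usage sketch for the casting helpers defined further above
// (not from the upstream gtest sources; Animal and Dog are hypothetical
// types and the block is disabled).  The C-linkage thread function motivated
// by the comment just above follows immediately below.
#if 0
class Animal { public: virtual ~Animal() {} };
class Dog : public Animal {};

inline void CastSketch(Dog* dog) {
  Animal* animal = ImplicitCast_<Animal*>(dog);   // upcast, checked by the compiler
  Dog* down = DownCast_<Dog*>(animal);            // dynamic_cast-checked in debug builds
  Dog* exact = CheckedDowncastToActualType<Dog, Animal>(animal);  // exact-type downcast
  (void)down; (void)exact;
}
#endif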
extern "C" inline void* ThreadFuncWithCLinkage(void* thread) { static_cast(thread)->Run(); return NULL; } // Helper class for testing Google Test's multi-threading constructs. // To use it, write: // // void ThreadFunc(int param) { /* Do things with param */ } // Notification thread_can_start; // ... // // The thread_can_start parameter is optional; you can supply NULL. // ThreadWithParam thread(&ThreadFunc, 5, &thread_can_start); // thread_can_start.Notify(); // // These classes are only for testing Google Test's own constructs. Do // not use them in user tests, either directly or indirectly. template class ThreadWithParam : public ThreadWithParamBase { public: typedef void (*UserThreadFunc)(T); ThreadWithParam( UserThreadFunc func, T param, Notification* thread_can_start) : func_(func), param_(param), thread_can_start_(thread_can_start), finished_(false) { ThreadWithParamBase* const base = this; // The thread can be created only after all fields except thread_ // have been initialized. GTEST_CHECK_POSIX_SUCCESS_( pthread_create(&thread_, 0, &ThreadFuncWithCLinkage, base)); } ~ThreadWithParam() { Join(); } void Join() { if (!finished_) { GTEST_CHECK_POSIX_SUCCESS_(pthread_join(thread_, 0)); finished_ = true; } } virtual void Run() { if (thread_can_start_ != NULL) thread_can_start_->WaitForNotification(); func_(param_); } private: const UserThreadFunc func_; // User-supplied thread function. const T param_; // User-supplied parameter to the thread function. // When non-NULL, used to block execution until the controller thread // notifies. Notification* const thread_can_start_; bool finished_; // true iff we know that the thread function has finished. pthread_t thread_; // The native thread object. GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadWithParam); }; // MutexBase and Mutex implement mutex on pthreads-based platforms. They // are used in conjunction with class MutexLock: // // Mutex mutex; // ... // MutexLock lock(&mutex); // Acquires the mutex and releases it at the end // // of the current scope. // // MutexBase implements behavior for both statically and dynamically // allocated mutexes. Do not use MutexBase directly. Instead, write // the following to define a static mutex: // // GTEST_DEFINE_STATIC_MUTEX_(g_some_mutex); // // You can forward declare a static mutex like this: // // GTEST_DECLARE_STATIC_MUTEX_(g_some_mutex); // // To create a dynamic mutex, just define an object of type Mutex. class MutexBase { public: // Acquires this mutex. void Lock() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_lock(&mutex_)); owner_ = pthread_self(); has_owner_ = true; } // Releases this mutex. void Unlock() { // Since the lock is being released the owner_ field should no longer be // considered valid. We don't protect writing to has_owner_ here, as it's // the caller's responsibility to ensure that the current thread holds the // mutex when this is called. has_owner_ = false; GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_unlock(&mutex_)); } // Does nothing if the current thread holds the mutex. Otherwise, crashes // with high probability. void AssertHeld() const { GTEST_CHECK_(has_owner_ && pthread_equal(owner_, pthread_self())) << "The current thread is not holding the mutex @" << this; } // A static mutex may be used before main() is entered. It may even // be used before the dynamic initialization stage. Therefore we // must be able to initialize a static mutex object at link time. // This means MutexBase has to be a POD and its member variables // have to be public. 
public: pthread_mutex_t mutex_; // The underlying pthread mutex. // has_owner_ indicates whether the owner_ field below contains a valid thread // ID and is therefore safe to inspect (e.g., to use in pthread_equal()). All // accesses to the owner_ field should be protected by a check of this field. // An alternative might be to memset() owner_ to all zeros, but there's no // guarantee that a zero'd pthread_t is necessarily invalid or even different // from pthread_self(). bool has_owner_; pthread_t owner_; // The thread holding the mutex. }; // Forward-declares a static mutex. # define GTEST_DECLARE_STATIC_MUTEX_(mutex) \ extern ::testing::internal::MutexBase mutex // Defines and statically (i.e. at link time) initializes a static mutex. // The initialization list here does not explicitly initialize each field, // instead relying on default initialization for the unspecified fields. In // particular, the owner_ field (a pthread_t) is not explicitly initialized. // This allows initialization to work whether pthread_t is a scalar or struct. // The flag -Wmissing-field-initializers must not be specified for this to work. # define GTEST_DEFINE_STATIC_MUTEX_(mutex) \ ::testing::internal::MutexBase mutex = { PTHREAD_MUTEX_INITIALIZER, false } // The Mutex class can only be used for mutexes created at runtime. It // shares its API with MutexBase otherwise. class Mutex : public MutexBase { public: Mutex() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_init(&mutex_, NULL)); has_owner_ = false; } ~Mutex() { GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_destroy(&mutex_)); } private: GTEST_DISALLOW_COPY_AND_ASSIGN_(Mutex); }; // We cannot name this class MutexLock as the ctor declaration would // conflict with a macro named MutexLock, which is defined on some // platforms. Hence the typedef trick below. class GTestMutexLock { public: explicit GTestMutexLock(MutexBase* mutex) : mutex_(mutex) { mutex_->Lock(); } ~GTestMutexLock() { mutex_->Unlock(); } private: MutexBase* const mutex_; GTEST_DISALLOW_COPY_AND_ASSIGN_(GTestMutexLock); }; typedef GTestMutexLock MutexLock; // Helpers for ThreadLocal. // pthread_key_create() requires DeleteThreadLocalValue() to have // C-linkage. Therefore it cannot be templatized to access // ThreadLocal. Hence the need for class // ThreadLocalValueHolderBase. class ThreadLocalValueHolderBase { public: virtual ~ThreadLocalValueHolderBase() {} }; // Called by pthread to delete thread-local data stored by // pthread_setspecific(). extern "C" inline void DeleteThreadLocalValue(void* value_holder) { delete static_cast(value_holder); } // Implements thread-local storage on pthreads-based systems. // // // Thread 1 // ThreadLocal tl(100); // 100 is the default value for each thread. // // // Thread 2 // tl.set(150); // Changes the value for thread 2 only. // EXPECT_EQ(150, tl.get()); // // // Thread 1 // EXPECT_EQ(100, tl.get()); // In thread 1, tl has the original value. // tl.set(200); // EXPECT_EQ(200, tl.get()); // // The template type argument T must have a public copy constructor. // In addition, the default ThreadLocal constructor requires T to have // a public default constructor. // // An object managed for a thread by a ThreadLocal instance is deleted // when the thread exits. Or, if the ThreadLocal instance dies in // that thread, when the ThreadLocal dies. It's the user's // responsibility to ensure that all other threads using a ThreadLocal // have exited when it dies, or the per-thread objects for those // threads will not be deleted. 
// // Google Test only uses global ThreadLocal objects. That means they // will die after main() has returned. Therefore, no per-thread // object managed by Google Test will be leaked as long as all threads // using Google Test have exited when main() returns. template class ThreadLocal { public: ThreadLocal() : key_(CreateKey()), default_() {} explicit ThreadLocal(const T& value) : key_(CreateKey()), default_(value) {} ~ThreadLocal() { // Destroys the managed object for the current thread, if any. DeleteThreadLocalValue(pthread_getspecific(key_)); // Releases resources associated with the key. This will *not* // delete managed objects for other threads. GTEST_CHECK_POSIX_SUCCESS_(pthread_key_delete(key_)); } T* pointer() { return GetOrCreateValue(); } const T* pointer() const { return GetOrCreateValue(); } const T& get() const { return *pointer(); } void set(const T& value) { *pointer() = value; } private: // Holds a value of type T. class ValueHolder : public ThreadLocalValueHolderBase { public: explicit ValueHolder(const T& value) : value_(value) {} T* pointer() { return &value_; } private: T value_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ValueHolder); }; static pthread_key_t CreateKey() { pthread_key_t key; // When a thread exits, DeleteThreadLocalValue() will be called on // the object managed for that thread. GTEST_CHECK_POSIX_SUCCESS_( pthread_key_create(&key, &DeleteThreadLocalValue)); return key; } T* GetOrCreateValue() const { ThreadLocalValueHolderBase* const holder = static_cast(pthread_getspecific(key_)); if (holder != NULL) { return CheckedDowncastToActualType(holder)->pointer(); } ValueHolder* const new_holder = new ValueHolder(default_); ThreadLocalValueHolderBase* const holder_base = new_holder; GTEST_CHECK_POSIX_SUCCESS_(pthread_setspecific(key_, holder_base)); return new_holder->pointer(); } // A key pthreads uses for looking up per-thread values. const pthread_key_t key_; const T default_; // The default value for each thread. GTEST_DISALLOW_COPY_AND_ASSIGN_(ThreadLocal); }; # define GTEST_IS_THREADSAFE 1 #else // GTEST_HAS_PTHREAD // A dummy implementation of synchronization primitives (mutex, lock, // and thread-local variable). Necessary for compiling Google Test where // mutex is not supported - using Google Test in multiple threads is not // supported on such platforms. class Mutex { public: Mutex() {} void Lock() {} void Unlock() {} void AssertHeld() const {} }; # define GTEST_DECLARE_STATIC_MUTEX_(mutex) \ extern ::testing::internal::Mutex mutex # define GTEST_DEFINE_STATIC_MUTEX_(mutex) ::testing::internal::Mutex mutex class GTestMutexLock { public: explicit GTestMutexLock(Mutex*) {} // NOLINT }; typedef GTestMutexLock MutexLock; template class ThreadLocal { public: ThreadLocal() : value_() {} explicit ThreadLocal(const T& value) : value_(value) {} T* pointer() { return &value_; } const T* pointer() const { return &value_; } const T& get() const { return value_; } void set(const T& value) { value_ = value; } private: T value_; }; // The above synchronization primitives have dummy implementations. // Therefore Google Test is not thread-safe. # define GTEST_IS_THREADSAFE 0 #endif // GTEST_HAS_PTHREAD // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. GTEST_API_ size_t GetThreadCount(); // Passing non-POD classes through ellipsis (...) crashes the ARM // compiler and generates a warning in Sun Studio. 
The Nokia Symbian // and the IBM XL C/C++ compiler try to instantiate a copy constructor // for objects passed through ellipsis (...), failing for uncopyable // objects. We define this to ensure that only POD is passed through // ellipsis on these systems. #if defined(__SYMBIAN32__) || defined(__IBMCPP__) || defined(__SUNPRO_CC) // We lose support for NULL detection where the compiler doesn't like // passing non-POD classes through ellipsis (...). # define GTEST_ELLIPSIS_NEEDS_POD_ 1 #else # define GTEST_CAN_COMPARE_NULL 1 #endif // The Nokia Symbian and IBM XL C/C++ compilers cannot decide between // const T& and const T* in a function template. These compilers // _can_ decide between class template specializations for T and T*, // so a tr1::type_traits-like is_pointer works. #if defined(__SYMBIAN32__) || defined(__IBMCPP__) # define GTEST_NEEDS_IS_POINTER_ 1 #endif template struct bool_constant { typedef bool_constant type; static const bool value = bool_value; }; template const bool bool_constant::value; typedef bool_constant false_type; typedef bool_constant true_type; template struct is_pointer : public false_type {}; template struct is_pointer : public true_type {}; template struct IteratorTraits { typedef typename Iterator::value_type value_type; }; template struct IteratorTraits { typedef T value_type; }; template struct IteratorTraits { typedef T value_type; }; #if GTEST_OS_WINDOWS # define GTEST_PATH_SEP_ "\\" # define GTEST_HAS_ALT_PATH_SEP_ 1 // The biggest signed integer type the compiler supports. typedef __int64 BiggestInt; #else # define GTEST_PATH_SEP_ "/" # define GTEST_HAS_ALT_PATH_SEP_ 0 typedef long long BiggestInt; // NOLINT #endif // GTEST_OS_WINDOWS // Utilities for char. // isspace(int ch) and friends accept an unsigned char or EOF. char // may be signed, depending on the compiler (or compiler flags). // Therefore we need to cast a char to unsigned char before calling // isspace(), etc. inline bool IsAlpha(char ch) { return isalpha(static_cast(ch)) != 0; } inline bool IsAlNum(char ch) { return isalnum(static_cast(ch)) != 0; } inline bool IsDigit(char ch) { return isdigit(static_cast(ch)) != 0; } inline bool IsLower(char ch) { return islower(static_cast(ch)) != 0; } inline bool IsSpace(char ch) { return isspace(static_cast(ch)) != 0; } inline bool IsUpper(char ch) { return isupper(static_cast(ch)) != 0; } inline bool IsXDigit(char ch) { return isxdigit(static_cast(ch)) != 0; } inline bool IsXDigit(wchar_t ch) { const unsigned char low_byte = static_cast(ch); return ch == low_byte && isxdigit(low_byte) != 0; } inline char ToLower(char ch) { return static_cast(tolower(static_cast(ch))); } inline char ToUpper(char ch) { return static_cast(toupper(static_cast(ch))); } // The testing::internal::posix namespace holds wrappers for common // POSIX functions. These wrappers hide the differences between // Windows/MSVC and POSIX systems. Since some compilers define these // standard functions as macros, the wrapper cannot have the same name // as the wrapped function. namespace posix { // Functions with a different name on Windows. 
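// Illustrative usage sketch for the char helpers defined above (not from the
// upstream gtest sources; NormalizeKey is a hypothetical function and the
// block is disabled).  The Windows/POSIX wrappers named in the comment just
// above follow immediately below.
#if 0
inline ::std::string NormalizeKey(const ::std::string& key) {
  ::std::string out;
  for (size_t i = 0; i < key.size(); ++i) {
    const char ch = key[i];
    if (IsAlNum(ch)) out += ToLower(ch);   // keep letters/digits, lower-cased
    else if (IsSpace(ch)) out += '_';      // map whitespace to underscores
  }
  return out;
}
#endif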
#if GTEST_OS_WINDOWS typedef struct _stat StatStruct; # ifdef __BORLANDC__ inline int IsATTY(int fd) { return isatty(fd); } inline int StrCaseCmp(const char* s1, const char* s2) { return stricmp(s1, s2); } inline char* StrDup(const char* src) { return strdup(src); } # else // !__BORLANDC__ # if GTEST_OS_WINDOWS_MOBILE inline int IsATTY(int /* fd */) { return 0; } # else inline int IsATTY(int fd) { return _isatty(fd); } # endif // GTEST_OS_WINDOWS_MOBILE inline int StrCaseCmp(const char* s1, const char* s2) { return _stricmp(s1, s2); } inline char* StrDup(const char* src) { return _strdup(src); } # endif // __BORLANDC__ # if GTEST_OS_WINDOWS_MOBILE inline int FileNo(FILE* file) { return reinterpret_cast(_fileno(file)); } // Stat(), RmDir(), and IsDir() are not needed on Windows CE at this // time and thus not defined there. # else inline int FileNo(FILE* file) { return _fileno(file); } inline int Stat(const char* path, StatStruct* buf) { return _stat(path, buf); } inline int RmDir(const char* dir) { return _rmdir(dir); } inline bool IsDir(const StatStruct& st) { return (_S_IFDIR & st.st_mode) != 0; } # endif // GTEST_OS_WINDOWS_MOBILE #else typedef struct stat StatStruct; inline int FileNo(FILE* file) { return fileno(file); } inline int IsATTY(int fd) { return isatty(fd); } inline int Stat(const char* path, StatStruct* buf) { return stat(path, buf); } inline int StrCaseCmp(const char* s1, const char* s2) { return strcasecmp(s1, s2); } inline char* StrDup(const char* src) { return strdup(src); } inline int RmDir(const char* dir) { return rmdir(dir); } inline bool IsDir(const StatStruct& st) { return S_ISDIR(st.st_mode); } #endif // GTEST_OS_WINDOWS // Functions deprecated by MSVC 8.0. #ifdef _MSC_VER // Temporarily disable warning 4996 (deprecated function). # pragma warning(push) # pragma warning(disable:4996) #endif inline const char* StrNCpy(char* dest, const char* src, size_t n) { return strncpy(dest, src, n); } // ChDir(), FReopen(), FDOpen(), Read(), Write(), Close(), and // StrError() aren't needed on Windows CE at this time and thus not // defined there. #if !GTEST_OS_WINDOWS_MOBILE inline int ChDir(const char* dir) { return chdir(dir); } #endif inline FILE* FOpen(const char* path, const char* mode) { return fopen(path, mode); } #if !GTEST_OS_WINDOWS_MOBILE inline FILE *FReopen(const char* path, const char* mode, FILE* stream) { return freopen(path, mode, stream); } inline FILE* FDOpen(int fd, const char* mode) { return fdopen(fd, mode); } #endif inline int FClose(FILE* fp) { return fclose(fp); } #if !GTEST_OS_WINDOWS_MOBILE inline int Read(int fd, void* buf, unsigned int count) { return static_cast(read(fd, buf, count)); } inline int Write(int fd, const void* buf, unsigned int count) { return static_cast(write(fd, buf, count)); } inline int Close(int fd) { return close(fd); } inline const char* StrError(int errnum) { return strerror(errnum); } #endif inline const char* GetEnv(const char* name) { #if GTEST_OS_WINDOWS_MOBILE // We are on Windows CE, which has no environment variables. return NULL; #elif defined(__BORLANDC__) || defined(__SunOS_5_8) || defined(__SunOS_5_9) // Environment variables which we programmatically clear will be set to the // empty string rather than unset (NULL). Handle that case. const char* const env = getenv(name); return (env != NULL && env[0] != '\0') ? env : NULL; #else return getenv(name); #endif } #ifdef _MSC_VER # pragma warning(pop) // Restores the warning state. #endif #if GTEST_OS_WINDOWS_MOBILE // Windows CE has no C library. 
The abort() function is used in // several places in Google Test. This implementation provides a reasonable // imitation of standard behaviour. void Abort(); #else inline void Abort() { abort(); } #endif // GTEST_OS_WINDOWS_MOBILE } // namespace posix // MSVC "deprecates" snprintf and issues warnings wherever it is used. In // order to avoid these warnings, we need to use _snprintf or _snprintf_s on // MSVC-based platforms. We map the GTEST_SNPRINTF_ macro to the appropriate // function in order to achieve that. We use macro definition here because // snprintf is a variadic function. #if _MSC_VER >= 1400 && !GTEST_OS_WINDOWS_MOBILE // MSVC 2005 and above support variadic macros. # define GTEST_SNPRINTF_(buffer, size, format, ...) \ _snprintf_s(buffer, size, size, format, __VA_ARGS__) #elif defined(_MSC_VER) // Windows CE does not define _snprintf_s and MSVC prior to 2005 doesn't // complain about _snprintf. # define GTEST_SNPRINTF_ _snprintf #else # define GTEST_SNPRINTF_ snprintf #endif // The maximum number a BiggestInt can represent. This definition // works no matter BiggestInt is represented in one's complement or // two's complement. // // We cannot rely on numeric_limits in STL, as __int64 and long long // are not part of standard C++ and numeric_limits doesn't need to be // defined for them. const BiggestInt kMaxBiggestInt = ~(static_cast(1) << (8*sizeof(BiggestInt) - 1)); // This template class serves as a compile-time function from size to // type. It maps a size in bytes to a primitive type with that // size. e.g. // // TypeWithSize<4>::UInt // // is typedef-ed to be unsigned int (unsigned integer made up of 4 // bytes). // // Such functionality should belong to STL, but I cannot find it // there. // // Google Test uses this class in the implementation of floating-point // comparison. // // For now it only handles UInt (unsigned int) as that's all Google Test // needs. Other types can be easily added in the future if need // arises. template class TypeWithSize { public: // This prevents the user from using TypeWithSize with incorrect // values of N. typedef void UInt; }; // The specialization for size 4. template <> class TypeWithSize<4> { public: // unsigned int has size 4 in both gcc and MSVC. // // As base/basictypes.h doesn't compile on Windows, we cannot use // uint32, uint64, and etc here. typedef int Int; typedef unsigned int UInt; }; // The specialization for size 8. template <> class TypeWithSize<8> { public: #if GTEST_OS_WINDOWS typedef __int64 Int; typedef unsigned __int64 UInt; #else typedef long long Int; // NOLINT typedef unsigned long long UInt; // NOLINT #endif // GTEST_OS_WINDOWS }; // Integer types of known sizes. typedef TypeWithSize<4>::Int Int32; typedef TypeWithSize<4>::UInt UInt32; typedef TypeWithSize<8>::Int Int64; typedef TypeWithSize<8>::UInt UInt64; typedef TypeWithSize<8>::Int TimeInMillis; // Represents time in milliseconds. // Utilities for command line flags and environment variables. // Macro for referencing flags. #define GTEST_FLAG(name) FLAGS_gtest_##name // Macros for declaring flags. #define GTEST_DECLARE_bool_(name) GTEST_API_ extern bool GTEST_FLAG(name) #define GTEST_DECLARE_int32_(name) \ GTEST_API_ extern ::testing::internal::Int32 GTEST_FLAG(name) #define GTEST_DECLARE_string_(name) \ GTEST_API_ extern ::std::string GTEST_FLAG(name) // Macros for defining flags. 
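// Illustrative usage sketch for the fixed-width typedefs and flag macros
// above (not from the upstream gtest sources; "my_option", Accumulator and
// Scale are hypothetical names and the block is disabled).  The matching
// GTEST_DEFINE_* macros follow immediately below.
#if 0
// 64-bit accumulator selected via the compile-time size-to-type map.
typedef TypeWithSize<8>::Int Accumulator;

// Expands to: GTEST_API_ extern ::testing::internal::Int32 FLAGS_gtest_my_option;
GTEST_DECLARE_int32_(my_option);

inline Int64 Scale(Int32 x) { return static_cast<Accumulator>(x) * 1000; }
#endif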
#define GTEST_DEFINE_bool_(name, default_val, doc) \ GTEST_API_ bool GTEST_FLAG(name) = (default_val) #define GTEST_DEFINE_int32_(name, default_val, doc) \ GTEST_API_ ::testing::internal::Int32 GTEST_FLAG(name) = (default_val) #define GTEST_DEFINE_string_(name, default_val, doc) \ GTEST_API_ ::std::string GTEST_FLAG(name) = (default_val) // Thread annotations #define GTEST_EXCLUSIVE_LOCK_REQUIRED_(locks) #define GTEST_LOCK_EXCLUDED_(locks) // Parses 'str' for a 32-bit signed integer. If successful, writes the result // to *value and returns true; otherwise leaves *value unchanged and returns // false. // TODO(chandlerc): Find a better way to refactor flag and environment parsing // out of both gtest-port.cc and gtest.cc to avoid exporting this utility // function. bool ParseInt32(const Message& src_text, const char* str, Int32* value); // Parses a bool/Int32/string from the environment variable // corresponding to the given Google Test flag. bool BoolFromGTestEnv(const char* flag, bool default_val); GTEST_API_ Int32 Int32FromGTestEnv(const char* flag, Int32 default_val); const char* StringFromGTestEnv(const char* flag, const char* default_val); } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_PORT_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-string.h000066400000000000000000000154701346026536000254720ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: wan@google.com (Zhanyong Wan), eefacm@gmail.com (Sean Mcafee) // // The Google C++ Testing Framework (Google Test) // // This header file declares the String class and functions used internally by // Google Test. They are subject to change without notice. They should not used // by code external to Google Test. // // This header file is #included by . // It should not be #included by other files. 
#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ #ifdef __BORLANDC__ // string.h is not guaranteed to provide strcpy on C++ Builder. # include #endif #include #include #include "gtest/internal/gtest-port.h" namespace testing { namespace internal { // String - an abstract class holding static string utilities. class GTEST_API_ String { public: // Static utility methods // Clones a 0-terminated C string, allocating memory using new. The // caller is responsible for deleting the return value using // delete[]. Returns the cloned string, or NULL if the input is // NULL. // // This is different from strdup() in string.h, which allocates // memory using malloc(). static const char* CloneCString(const char* c_str); #if GTEST_OS_WINDOWS_MOBILE // Windows CE does not have the 'ANSI' versions of Win32 APIs. To be // able to pass strings to Win32 APIs on CE we need to convert them // to 'Unicode', UTF-16. // Creates a UTF-16 wide string from the given ANSI string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the wide string, or NULL if the // input is NULL. // // The wide string is created using the ANSI codepage (CP_ACP) to // match the behaviour of the ANSI versions of Win32 calls and the // C runtime. static LPCWSTR AnsiToUtf16(const char* c_str); // Creates an ANSI string from the given wide string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the ANSI string, or NULL if the // input is NULL. // // The returned string is created using the ANSI codepage (CP_ACP) to // match the behaviour of the ANSI versions of Win32 calls and the // C runtime. static const char* Utf16ToAnsi(LPCWSTR utf16_str); #endif // Compares two C strings. Returns true iff they have the same content. // // Unlike strcmp(), this function can handle NULL argument(s). A // NULL C string is considered different to any non-NULL C string, // including the empty string. static bool CStringEquals(const char* lhs, const char* rhs); // Converts a wide C string to a String using the UTF-8 encoding. // NULL will be converted to "(null)". If an error occurred during // the conversion, "(failed to convert from wide string)" is // returned. static std::string ShowWideCString(const wchar_t* wide_c_str); // Compares two wide C strings. Returns true iff they have the same // content. // // Unlike wcscmp(), this function can handle NULL argument(s). A // NULL C string is considered different to any non-NULL C string, // including the empty string. static bool WideCStringEquals(const wchar_t* lhs, const wchar_t* rhs); // Compares two C strings, ignoring case. Returns true iff they // have the same content. // // Unlike strcasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL C string, // including the empty string. static bool CaseInsensitiveCStringEquals(const char* lhs, const char* rhs); // Compares two wide C strings, ignoring case. Returns true iff they // have the same content. // // Unlike wcscasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL wide C string, // including the empty string. // NB: The implementations on different platforms slightly differ. // On windows, this method uses _wcsicmp which compares according to LC_CTYPE // environment variable. 
On GNU platform this method uses wcscasecmp // which compares according to LC_CTYPE category of the current locale. // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the // current locale. static bool CaseInsensitiveWideCStringEquals(const wchar_t* lhs, const wchar_t* rhs); // Returns true iff the given string ends with the given suffix, ignoring // case. Any string is considered to end with an empty suffix. static bool EndsWithCaseInsensitive( const std::string& str, const std::string& suffix); // Formats an int value as "%02d". static std::string FormatIntWidth2(int value); // "%02d" for width == 2 // Formats an int value as "%X". static std::string FormatHexInt(int value); // Formats a byte as "%02X". static std::string FormatByte(unsigned char value); private: String(); // Not meant to be instantiated. }; // class String // Gets the content of the stringstream's buffer as an std::string. Each '\0' // character in the buffer is replaced with "\\0". GTEST_API_ std::string StringStreamToString(::std::stringstream* stream); } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_STRING_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-tuple.h000066400000000000000000000670771346026536000253270ustar00rootroot00000000000000// This file was GENERATED by command: // pump.py gtest-tuple.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2009 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Implements a subset of TR1 tuple needed by Google Test and Google Mock. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #include // For ::std::pair. // The compiler used in Symbian has a bug that prevents us from declaring the // tuple template as a friend (it complains that tuple is redefined). This // hack bypasses the bug by declaring the members that should otherwise be // private as public. // Sun Studio versions < 12 also have the above bug. 
#if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590) # define GTEST_DECLARE_TUPLE_AS_FRIEND_ public: #else # define GTEST_DECLARE_TUPLE_AS_FRIEND_ \ template friend class tuple; \ private: #endif // GTEST_n_TUPLE_(T) is the type of an n-tuple. #define GTEST_0_TUPLE_(T) tuple<> #define GTEST_1_TUPLE_(T) tuple #define GTEST_2_TUPLE_(T) tuple #define GTEST_3_TUPLE_(T) tuple #define GTEST_4_TUPLE_(T) tuple #define GTEST_5_TUPLE_(T) tuple #define GTEST_6_TUPLE_(T) tuple #define GTEST_7_TUPLE_(T) tuple #define GTEST_8_TUPLE_(T) tuple #define GTEST_9_TUPLE_(T) tuple #define GTEST_10_TUPLE_(T) tuple // GTEST_n_TYPENAMES_(T) declares a list of n typenames. #define GTEST_0_TYPENAMES_(T) #define GTEST_1_TYPENAMES_(T) typename T##0 #define GTEST_2_TYPENAMES_(T) typename T##0, typename T##1 #define GTEST_3_TYPENAMES_(T) typename T##0, typename T##1, typename T##2 #define GTEST_4_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3 #define GTEST_5_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4 #define GTEST_6_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5 #define GTEST_7_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6 #define GTEST_8_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, typename T##7 #define GTEST_9_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, \ typename T##7, typename T##8 #define GTEST_10_TYPENAMES_(T) typename T##0, typename T##1, typename T##2, \ typename T##3, typename T##4, typename T##5, typename T##6, \ typename T##7, typename T##8, typename T##9 // In theory, defining stuff in the ::std namespace is undefined // behavior. We can do this as we are playing the role of a standard // library vendor. namespace std { namespace tr1 { template class tuple; // Anything in namespace gtest_internal is Google Test's INTERNAL // IMPLEMENTATION DETAIL and MUST NOT BE USED DIRECTLY in user code. namespace gtest_internal { // ByRef::type is T if T is a reference; otherwise it's const T&. template struct ByRef { typedef const T& type; }; // NOLINT template struct ByRef { typedef T& type; }; // NOLINT // A handy wrapper for ByRef. #define GTEST_BY_REF_(T) typename ::std::tr1::gtest_internal::ByRef::type // AddRef::type is T if T is a reference; otherwise it's T&. This // is the same as tr1::add_reference::type. template struct AddRef { typedef T& type; }; // NOLINT template struct AddRef { typedef T& type; }; // NOLINT // A handy wrapper for AddRef. #define GTEST_ADD_REF_(T) typename ::std::tr1::gtest_internal::AddRef::type // A helper for implementing get(). template class Get; // A helper for implementing tuple_element. kIndexValid is true // iff k < the number of fields in tuple type T. 
template struct TupleElement; template struct TupleElement { typedef T0 type; }; template struct TupleElement { typedef T1 type; }; template struct TupleElement { typedef T2 type; }; template struct TupleElement { typedef T3 type; }; template struct TupleElement { typedef T4 type; }; template struct TupleElement { typedef T5 type; }; template struct TupleElement { typedef T6 type; }; template struct TupleElement { typedef T7 type; }; template struct TupleElement { typedef T8 type; }; template struct TupleElement { typedef T9 type; }; } // namespace gtest_internal template <> class tuple<> { public: tuple() {} tuple(const tuple& /* t */) {} tuple& operator=(const tuple& /* t */) { return *this; } }; template class GTEST_1_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_() {} explicit tuple(GTEST_BY_REF_(T0) f0) : f0_(f0) {} tuple(const tuple& t) : f0_(t.f0_) {} template tuple(const GTEST_1_TUPLE_(U)& t) : f0_(t.f0_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_1_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_1_TUPLE_(U)& t) { f0_ = t.f0_; return *this; } T0 f0_; }; template class GTEST_2_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1) : f0_(f0), f1_(f1) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_) {} template tuple(const GTEST_2_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_) {} template tuple(const ::std::pair& p) : f0_(p.first), f1_(p.second) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_2_TUPLE_(U)& t) { return CopyFrom(t); } template tuple& operator=(const ::std::pair& p) { f0_ = p.first; f1_ = p.second; return *this; } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_2_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; return *this; } T0 f0_; T1 f1_; }; template class GTEST_3_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2) : f0_(f0), f1_(f1), f2_(f2) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {} template tuple(const GTEST_3_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_3_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_3_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; return *this; } T0 f0_; T1 f1_; T2 f2_; }; template class GTEST_4_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3) : f0_(f0), f1_(f1), f2_(f2), f3_(f3) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_) {} template tuple(const GTEST_4_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_4_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_4_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; }; template class GTEST_5_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_() 
{} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_) {} template tuple(const GTEST_5_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_5_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_5_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; }; template class GTEST_6_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_) {} template tuple(const GTEST_6_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_6_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_6_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; }; template class GTEST_7_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {} template tuple(const GTEST_7_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_7_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_7_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; }; template class GTEST_8_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {} template tuple(const GTEST_8_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_8_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const 
GTEST_8_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; }; template class GTEST_9_TUPLE_(T) { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7, GTEST_BY_REF_(T8) f8) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7), f8_(f8) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {} template tuple(const GTEST_9_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_9_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_9_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; f8_ = t.f8_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; T8 f8_; }; template class tuple { public: template friend class gtest_internal::Get; tuple() : f0_(), f1_(), f2_(), f3_(), f4_(), f5_(), f6_(), f7_(), f8_(), f9_() {} explicit tuple(GTEST_BY_REF_(T0) f0, GTEST_BY_REF_(T1) f1, GTEST_BY_REF_(T2) f2, GTEST_BY_REF_(T3) f3, GTEST_BY_REF_(T4) f4, GTEST_BY_REF_(T5) f5, GTEST_BY_REF_(T6) f6, GTEST_BY_REF_(T7) f7, GTEST_BY_REF_(T8) f8, GTEST_BY_REF_(T9) f9) : f0_(f0), f1_(f1), f2_(f2), f3_(f3), f4_(f4), f5_(f5), f6_(f6), f7_(f7), f8_(f8), f9_(f9) {} tuple(const tuple& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_), f9_(t.f9_) {} template tuple(const GTEST_10_TUPLE_(U)& t) : f0_(t.f0_), f1_(t.f1_), f2_(t.f2_), f3_(t.f3_), f4_(t.f4_), f5_(t.f5_), f6_(t.f6_), f7_(t.f7_), f8_(t.f8_), f9_(t.f9_) {} tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_10_TUPLE_(U)& t) { return CopyFrom(t); } GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_10_TUPLE_(U)& t) { f0_ = t.f0_; f1_ = t.f1_; f2_ = t.f2_; f3_ = t.f3_; f4_ = t.f4_; f5_ = t.f5_; f6_ = t.f6_; f7_ = t.f7_; f8_ = t.f8_; f9_ = t.f9_; return *this; } T0 f0_; T1 f1_; T2 f2_; T3 f3_; T4 f4_; T5 f5_; T6 f6_; T7 f7_; T8 f8_; T9 f9_; }; // 6.1.3.2 Tuple creation functions. // Known limitations: we don't support passing an // std::tr1::reference_wrapper to make_tuple(). And we don't // implement tie(). 
inline tuple<> make_tuple() { return tuple<>(); } template inline GTEST_1_TUPLE_(T) make_tuple(const T0& f0) { return GTEST_1_TUPLE_(T)(f0); } template inline GTEST_2_TUPLE_(T) make_tuple(const T0& f0, const T1& f1) { return GTEST_2_TUPLE_(T)(f0, f1); } template inline GTEST_3_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2) { return GTEST_3_TUPLE_(T)(f0, f1, f2); } template inline GTEST_4_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3) { return GTEST_4_TUPLE_(T)(f0, f1, f2, f3); } template inline GTEST_5_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4) { return GTEST_5_TUPLE_(T)(f0, f1, f2, f3, f4); } template inline GTEST_6_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5) { return GTEST_6_TUPLE_(T)(f0, f1, f2, f3, f4, f5); } template inline GTEST_7_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6) { return GTEST_7_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6); } template inline GTEST_8_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7) { return GTEST_8_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7); } template inline GTEST_9_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7, const T8& f8) { return GTEST_9_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8); } template inline GTEST_10_TUPLE_(T) make_tuple(const T0& f0, const T1& f1, const T2& f2, const T3& f3, const T4& f4, const T5& f5, const T6& f6, const T7& f7, const T8& f8, const T9& f9) { return GTEST_10_TUPLE_(T)(f0, f1, f2, f3, f4, f5, f6, f7, f8, f9); } // 6.1.3.3 Tuple helper classes. template struct tuple_size; template struct tuple_size { static const int value = 0; }; template struct tuple_size { static const int value = 1; }; template struct tuple_size { static const int value = 2; }; template struct tuple_size { static const int value = 3; }; template struct tuple_size { static const int value = 4; }; template struct tuple_size { static const int value = 5; }; template struct tuple_size { static const int value = 6; }; template struct tuple_size { static const int value = 7; }; template struct tuple_size { static const int value = 8; }; template struct tuple_size { static const int value = 9; }; template struct tuple_size { static const int value = 10; }; template struct tuple_element { typedef typename gtest_internal::TupleElement< k < (tuple_size::value), k, Tuple>::type type; }; #define GTEST_TUPLE_ELEMENT_(k, Tuple) typename tuple_element::type // 6.1.3.4 Element access. 
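// Illustrative usage sketch for this tuple subset (not from the upstream
// gtest sources; TupleSketch is a hypothetical function and the block is
// disabled).  make_tuple() and tuple_size are defined above; the get<k>()
// accessors named by the heading above are defined immediately below.
#if 0
inline void TupleSketch() {
  tuple<int, const char*, double> t = make_tuple(42, "answer", 1.5);
  const int n = tuple_size<tuple<int, const char*, double> >::value;  // 3
  const char* name = get<1>(t);  // "answer"; get<>() is declared just below
  (void)n; (void)name;
}
#endif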
namespace gtest_internal { template <> class Get<0> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple)) Field(Tuple& t) { return t.f0_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(0, Tuple)) ConstField(const Tuple& t) { return t.f0_; } }; template <> class Get<1> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple)) Field(Tuple& t) { return t.f1_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(1, Tuple)) ConstField(const Tuple& t) { return t.f1_; } }; template <> class Get<2> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple)) Field(Tuple& t) { return t.f2_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(2, Tuple)) ConstField(const Tuple& t) { return t.f2_; } }; template <> class Get<3> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple)) Field(Tuple& t) { return t.f3_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(3, Tuple)) ConstField(const Tuple& t) { return t.f3_; } }; template <> class Get<4> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple)) Field(Tuple& t) { return t.f4_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(4, Tuple)) ConstField(const Tuple& t) { return t.f4_; } }; template <> class Get<5> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple)) Field(Tuple& t) { return t.f5_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(5, Tuple)) ConstField(const Tuple& t) { return t.f5_; } }; template <> class Get<6> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple)) Field(Tuple& t) { return t.f6_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(6, Tuple)) ConstField(const Tuple& t) { return t.f6_; } }; template <> class Get<7> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple)) Field(Tuple& t) { return t.f7_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(7, Tuple)) ConstField(const Tuple& t) { return t.f7_; } }; template <> class Get<8> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple)) Field(Tuple& t) { return t.f8_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(8, Tuple)) ConstField(const Tuple& t) { return t.f8_; } }; template <> class Get<9> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple)) Field(Tuple& t) { return t.f9_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(9, Tuple)) ConstField(const Tuple& t) { return t.f9_; } }; } // namespace gtest_internal template GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_10_TUPLE_(T))) get(GTEST_10_TUPLE_(T)& t) { return gtest_internal::Get::Field(t); } template GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_10_TUPLE_(T))) get(const GTEST_10_TUPLE_(T)& t) { return gtest_internal::Get::ConstField(t); } // 6.1.3.5 Relational operators // We only implement == and !=, as we don't have a need for the rest yet. namespace gtest_internal { // SameSizeTuplePrefixComparator::Eq(t1, t2) returns true if the // first k fields of t1 equals the first k fields of t2. // SameSizeTuplePrefixComparator(k1, k2) would be a compiler error if // k1 != k2. 
template struct SameSizeTuplePrefixComparator; template <> struct SameSizeTuplePrefixComparator<0, 0> { template static bool Eq(const Tuple1& /* t1 */, const Tuple2& /* t2 */) { return true; } }; template struct SameSizeTuplePrefixComparator { template static bool Eq(const Tuple1& t1, const Tuple2& t2) { return SameSizeTuplePrefixComparator::Eq(t1, t2) && ::std::tr1::get(t1) == ::std::tr1::get(t2); } }; } // namespace gtest_internal template inline bool operator==(const GTEST_10_TUPLE_(T)& t, const GTEST_10_TUPLE_(U)& u) { return gtest_internal::SameSizeTuplePrefixComparator< tuple_size::value, tuple_size::value>::Eq(t, u); } template inline bool operator!=(const GTEST_10_TUPLE_(T)& t, const GTEST_10_TUPLE_(U)& u) { return !(t == u); } // 6.1.4 Pairs. // Unimplemented. } // namespace tr1 } // namespace std #undef GTEST_0_TUPLE_ #undef GTEST_1_TUPLE_ #undef GTEST_2_TUPLE_ #undef GTEST_3_TUPLE_ #undef GTEST_4_TUPLE_ #undef GTEST_5_TUPLE_ #undef GTEST_6_TUPLE_ #undef GTEST_7_TUPLE_ #undef GTEST_8_TUPLE_ #undef GTEST_9_TUPLE_ #undef GTEST_10_TUPLE_ #undef GTEST_0_TYPENAMES_ #undef GTEST_1_TYPENAMES_ #undef GTEST_2_TYPENAMES_ #undef GTEST_3_TYPENAMES_ #undef GTEST_4_TYPENAMES_ #undef GTEST_5_TYPENAMES_ #undef GTEST_6_TYPENAMES_ #undef GTEST_7_TYPENAMES_ #undef GTEST_8_TYPENAMES_ #undef GTEST_9_TYPENAMES_ #undef GTEST_10_TYPENAMES_ #undef GTEST_DECLARE_TUPLE_AS_FRIEND_ #undef GTEST_BY_REF_ #undef GTEST_ADD_REF_ #undef GTEST_TUPLE_ELEMENT_ #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-tuple.h.pump000066400000000000000000000220121346026536000262630ustar00rootroot00000000000000$$ -*- mode: c++; -*- $var n = 10 $$ Maximum number of tuple fields we want to support. $$ This meta comment fixes auto-indentation in Emacs. }} // Copyright 2009 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Implements a subset of TR1 tuple needed by Google Test and Google Mock. 
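Taken together, the header above provides tuple construction via make_tuple(), element access via get<k>(), and the == / != comparisons, for tuples of up to 10 fields. A short usage sketch (illustrative only; the function and variable names below are not from the original sources):

#include <string>
#include "gtest/gtest.h"  // assumed to bring the std::tr1 tuple subset into scope

void TupleUsageSketch() {
  std::tr1::tuple<int, std::string> t =
      std::tr1::make_tuple(42, std::string("answer"));

  int first = std::tr1::get<0>(t);           // 42
  std::string second = std::tr1::get<1>(t);  // "answer"

  // Only == and != are implemented; tie() and reference_wrapper support
  // are explicitly listed above as known limitations.
  bool equal = (t == std::tr1::make_tuple(42, std::string("answer")));

  (void)first; (void)second; (void)equal;    // silence unused-variable warnings
}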
#ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ #include // For ::std::pair. // The compiler used in Symbian has a bug that prevents us from declaring the // tuple template as a friend (it complains that tuple is redefined). This // hack bypasses the bug by declaring the members that should otherwise be // private as public. // Sun Studio versions < 12 also have the above bug. #if defined(__SYMBIAN32__) || (defined(__SUNPRO_CC) && __SUNPRO_CC < 0x590) # define GTEST_DECLARE_TUPLE_AS_FRIEND_ public: #else # define GTEST_DECLARE_TUPLE_AS_FRIEND_ \ template friend class tuple; \ private: #endif $range i 0..n-1 $range j 0..n $range k 1..n // GTEST_n_TUPLE_(T) is the type of an n-tuple. #define GTEST_0_TUPLE_(T) tuple<> $for k [[ $range m 0..k-1 $range m2 k..n-1 #define GTEST_$(k)_TUPLE_(T) tuple<$for m, [[T##$m]]$for m2 [[, void]]> ]] // GTEST_n_TYPENAMES_(T) declares a list of n typenames. $for j [[ $range m 0..j-1 #define GTEST_$(j)_TYPENAMES_(T) $for m, [[typename T##$m]] ]] // In theory, defining stuff in the ::std namespace is undefined // behavior. We can do this as we are playing the role of a standard // library vendor. namespace std { namespace tr1 { template <$for i, [[typename T$i = void]]> class tuple; // Anything in namespace gtest_internal is Google Test's INTERNAL // IMPLEMENTATION DETAIL and MUST NOT BE USED DIRECTLY in user code. namespace gtest_internal { // ByRef::type is T if T is a reference; otherwise it's const T&. template struct ByRef { typedef const T& type; }; // NOLINT template struct ByRef { typedef T& type; }; // NOLINT // A handy wrapper for ByRef. #define GTEST_BY_REF_(T) typename ::std::tr1::gtest_internal::ByRef::type // AddRef::type is T if T is a reference; otherwise it's T&. This // is the same as tr1::add_reference::type. template struct AddRef { typedef T& type; }; // NOLINT template struct AddRef { typedef T& type; }; // NOLINT // A handy wrapper for AddRef. #define GTEST_ADD_REF_(T) typename ::std::tr1::gtest_internal::AddRef::type // A helper for implementing get(). template class Get; // A helper for implementing tuple_element. kIndexValid is true // iff k < the number of fields in tuple type T. template struct TupleElement; $for i [[ template struct TupleElement { typedef T$i type; }; ]] } // namespace gtest_internal template <> class tuple<> { public: tuple() {} tuple(const tuple& /* t */) {} tuple& operator=(const tuple& /* t */) { return *this; } }; $for k [[ $range m 0..k-1 template class $if k < n [[GTEST_$(k)_TUPLE_(T)]] $else [[tuple]] { public: template friend class gtest_internal::Get; tuple() : $for m, [[f$(m)_()]] {} explicit tuple($for m, [[GTEST_BY_REF_(T$m) f$m]]) : [[]] $for m, [[f$(m)_(f$m)]] {} tuple(const tuple& t) : $for m, [[f$(m)_(t.f$(m)_)]] {} template tuple(const GTEST_$(k)_TUPLE_(U)& t) : $for m, [[f$(m)_(t.f$(m)_)]] {} $if k == 2 [[ template tuple(const ::std::pair& p) : f0_(p.first), f1_(p.second) {} ]] tuple& operator=(const tuple& t) { return CopyFrom(t); } template tuple& operator=(const GTEST_$(k)_TUPLE_(U)& t) { return CopyFrom(t); } $if k == 2 [[ template tuple& operator=(const ::std::pair& p) { f0_ = p.first; f1_ = p.second; return *this; } ]] GTEST_DECLARE_TUPLE_AS_FRIEND_ template tuple& CopyFrom(const GTEST_$(k)_TUPLE_(U)& t) { $for m [[ f$(m)_ = t.f$(m)_; ]] return *this; } $for m [[ T$m f$(m)_; ]] }; ]] // 6.1.3.2 Tuple creation functions. // Known limitations: we don't support passing an // std::tr1::reference_wrapper to make_tuple(). 
And we don't // implement tie(). inline tuple<> make_tuple() { return tuple<>(); } $for k [[ $range m 0..k-1 template inline GTEST_$(k)_TUPLE_(T) make_tuple($for m, [[const T$m& f$m]]) { return GTEST_$(k)_TUPLE_(T)($for m, [[f$m]]); } ]] // 6.1.3.3 Tuple helper classes. template struct tuple_size; $for j [[ template struct tuple_size { static const int value = $j; }; ]] template struct tuple_element { typedef typename gtest_internal::TupleElement< k < (tuple_size::value), k, Tuple>::type type; }; #define GTEST_TUPLE_ELEMENT_(k, Tuple) typename tuple_element::type // 6.1.3.4 Element access. namespace gtest_internal { $for i [[ template <> class Get<$i> { public: template static GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_($i, Tuple)) Field(Tuple& t) { return t.f$(i)_; } // NOLINT template static GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_($i, Tuple)) ConstField(const Tuple& t) { return t.f$(i)_; } }; ]] } // namespace gtest_internal template GTEST_ADD_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_$(n)_TUPLE_(T))) get(GTEST_$(n)_TUPLE_(T)& t) { return gtest_internal::Get::Field(t); } template GTEST_BY_REF_(GTEST_TUPLE_ELEMENT_(k, GTEST_$(n)_TUPLE_(T))) get(const GTEST_$(n)_TUPLE_(T)& t) { return gtest_internal::Get::ConstField(t); } // 6.1.3.5 Relational operators // We only implement == and !=, as we don't have a need for the rest yet. namespace gtest_internal { // SameSizeTuplePrefixComparator::Eq(t1, t2) returns true if the // first k fields of t1 equals the first k fields of t2. // SameSizeTuplePrefixComparator(k1, k2) would be a compiler error if // k1 != k2. template struct SameSizeTuplePrefixComparator; template <> struct SameSizeTuplePrefixComparator<0, 0> { template static bool Eq(const Tuple1& /* t1 */, const Tuple2& /* t2 */) { return true; } }; template struct SameSizeTuplePrefixComparator { template static bool Eq(const Tuple1& t1, const Tuple2& t2) { return SameSizeTuplePrefixComparator::Eq(t1, t2) && ::std::tr1::get(t1) == ::std::tr1::get(t2); } }; } // namespace gtest_internal template inline bool operator==(const GTEST_$(n)_TUPLE_(T)& t, const GTEST_$(n)_TUPLE_(U)& u) { return gtest_internal::SameSizeTuplePrefixComparator< tuple_size::value, tuple_size::value>::Eq(t, u); } template inline bool operator!=(const GTEST_$(n)_TUPLE_(T)& t, const GTEST_$(n)_TUPLE_(U)& u) { return !(t == u); } // 6.1.4 Pairs. // Unimplemented. } // namespace tr1 } // namespace std $for j [[ #undef GTEST_$(j)_TUPLE_ ]] $for j [[ #undef GTEST_$(j)_TYPENAMES_ ]] #undef GTEST_DECLARE_TUPLE_AS_FRIEND_ #undef GTEST_BY_REF_ #undef GTEST_ADD_REF_ #undef GTEST_TUPLE_ELEMENT_ #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TUPLE_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-type-util.h000066400000000000000000005525021346026536000261220ustar00rootroot00000000000000// This file was GENERATED by command: // pump.py gtest-type-util.h.pump // DO NOT EDIT BY HAND!!! // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Type utilities needed for implementing typed and type-parameterized // tests. This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently we support at most 50 types in a list, and at most 50 // type-parameterized tests in one type-parameterized test case. // Please contact googletestframework@googlegroups.com if you need // more. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ #include "gtest/internal/gtest-port.h" // #ifdef __GNUC__ is too general here. It is possible to use gcc without using // libstdc++ (which is where cxxabi.h comes from). # if GTEST_HAS_CXXABI_H_ # include # elif defined(__HP_aCC) # include # endif // GTEST_HASH_CXXABI_H_ namespace testing { namespace internal { // GetTypeName() returns a human-readable name of type T. // NB: This function is also used in Google Mock, so don't move it inside of // the typed-test-only section below. template std::string GetTypeName() { # if GTEST_HAS_RTTI const char* const name = typeid(T).name(); # if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC) int status = 0; // gcc's implementation of typeid(T).name() mangles the type name, // so we have to demangle it. # if GTEST_HAS_CXXABI_H_ using abi::__cxa_demangle; # endif // GTEST_HAS_CXXABI_H_ char* const readable_name = __cxa_demangle(name, 0, 0, &status); const std::string name_str(status == 0 ? readable_name : name); free(readable_name); return name_str; # else return name; # endif // GTEST_HAS_CXXABI_H_ || __HP_aCC # else return ""; # endif // GTEST_HAS_RTTI } #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // AssertyTypeEq::type is defined iff T1 and T2 are the same // type. This can be used as a compile-time assertion to ensure that // two types are equal. template struct AssertTypeEq; template struct AssertTypeEq { typedef bool type; }; // A unique type used as the default value for the arguments of class // template Types. This allows us to simulate variadic templates // (e.g. Types, Type, and etc), which C++ doesn't // support directly. struct None {}; // The following family of struct and struct templates are used to // represent type lists. In particular, TypesN // represents a type list with N types (T1, T2, ..., and TN) in it. // Except for Types0, every struct in the family has two member types: // Head for the first type in the list, and Tail for the rest of the // list. // The empty type list. struct Types0 {}; // Type lists of length 1, 2, 3, and so on. 
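The Head/Tail members described above are what let Google Test peel a type list apart one element at a time. A compile-time sketch of how a three-element list decomposes, using only the TypesN structs that follow and the AssertTypeEq check defined earlier in this file (the typedef names are invented for illustration; this pokes at internal machinery purely to explain the layout, not as something user code should do):

#include "gtest/gtest.h"

// Types3<int, char, double> decomposes as:
//   Head             -> int
//   Tail::Head       -> char
//   Tail::Tail::Head -> double
//   Tail::Tail::Tail -> Types0 (the empty list)
typedef ::testing::internal::Types3<int, char, double> List3;

// Each typedef below compiles only if the two types really are the same.
typedef ::testing::internal::AssertTypeEq<List3::Head, int>::type HeadOk;
typedef ::testing::internal::AssertTypeEq<List3::Tail::Head, char>::type SecondOk;
typedef ::testing::internal::AssertTypeEq<
    List3::Tail::Tail::Tail, ::testing::internal::Types0>::type TailEndOk;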
template struct Types1 { typedef T1 Head; typedef Types0 Tail; }; template struct Types2 { typedef T1 Head; typedef Types1 Tail; }; template struct Types3 { typedef T1 Head; typedef Types2 Tail; }; template struct Types4 { typedef T1 Head; typedef Types3 Tail; }; template struct Types5 { typedef T1 Head; typedef Types4 Tail; }; template struct Types6 { typedef T1 Head; typedef Types5 Tail; }; template struct Types7 { typedef T1 Head; typedef Types6 Tail; }; template struct Types8 { typedef T1 Head; typedef Types7 Tail; }; template struct Types9 { typedef T1 Head; typedef Types8 Tail; }; template struct Types10 { typedef T1 Head; typedef Types9 Tail; }; template struct Types11 { typedef T1 Head; typedef Types10 Tail; }; template struct Types12 { typedef T1 Head; typedef Types11 Tail; }; template struct Types13 { typedef T1 Head; typedef Types12 Tail; }; template struct Types14 { typedef T1 Head; typedef Types13 Tail; }; template struct Types15 { typedef T1 Head; typedef Types14 Tail; }; template struct Types16 { typedef T1 Head; typedef Types15 Tail; }; template struct Types17 { typedef T1 Head; typedef Types16 Tail; }; template struct Types18 { typedef T1 Head; typedef Types17 Tail; }; template struct Types19 { typedef T1 Head; typedef Types18 Tail; }; template struct Types20 { typedef T1 Head; typedef Types19 Tail; }; template struct Types21 { typedef T1 Head; typedef Types20 Tail; }; template struct Types22 { typedef T1 Head; typedef Types21 Tail; }; template struct Types23 { typedef T1 Head; typedef Types22 Tail; }; template struct Types24 { typedef T1 Head; typedef Types23 Tail; }; template struct Types25 { typedef T1 Head; typedef Types24 Tail; }; template struct Types26 { typedef T1 Head; typedef Types25 Tail; }; template struct Types27 { typedef T1 Head; typedef Types26 Tail; }; template struct Types28 { typedef T1 Head; typedef Types27 Tail; }; template struct Types29 { typedef T1 Head; typedef Types28 Tail; }; template struct Types30 { typedef T1 Head; typedef Types29 Tail; }; template struct Types31 { typedef T1 Head; typedef Types30 Tail; }; template struct Types32 { typedef T1 Head; typedef Types31 Tail; }; template struct Types33 { typedef T1 Head; typedef Types32 Tail; }; template struct Types34 { typedef T1 Head; typedef Types33 Tail; }; template struct Types35 { typedef T1 Head; typedef Types34 Tail; }; template struct Types36 { typedef T1 Head; typedef Types35 Tail; }; template struct Types37 { typedef T1 Head; typedef Types36 Tail; }; template struct Types38 { typedef T1 Head; typedef Types37 Tail; }; template struct Types39 { typedef T1 Head; typedef Types38 Tail; }; template struct Types40 { typedef T1 Head; typedef Types39 Tail; }; template struct Types41 { typedef T1 Head; typedef Types40 Tail; }; template struct Types42 { typedef T1 Head; typedef Types41 Tail; }; template struct Types43 { typedef T1 Head; typedef Types42 Tail; }; template struct Types44 { typedef T1 Head; typedef Types43 Tail; }; template struct Types45 { typedef T1 Head; typedef Types44 Tail; }; template struct Types46 { typedef T1 Head; typedef Types45 Tail; }; template struct Types47 { typedef T1 Head; typedef Types46 Tail; }; template struct Types48 { typedef T1 Head; typedef Types47 Tail; }; template struct Types49 { typedef T1 Head; typedef Types48 Tail; }; template struct Types50 { typedef T1 Head; typedef Types49 Tail; }; } // namespace internal // We don't want to require the users to write TypesN<...> directly, // as that would require them to count the length. 
Types<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Types // will appear as Types in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Types, and Google Test will translate // that to TypesN internally to make error messages // readable. The translation is done by the 'type' member of the // Types template. template struct Types { typedef internal::Types50 type; }; template <> struct Types { typedef internal::Types0 type; }; template struct Types { typedef internal::Types1 type; }; template struct Types { typedef internal::Types2 type; }; template struct Types { typedef internal::Types3 type; }; template struct Types { typedef internal::Types4 type; }; template struct Types { typedef internal::Types5 type; }; template struct Types { typedef internal::Types6 type; }; template struct Types { typedef internal::Types7 type; }; template struct Types { typedef internal::Types8 type; }; template struct Types { typedef internal::Types9 type; }; template struct Types { typedef internal::Types10 type; }; template struct Types { typedef internal::Types11 type; }; template struct Types { typedef internal::Types12 type; }; template struct Types { typedef internal::Types13 type; }; template struct Types { typedef internal::Types14 type; }; template struct Types { typedef internal::Types15 type; }; template struct Types { typedef internal::Types16 type; }; template struct Types { typedef internal::Types17 type; }; template struct Types { typedef internal::Types18 type; }; template struct Types { typedef internal::Types19 type; }; template struct Types { typedef internal::Types20 type; }; template struct Types { typedef internal::Types21 type; }; template struct Types { typedef internal::Types22 type; }; template struct Types { typedef internal::Types23 type; }; template struct Types { typedef internal::Types24 type; }; template struct Types { typedef internal::Types25 type; }; template struct Types { typedef internal::Types26 type; }; template struct Types { typedef internal::Types27 type; }; template struct Types { typedef internal::Types28 type; }; template struct Types { typedef internal::Types29 type; }; template struct Types { typedef internal::Types30 type; }; template struct Types { typedef internal::Types31 type; }; template struct Types { typedef internal::Types32 type; }; template struct Types { typedef internal::Types33 type; }; template struct Types { typedef internal::Types34 type; }; template struct Types { typedef internal::Types35 type; }; template struct Types { typedef internal::Types36 type; }; template struct Types { typedef internal::Types37 type; }; template struct Types { typedef internal::Types38 type; }; template struct Types { typedef internal::Types39 type; }; template struct Types { typedef internal::Types40 type; }; template struct Types { typedef internal::Types41 type; }; template struct Types { typedef internal::Types42 type; }; template struct Types { typedef internal::Types43 type; }; template struct Types { typedef internal::Types44 type; }; template struct Types { typedef internal::Types45 type; }; template struct Types { typedef internal::Types46 type; }; template struct Types { typedef internal::Types47 type; }; template struct Types { typedef internal::Types48 type; }; template struct Types { typedef internal::Types49 type; }; namespace internal 
{ # define GTEST_TEMPLATE_ template class // The template "selector" struct TemplateSel is used to // represent Tmpl, which must be a class template with one type // parameter, as a type. TemplateSel::Bind::type is defined // as the type Tmpl. This allows us to actually instantiate the // template "selected" by TemplateSel. // // This trick is necessary for simulating typedef for class templates, // which C++ doesn't support directly. template struct TemplateSel { template struct Bind { typedef Tmpl type; }; }; # define GTEST_BIND_(TmplSel, T) \ TmplSel::template Bind::type // A unique struct template used as the default value for the // arguments of class template Templates. This allows us to simulate // variadic templates (e.g. Templates, Templates, // and etc), which C++ doesn't support directly. template struct NoneT {}; // The following family of struct and struct templates are used to // represent template lists. In particular, TemplatesN represents a list of N templates (T1, T2, ..., and TN). Except // for Templates0, every struct in the family has two member types: // Head for the selector of the first template in the list, and Tail // for the rest of the list. // The empty template list. struct Templates0 {}; // Template lists of length 1, 2, 3, and so on. template struct Templates1 { typedef TemplateSel Head; typedef Templates0 Tail; }; template struct Templates2 { typedef TemplateSel Head; typedef Templates1 Tail; }; template struct Templates3 { typedef TemplateSel Head; typedef Templates2 Tail; }; template struct Templates4 { typedef TemplateSel Head; typedef Templates3 Tail; }; template struct Templates5 { typedef TemplateSel Head; typedef Templates4 Tail; }; template struct Templates6 { typedef TemplateSel Head; typedef Templates5 Tail; }; template struct Templates7 { typedef TemplateSel Head; typedef Templates6 Tail; }; template struct Templates8 { typedef TemplateSel Head; typedef Templates7 Tail; }; template struct Templates9 { typedef TemplateSel Head; typedef Templates8 Tail; }; template struct Templates10 { typedef TemplateSel Head; typedef Templates9 Tail; }; template struct Templates11 { typedef TemplateSel Head; typedef Templates10 Tail; }; template struct Templates12 { typedef TemplateSel Head; typedef Templates11 Tail; }; template struct Templates13 { typedef TemplateSel Head; typedef Templates12 Tail; }; template struct Templates14 { typedef TemplateSel Head; typedef Templates13 Tail; }; template struct Templates15 { typedef TemplateSel Head; typedef Templates14 Tail; }; template struct Templates16 { typedef TemplateSel Head; typedef Templates15 Tail; }; template struct Templates17 { typedef TemplateSel Head; typedef Templates16 Tail; }; template struct Templates18 { typedef TemplateSel Head; typedef Templates17 Tail; }; template struct Templates19 { typedef TemplateSel Head; typedef Templates18 Tail; }; template struct Templates20 { typedef TemplateSel Head; typedef Templates19 Tail; }; template struct Templates21 { typedef TemplateSel Head; typedef Templates20 Tail; }; template struct Templates22 { typedef TemplateSel Head; typedef Templates21 Tail; }; template struct Templates23 { typedef TemplateSel Head; typedef Templates22 Tail; }; template struct Templates24 { typedef TemplateSel Head; typedef Templates23 Tail; }; template struct Templates25 { typedef TemplateSel Head; typedef Templates24 Tail; }; template struct Templates26 { typedef TemplateSel Head; typedef Templates25 Tail; }; template struct Templates27 { typedef TemplateSel Head; typedef Templates26 
Tail; }; template struct Templates28 { typedef TemplateSel Head; typedef Templates27 Tail; }; template struct Templates29 { typedef TemplateSel Head; typedef Templates28 Tail; }; template struct Templates30 { typedef TemplateSel Head; typedef Templates29 Tail; }; template struct Templates31 { typedef TemplateSel Head; typedef Templates30 Tail; }; template struct Templates32 { typedef TemplateSel Head; typedef Templates31 Tail; }; template struct Templates33 { typedef TemplateSel Head; typedef Templates32 Tail; }; template struct Templates34 { typedef TemplateSel Head; typedef Templates33 Tail; }; template struct Templates35 { typedef TemplateSel Head; typedef Templates34 Tail; }; template struct Templates36 { typedef TemplateSel Head; typedef Templates35 Tail; }; template struct Templates37 { typedef TemplateSel Head; typedef Templates36 Tail; }; template struct Templates38 { typedef TemplateSel Head; typedef Templates37 Tail; }; template struct Templates39 { typedef TemplateSel Head; typedef Templates38 Tail; }; template struct Templates40 { typedef TemplateSel Head; typedef Templates39 Tail; }; template struct Templates41 { typedef TemplateSel Head; typedef Templates40 Tail; }; template struct Templates42 { typedef TemplateSel Head; typedef Templates41 Tail; }; template struct Templates43 { typedef TemplateSel Head; typedef Templates42 Tail; }; template struct Templates44 { typedef TemplateSel Head; typedef Templates43 Tail; }; template struct Templates45 { typedef TemplateSel Head; typedef Templates44 Tail; }; template struct Templates46 { typedef TemplateSel Head; typedef Templates45 Tail; }; template struct Templates47 { typedef TemplateSel Head; typedef Templates46 Tail; }; template struct Templates48 { typedef TemplateSel Head; typedef Templates47 Tail; }; template struct Templates49 { typedef TemplateSel Head; typedef Templates48 Tail; }; template struct Templates50 { typedef TemplateSel Head; typedef Templates49 Tail; }; // We don't want to require the users to write TemplatesN<...> directly, // as that would require them to count the length. Templates<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Templates // will appear as Templates in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Templates, and Google Test will translate // that to TemplatesN internally to make error messages // readable. The translation is done by the 'type' member of the // Templates template. 
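These Templates lists are the internal representation behind Google Test's type-parameterized tests: each TYPED_TEST_P body becomes a class template, and registering the case collects those templates into such a list. A hedged usage sketch of the user-facing side (fixture, test, and type names are invented for illustration):

#include <list>
#include <vector>
#include "gtest/gtest.h"

template <typename T>
class ContainerTest : public ::testing::Test {};

TYPED_TEST_CASE_P(ContainerTest);

TYPED_TEST_P(ContainerTest, StartsEmpty) {
  TypeParam container;
  EXPECT_TRUE(container.empty());
}

TYPED_TEST_P(ContainerTest, GrowsWhenPushedInto) {
  TypeParam container;
  container.push_back(typename TypeParam::value_type());
  EXPECT_EQ(1u, container.size());
}

// Collects the test templates defined above for later instantiation.
REGISTER_TYPED_TEST_CASE_P(ContainerTest, StartsEmpty, GrowsWhenPushedInto);

// Runs every registered test once for every type in the list.
typedef ::testing::Types<std::vector<int>, std::list<int> > ContainerTypes;
INSTANTIATE_TYPED_TEST_CASE_P(StdContainers, ContainerTest, ContainerTypes);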
template struct Templates { typedef Templates50 type; }; template <> struct Templates { typedef Templates0 type; }; template struct Templates { typedef Templates1 type; }; template struct Templates { typedef Templates2 type; }; template struct Templates { typedef Templates3 type; }; template struct Templates { typedef Templates4 type; }; template struct Templates { typedef Templates5 type; }; template struct Templates { typedef Templates6 type; }; template struct Templates { typedef Templates7 type; }; template struct Templates { typedef Templates8 type; }; template struct Templates { typedef Templates9 type; }; template struct Templates { typedef Templates10 type; }; template struct Templates { typedef Templates11 type; }; template struct Templates { typedef Templates12 type; }; template struct Templates { typedef Templates13 type; }; template struct Templates { typedef Templates14 type; }; template struct Templates { typedef Templates15 type; }; template struct Templates { typedef Templates16 type; }; template struct Templates { typedef Templates17 type; }; template struct Templates { typedef Templates18 type; }; template struct Templates { typedef Templates19 type; }; template struct Templates { typedef Templates20 type; }; template struct Templates { typedef Templates21 type; }; template struct Templates { typedef Templates22 type; }; template struct Templates { typedef Templates23 type; }; template struct Templates { typedef Templates24 type; }; template struct Templates { typedef Templates25 type; }; template struct Templates { typedef Templates26 type; }; template struct Templates { typedef Templates27 type; }; template struct Templates { typedef Templates28 type; }; template struct Templates { typedef Templates29 type; }; template struct Templates { typedef Templates30 type; }; template struct Templates { typedef Templates31 type; }; template struct Templates { typedef Templates32 type; }; template struct Templates { typedef Templates33 type; }; template struct Templates { typedef Templates34 type; }; template struct Templates { typedef Templates35 type; }; template struct Templates { typedef Templates36 type; }; template struct Templates { typedef Templates37 type; }; template struct Templates { typedef Templates38 type; }; template struct Templates { typedef Templates39 type; }; template struct Templates { typedef Templates40 type; }; template struct Templates { typedef Templates41 type; }; template struct Templates { typedef Templates42 type; }; template struct Templates { typedef Templates43 type; }; template struct Templates { typedef Templates44 type; }; template struct Templates { typedef Templates45 type; }; template struct Templates { typedef Templates46 type; }; template struct Templates { typedef Templates47 type; }; template struct Templates { typedef Templates48 type; }; template struct Templates { typedef Templates49 type; }; // The TypeList template makes it possible to use either a single type // or a Types<...> list in TYPED_TEST_CASE() and // INSTANTIATE_TYPED_TEST_CASE_P(). 
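As the comment above says, TypeList is what lets TYPED_TEST_CASE() accept either a Types<...> list or a single bare type. A short sketch of that user-facing API (fixture and test names invented for illustration):

#include "gtest/gtest.h"

template <typename T>
class CounterTest : public ::testing::Test {
 protected:
  CounterTest() : counter_() {}
  T counter_;
};

// With a list, the test below runs once per listed type ...
typedef ::testing::Types<char, int, unsigned long> CounterTypes;
TYPED_TEST_CASE(CounterTest, CounterTypes);

// ... and, thanks to TypeList, a single bare type would be accepted as well
// (only one TYPED_TEST_CASE per fixture is allowed, hence the comment):
//   TYPED_TEST_CASE(CounterTest, int);

TYPED_TEST(CounterTest, StartsAtZero) {
  // The fixture is a dependent base class here, so members need this->.
  EXPECT_EQ(TypeParam(0), this->counter_);
}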
template struct TypeList { typedef Types1 type; }; template struct TypeList > { typedef typename Types::type type; }; #endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/include/gtest/internal/gtest-type-util.h.pump000066400000000000000000000221451346026536000270750ustar00rootroot00000000000000$$ -*- mode: c++; -*- $var n = 50 $$ Maximum length of type lists we want to support. // Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Type utilities needed for implementing typed and type-parameterized // tests. This file is generated by a SCRIPT. DO NOT EDIT BY HAND! // // Currently we support at most $n types in a list, and at most $n // type-parameterized tests in one type-parameterized test case. // Please contact googletestframework@googlegroups.com if you need // more. #ifndef GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ #define GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ #include "gtest/internal/gtest-port.h" // #ifdef __GNUC__ is too general here. It is possible to use gcc without using // libstdc++ (which is where cxxabi.h comes from). # if GTEST_HAS_CXXABI_H_ # include # elif defined(__HP_aCC) # include # endif // GTEST_HASH_CXXABI_H_ namespace testing { namespace internal { // GetTypeName() returns a human-readable name of type T. // NB: This function is also used in Google Mock, so don't move it inside of // the typed-test-only section below. template std::string GetTypeName() { # if GTEST_HAS_RTTI const char* const name = typeid(T).name(); # if GTEST_HAS_CXXABI_H_ || defined(__HP_aCC) int status = 0; // gcc's implementation of typeid(T).name() mangles the type name, // so we have to demangle it. # if GTEST_HAS_CXXABI_H_ using abi::__cxa_demangle; # endif // GTEST_HAS_CXXABI_H_ char* const readable_name = __cxa_demangle(name, 0, 0, &status); const std::string name_str(status == 0 ? 
readable_name : name); free(readable_name); return name_str; # else return name; # endif // GTEST_HAS_CXXABI_H_ || __HP_aCC # else return ""; # endif // GTEST_HAS_RTTI } #if GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P // AssertyTypeEq::type is defined iff T1 and T2 are the same // type. This can be used as a compile-time assertion to ensure that // two types are equal. template struct AssertTypeEq; template struct AssertTypeEq { typedef bool type; }; // A unique type used as the default value for the arguments of class // template Types. This allows us to simulate variadic templates // (e.g. Types, Type, and etc), which C++ doesn't // support directly. struct None {}; // The following family of struct and struct templates are used to // represent type lists. In particular, TypesN // represents a type list with N types (T1, T2, ..., and TN) in it. // Except for Types0, every struct in the family has two member types: // Head for the first type in the list, and Tail for the rest of the // list. // The empty type list. struct Types0 {}; // Type lists of length 1, 2, 3, and so on. template struct Types1 { typedef T1 Head; typedef Types0 Tail; }; $range i 2..n $for i [[ $range j 1..i $range k 2..i template <$for j, [[typename T$j]]> struct Types$i { typedef T1 Head; typedef Types$(i-1)<$for k, [[T$k]]> Tail; }; ]] } // namespace internal // We don't want to require the users to write TypesN<...> directly, // as that would require them to count the length. Types<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Types // will appear as Types in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Types, and Google Test will translate // that to TypesN internally to make error messages // readable. The translation is done by the 'type' member of the // Types template. $range i 1..n template <$for i, [[typename T$i = internal::None]]> struct Types { typedef internal::Types$n<$for i, [[T$i]]> type; }; template <> struct Types<$for i, [[internal::None]]> { typedef internal::Types0 type; }; $range i 1..n-1 $for i [[ $range j 1..i $range k i+1..n template <$for j, [[typename T$j]]> struct Types<$for j, [[T$j]]$for k[[, internal::None]]> { typedef internal::Types$i<$for j, [[T$j]]> type; }; ]] namespace internal { # define GTEST_TEMPLATE_ template class // The template "selector" struct TemplateSel is used to // represent Tmpl, which must be a class template with one type // parameter, as a type. TemplateSel::Bind::type is defined // as the type Tmpl. This allows us to actually instantiate the // template "selected" by TemplateSel. // // This trick is necessary for simulating typedef for class templates, // which C++ doesn't support directly. template struct TemplateSel { template struct Bind { typedef Tmpl type; }; }; # define GTEST_BIND_(TmplSel, T) \ TmplSel::template Bind::type // A unique struct template used as the default value for the // arguments of class template Templates. This allows us to simulate // variadic templates (e.g. Templates, Templates, // and etc), which C++ doesn't support directly. template struct NoneT {}; // The following family of struct and struct templates are used to // represent template lists. In particular, TemplatesN represents a list of N templates (T1, T2, ..., and TN). 
Except // for Templates0, every struct in the family has two member types: // Head for the selector of the first template in the list, and Tail // for the rest of the list. // The empty template list. struct Templates0 {}; // Template lists of length 1, 2, 3, and so on. template struct Templates1 { typedef TemplateSel Head; typedef Templates0 Tail; }; $range i 2..n $for i [[ $range j 1..i $range k 2..i template <$for j, [[GTEST_TEMPLATE_ T$j]]> struct Templates$i { typedef TemplateSel Head; typedef Templates$(i-1)<$for k, [[T$k]]> Tail; }; ]] // We don't want to require the users to write TemplatesN<...> directly, // as that would require them to count the length. Templates<...> is much // easier to write, but generates horrible messages when there is a // compiler error, as gcc insists on printing out each template // argument, even if it has the default value (this means Templates // will appear as Templates in the compiler // errors). // // Our solution is to combine the best part of the two approaches: a // user would write Templates, and Google Test will translate // that to TemplatesN internally to make error messages // readable. The translation is done by the 'type' member of the // Templates template. $range i 1..n template <$for i, [[GTEST_TEMPLATE_ T$i = NoneT]]> struct Templates { typedef Templates$n<$for i, [[T$i]]> type; }; template <> struct Templates<$for i, [[NoneT]]> { typedef Templates0 type; }; $range i 1..n-1 $for i [[ $range j 1..i $range k i+1..n template <$for j, [[GTEST_TEMPLATE_ T$j]]> struct Templates<$for j, [[T$j]]$for k[[, NoneT]]> { typedef Templates$i<$for j, [[T$j]]> type; }; ]] // The TypeList template makes it possible to use either a single type // or a Types<...> list in TYPED_TEST_CASE() and // INSTANTIATE_TYPED_TEST_CASE_P(). template struct TypeList { typedef Types1 type; }; $range i 1..n template <$for i, [[typename T$i]]> struct TypeList > { typedef typename Types<$for i, [[T$i]]>::type type; }; #endif // GTEST_HAS_TYPED_TEST || GTEST_HAS_TYPED_TEST_P } // namespace internal } // namespace testing #endif // GTEST_INCLUDE_GTEST_INTERNAL_GTEST_TYPE_UTIL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/000077500000000000000000000000001346026536000166135ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/acx_pthread.m4000066400000000000000000000317661346026536000213540ustar00rootroot00000000000000# This was retrieved from # http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?revision=1277&root=avahi # See also (perhaps for new versions?) # http://svn.0pointer.de/viewvc/trunk/common/acx_pthread.m4?root=avahi # # We've rewritten the inconsistency check code (from avahi), to work # more broadly. In particular, it no longer assumes ld accepts -zdefs. # This caused a restructing of the code, but the functionality has only # changed a little. dnl @synopsis ACX_PTHREAD([ACTION-IF-FOUND[, ACTION-IF-NOT-FOUND]]) dnl dnl @summary figure out how to build C programs using POSIX threads dnl dnl This macro figures out how to build C programs using POSIX threads. dnl It sets the PTHREAD_LIBS output variable to the threads library and dnl linker flags, and the PTHREAD_CFLAGS output variable to any special dnl C compiler flags that are needed. (The user can also force certain dnl compiler flags/libs to be tested by setting these environment dnl variables.) dnl dnl Also sets PTHREAD_CC to any special C compiler that is needed for dnl multi-threaded programs (defaults to the value of CC otherwise). 
dnl (This is necessary on AIX to use the special cc_r compiler alias.) dnl dnl NOTE: You are assumed to not only compile your program with these dnl flags, but also link it with them as well. e.g. you should link dnl with $PTHREAD_CC $CFLAGS $PTHREAD_CFLAGS $LDFLAGS ... $PTHREAD_LIBS dnl $LIBS dnl dnl If you are only building threads programs, you may wish to use dnl these variables in your default LIBS, CFLAGS, and CC: dnl dnl LIBS="$PTHREAD_LIBS $LIBS" dnl CFLAGS="$CFLAGS $PTHREAD_CFLAGS" dnl CC="$PTHREAD_CC" dnl dnl In addition, if the PTHREAD_CREATE_JOINABLE thread-attribute dnl constant has a nonstandard name, defines PTHREAD_CREATE_JOINABLE to dnl that name (e.g. PTHREAD_CREATE_UNDETACHED on AIX). dnl dnl ACTION-IF-FOUND is a list of shell commands to run if a threads dnl library is found, and ACTION-IF-NOT-FOUND is a list of commands to dnl run it if it is not found. If ACTION-IF-FOUND is not specified, the dnl default action will define HAVE_PTHREAD. dnl dnl Please let the authors know if this macro fails on any platform, or dnl if you have any other suggestions or comments. This macro was based dnl on work by SGJ on autoconf scripts for FFTW (www.fftw.org) (with dnl help from M. Frigo), as well as ac_pthread and hb_pthread macros dnl posted by Alejandro Forero Cuervo to the autoconf macro repository. dnl We are also grateful for the helpful feedback of numerous users. dnl dnl @category InstalledPackages dnl @author Steven G. Johnson dnl @version 2006-05-29 dnl @license GPLWithACException dnl dnl Checks for GCC shared/pthread inconsistency based on work by dnl Marcin Owsiany AC_DEFUN([ACX_PTHREAD], [ AC_REQUIRE([AC_CANONICAL_HOST]) AC_LANG_SAVE AC_LANG_C acx_pthread_ok=no # We used to check for pthread.h first, but this fails if pthread.h # requires special compiler flags (e.g. on True64 or Sequent). # It gets checked for in the link test anyway. # First of all, check if the user has set any of the PTHREAD_LIBS, # etcetera environment variables, and if threads linking works using # them: if test x"$PTHREAD_LIBS$PTHREAD_CFLAGS" != x; then save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" AC_MSG_CHECKING([for pthread_join in LIBS=$PTHREAD_LIBS with CFLAGS=$PTHREAD_CFLAGS]) AC_TRY_LINK_FUNC(pthread_join, acx_pthread_ok=yes) AC_MSG_RESULT($acx_pthread_ok) if test x"$acx_pthread_ok" = xno; then PTHREAD_LIBS="" PTHREAD_CFLAGS="" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" fi # We must check for the threads library under a number of different # names; the ordering is very important because some systems # (e.g. DEC) have both -lpthread and -lpthreads, where one of the # libraries is broken (non-POSIX). # Create a list of thread flags to try. Items starting with a "-" are # C compiler flags, and other items are library names, except for "none" # which indicates that we try without any flags at all, and "pthread-config" # which is a program returning the flags for the Pth emulation library. acx_pthread_flags="pthreads none -Kthread -kthread lthread -pthread -pthreads -mthreads pthread --thread-safe -mt pthread-config" # The ordering *is* (sometimes) important. 
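For each candidate in acx_pthread_flags, the macro below tries to compile and link (but never run) a probe that references several pthread entry points; expressed as a standalone translation unit, the AC_TRY_LINK body used further down is roughly the following (reproduced only as an illustration of what the check resolves at link time; real code would of course pass valid arguments):

#include <pthread.h>

int main() {
  pthread_t th;
  // Touch the symbols the macro cares about: pthread_join (in -lpthread on
  // IRIX while pthread_create is in libc), pthread_attr_init (DEC quirk),
  // pthread_cleanup_push (no non-functional libc stub on Solaris), and
  // pthread_create. The probe only has to link, not behave sensibly.
  pthread_join(th, 0);
  pthread_attr_init(0);
  pthread_cleanup_push(0, 0);
  pthread_create(0, 0, 0, 0);
  pthread_cleanup_pop(0);
  return 0;
}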
Some notes on the # individual items follow: # pthreads: AIX (must check this before -lpthread) # none: in case threads are in libc; should be tried before -Kthread and # other compiler flags to prevent continual compiler warnings # -Kthread: Sequent (threads in libc, but -Kthread needed for pthread.h) # -kthread: FreeBSD kernel threads (preferred to -pthread since SMP-able) # lthread: LinuxThreads port on FreeBSD (also preferred to -pthread) # -pthread: Linux/gcc (kernel threads), BSD/gcc (userland threads) # -pthreads: Solaris/gcc # -mthreads: Mingw32/gcc, Lynx/gcc # -mt: Sun Workshop C (may only link SunOS threads [-lthread], but it # doesn't hurt to check since this sometimes defines pthreads too; # also defines -D_REENTRANT) # ... -mt is also the pthreads flag for HP/aCC # pthread: Linux, etcetera # --thread-safe: KAI C++ # pthread-config: use pthread-config program (for GNU Pth library) case "${host_cpu}-${host_os}" in *solaris*) # On Solaris (at least, for some versions), libc contains stubbed # (non-functional) versions of the pthreads routines, so link-based # tests will erroneously succeed. (We need to link with -pthreads/-mt/ # -lpthread.) (The stubs are missing pthread_cleanup_push, or rather # a function called by this macro, so we could check for that, but # who knows whether they'll stub that too in a future libc.) So, # we'll just look for -pthreads and -lpthread first: acx_pthread_flags="-pthreads pthread -mt -pthread $acx_pthread_flags" ;; esac if test x"$acx_pthread_ok" = xno; then for flag in $acx_pthread_flags; do case $flag in none) AC_MSG_CHECKING([whether pthreads work without any flags]) ;; -*) AC_MSG_CHECKING([whether pthreads work with $flag]) PTHREAD_CFLAGS="$flag" ;; pthread-config) AC_CHECK_PROG(acx_pthread_config, pthread-config, yes, no) if test x"$acx_pthread_config" = xno; then continue; fi PTHREAD_CFLAGS="`pthread-config --cflags`" PTHREAD_LIBS="`pthread-config --ldflags` `pthread-config --libs`" ;; *) AC_MSG_CHECKING([for the pthreads library -l$flag]) PTHREAD_LIBS="-l$flag" ;; esac save_LIBS="$LIBS" save_CFLAGS="$CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Check for various functions. We must include pthread.h, # since some functions may be macros. (On the Sequent, we # need a special flag -Kthread to make this header compile.) # We check for pthread_join because it is in -lpthread on IRIX # while pthread_create is in libc. We check for pthread_attr_init # due to DEC craziness with -lpthreads. We check for # pthread_cleanup_push because it is one of the few pthread # functions on Solaris that doesn't have a non-functional libc stub. # We try pthread_create on general principles. AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [acx_pthread_ok=yes]) LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" AC_MSG_RESULT($acx_pthread_ok) if test "x$acx_pthread_ok" = xyes; then break; fi PTHREAD_LIBS="" PTHREAD_CFLAGS="" done fi # Various other checks: if test "x$acx_pthread_ok" = xyes; then save_LIBS="$LIBS" LIBS="$PTHREAD_LIBS $LIBS" save_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS $PTHREAD_CFLAGS" # Detect AIX lossage: JOINABLE attribute is called UNDETACHED. 
AC_MSG_CHECKING([for joinable pthread attribute]) attr_name=unknown for attr in PTHREAD_CREATE_JOINABLE PTHREAD_CREATE_UNDETACHED; do AC_TRY_LINK([#include ], [int attr=$attr; return attr;], [attr_name=$attr; break]) done AC_MSG_RESULT($attr_name) if test "$attr_name" != PTHREAD_CREATE_JOINABLE; then AC_DEFINE_UNQUOTED(PTHREAD_CREATE_JOINABLE, $attr_name, [Define to necessary symbol if this constant uses a non-standard name on your system.]) fi AC_MSG_CHECKING([if more special flags are required for pthreads]) flag=no case "${host_cpu}-${host_os}" in *-aix* | *-freebsd* | *-darwin*) flag="-D_THREAD_SAFE";; *solaris* | *-osf* | *-hpux*) flag="-D_REENTRANT";; esac AC_MSG_RESULT(${flag}) if test "x$flag" != xno; then PTHREAD_CFLAGS="$flag $PTHREAD_CFLAGS" fi LIBS="$save_LIBS" CFLAGS="$save_CFLAGS" # More AIX lossage: must compile with xlc_r or cc_r if test x"$GCC" != xyes; then AC_CHECK_PROGS(PTHREAD_CC, xlc_r cc_r, ${CC}) else PTHREAD_CC=$CC fi # The next part tries to detect GCC inconsistency with -shared on some # architectures and systems. The problem is that in certain # configurations, when -shared is specified, GCC "forgets" to # internally use various flags which are still necessary. # # Prepare the flags # save_CFLAGS="$CFLAGS" save_LIBS="$LIBS" save_CC="$CC" # Try with the flags determined by the earlier checks. # # -Wl,-z,defs forces link-time symbol resolution, so that the # linking checks with -shared actually have any value # # FIXME: -fPIC is required for -shared on many architectures, # so we specify it here, but the right way would probably be to # properly detect whether it is actually required. CFLAGS="-shared -fPIC -Wl,-z,defs $CFLAGS $PTHREAD_CFLAGS" LIBS="$PTHREAD_LIBS $LIBS" CC="$PTHREAD_CC" # In order not to create several levels of indentation, we test # the value of "$done" until we find the cure or run out of ideas. done="no" # First, make sure the CFLAGS we added are actually accepted by our # compiler. If not (and OS X's ld, for instance, does not accept -z), # then we can't do this test. 
if test x"$done" = xno; then AC_MSG_CHECKING([whether to check for GCC pthread/shared inconsistencies]) AC_TRY_LINK(,, , [done=yes]) if test "x$done" = xyes ; then AC_MSG_RESULT([no]) else AC_MSG_RESULT([yes]) fi fi if test x"$done" = xno; then AC_MSG_CHECKING([whether -pthread is sufficient with -shared]) AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi fi # # Linux gcc on some architectures such as mips/mipsel forgets # about -lpthread # if test x"$done" = xno; then AC_MSG_CHECKING([whether -lpthread fixes that]) LIBS="-lpthread $PTHREAD_LIBS $save_LIBS" AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) PTHREAD_LIBS="-lpthread $PTHREAD_LIBS" else AC_MSG_RESULT([no]) fi fi # # FreeBSD 4.10 gcc forgets to use -lc_r instead of -lc # if test x"$done" = xno; then AC_MSG_CHECKING([whether -lc_r fixes that]) LIBS="-lc_r $PTHREAD_LIBS $save_LIBS" AC_TRY_LINK([#include ], [pthread_t th; pthread_join(th, 0); pthread_attr_init(0); pthread_cleanup_push(0, 0); pthread_create(0,0,0,0); pthread_cleanup_pop(0); ], [done=yes]) if test "x$done" = xyes; then AC_MSG_RESULT([yes]) PTHREAD_LIBS="-lc_r $PTHREAD_LIBS" else AC_MSG_RESULT([no]) fi fi if test x"$done" = xno; then # OK, we have run out of ideas AC_MSG_WARN([Impossible to determine how to use pthreads with shared libraries]) # so it's not safe to assume that we may use pthreads acx_pthread_ok=no fi CFLAGS="$save_CFLAGS" LIBS="$save_LIBS" CC="$save_CC" else PTHREAD_CC="$CC" fi AC_SUBST(PTHREAD_LIBS) AC_SUBST(PTHREAD_CFLAGS) AC_SUBST(PTHREAD_CC) # Finally, execute ACTION-IF-FOUND/ACTION-IF-NOT-FOUND: if test x"$acx_pthread_ok" = xyes; then ifelse([$1],,AC_DEFINE(HAVE_PTHREAD,1,[Define if you have POSIX threads libraries and header files.]),[$1]) : else acx_pthread_ok=no $2 fi AC_LANG_RESTORE ])dnl ACX_PTHREAD ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/gtest.m4000066400000000000000000000062211346026536000202040ustar00rootroot00000000000000dnl GTEST_LIB_CHECK([minimum version [, dnl action if found [,action if not found]]]) dnl dnl Check for the presence of the Google Test library, optionally at a minimum dnl version, and indicate a viable version with the HAVE_GTEST flag. It defines dnl standard variables for substitution including GTEST_CPPFLAGS, dnl GTEST_CXXFLAGS, GTEST_LDFLAGS, and GTEST_LIBS. It also defines dnl GTEST_VERSION as the version of Google Test found. Finally, it provides dnl optional custom action slots in the event GTEST is found or not. AC_DEFUN([GTEST_LIB_CHECK], [ dnl Provide a flag to enable or disable Google Test usage. AC_ARG_ENABLE([gtest], [AS_HELP_STRING([--enable-gtest], [Enable tests using the Google C++ Testing Framework. 
(Default is enabled.)])], [], [enable_gtest=]) AC_ARG_VAR([GTEST_CONFIG], [The exact path of Google Test's 'gtest-config' script.]) AC_ARG_VAR([GTEST_CPPFLAGS], [C-like preprocessor flags for Google Test.]) AC_ARG_VAR([GTEST_CXXFLAGS], [C++ compile flags for Google Test.]) AC_ARG_VAR([GTEST_LDFLAGS], [Linker path and option flags for Google Test.]) AC_ARG_VAR([GTEST_LIBS], [Library linking flags for Google Test.]) AC_ARG_VAR([GTEST_VERSION], [The version of Google Test available.]) HAVE_GTEST="no" AS_IF([test "x${enable_gtest}" != "xno"], [AC_MSG_CHECKING([for 'gtest-config']) AS_IF([test "x${enable_gtest}" != "xyes"], [AS_IF([test -x "${enable_gtest}/scripts/gtest-config"], [GTEST_CONFIG="${enable_gtest}/scripts/gtest-config"], [GTEST_CONFIG="${enable_gtest}/bin/gtest-config"]) AS_IF([test -x "${GTEST_CONFIG}"], [], [AC_MSG_RESULT([no]) AC_MSG_ERROR([dnl Unable to locate either a built or installed Google Test. The specific location '${enable_gtest}' was provided for a built or installed Google Test, but no 'gtest-config' script could be found at this location.]) ])], [AC_PATH_PROG([GTEST_CONFIG], [gtest-config])]) AS_IF([test -x "${GTEST_CONFIG}"], [AC_MSG_RESULT([${GTEST_CONFIG}]) m4_ifval([$1], [_gtest_min_version="--min-version=$1" AC_MSG_CHECKING([for Google Test at least version >= $1])], [_gtest_min_version="--min-version=0" AC_MSG_CHECKING([for Google Test])]) AS_IF([${GTEST_CONFIG} ${_gtest_min_version}], [AC_MSG_RESULT([yes]) HAVE_GTEST='yes'], [AC_MSG_RESULT([no])])], [AC_MSG_RESULT([no])]) AS_IF([test "x${HAVE_GTEST}" = "xyes"], [GTEST_CPPFLAGS=`${GTEST_CONFIG} --cppflags` GTEST_CXXFLAGS=`${GTEST_CONFIG} --cxxflags` GTEST_LDFLAGS=`${GTEST_CONFIG} --ldflags` GTEST_LIBS=`${GTEST_CONFIG} --libs` GTEST_VERSION=`${GTEST_CONFIG} --version` AC_DEFINE([HAVE_GTEST],[1],[Defined when Google Test is available.])], [AS_IF([test "x${enable_gtest}" = "xyes"], [AC_MSG_ERROR([dnl Google Test was enabled, but no viable version could be found.]) ])])]) AC_SUBST([HAVE_GTEST]) AM_CONDITIONAL([HAVE_GTEST],[test "x$HAVE_GTEST" = "xyes"]) AS_IF([test "x$HAVE_GTEST" = "xyes"], [m4_ifval([$2], [$2])], [m4_ifval([$3], [$3])]) ]) ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/libtool.m4000066400000000000000000010604341346026536000205310ustar00rootroot00000000000000# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*- # # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. m4_define([_LT_COPYING], [dnl # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, # 2006, 2007, 2008, 2009, 2010, 2011 Free Software # Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is part of GNU Libtool. # # GNU Libtool is free software; you can redistribute it and/or # modify it under the terms of the GNU General Public License as # published by the Free Software Foundation; either version 2 of # the License, or (at your option) any later version. # # As a special exception to the GNU General Public License, # if you distribute this file as part of a program or library that # is built using GNU Libtool, you may include this file under the # same distribution terms that you use for the rest of that program. 
# # GNU Libtool is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with GNU Libtool; see the file COPYING. If not, a copy # can be downloaded from http://www.gnu.org/licenses/gpl.html, or # obtained by writing to the Free Software Foundation, Inc., # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. ]) # serial 57 LT_INIT # LT_PREREQ(VERSION) # ------------------ # Complain and exit if this libtool version is less that VERSION. m4_defun([LT_PREREQ], [m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1, [m4_default([$3], [m4_fatal([Libtool version $1 or higher is required], 63)])], [$2])]) # _LT_CHECK_BUILDDIR # ------------------ # Complain if the absolute build directory name contains unusual characters m4_defun([_LT_CHECK_BUILDDIR], [case `pwd` in *\ * | *\ *) AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;; esac ]) # LT_INIT([OPTIONS]) # ------------------ AC_DEFUN([LT_INIT], [AC_PREREQ([2.58])dnl We use AC_INCLUDES_DEFAULT AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl AC_BEFORE([$0], [LT_LANG])dnl AC_BEFORE([$0], [LT_OUTPUT])dnl AC_BEFORE([$0], [LTDL_INIT])dnl m4_require([_LT_CHECK_BUILDDIR])dnl dnl Autoconf doesn't catch unexpanded LT_ macros by default: m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4 dnl unless we require an AC_DEFUNed macro: AC_REQUIRE([LTOPTIONS_VERSION])dnl AC_REQUIRE([LTSUGAR_VERSION])dnl AC_REQUIRE([LTVERSION_VERSION])dnl AC_REQUIRE([LTOBSOLETE_VERSION])dnl m4_require([_LT_PROG_LTMAIN])dnl _LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}]) dnl Parse OPTIONS _LT_SET_OPTIONS([$0], [$1]) # This can be used to rebuild libtool when needed LIBTOOL_DEPS="$ltmain" # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' AC_SUBST(LIBTOOL)dnl _LT_SETUP # Only expand once: m4_define([LT_INIT]) ])# LT_INIT # Old names: AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT]) AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PROG_LIBTOOL], []) dnl AC_DEFUN([AM_PROG_LIBTOOL], []) # _LT_CC_BASENAME(CC) # ------------------- # Calculate cc_basename. Skip known compiler wrappers and cross-prefix. m4_defun([_LT_CC_BASENAME], [for cc_temp in $1""; do case $cc_temp in compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; \-*) ;; *) break;; esac done cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` ]) # _LT_FILEUTILS_DEFAULTS # ---------------------- # It is okay to use these file commands and assume they have been set # sensibly after `m4_require([_LT_FILEUTILS_DEFAULTS])'. 
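# The assignments below use the POSIX `: ${VAR="default"}' idiom: each
# variable keeps whatever value it already has (for example one exported by
# the caller) and only falls back to the default shown here. A rough shell
# equivalent for the first assignment would be:
#   if test -z "${CP+set}"; then CP="cp -f"; fi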
m4_defun([_LT_FILEUTILS_DEFAULTS], [: ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} ])# _LT_FILEUTILS_DEFAULTS # _LT_SETUP # --------- m4_defun([_LT_SETUP], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl _LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl dnl _LT_DECL([], [host_alias], [0], [The host system])dnl _LT_DECL([], [host], [0])dnl _LT_DECL([], [host_os], [0])dnl dnl _LT_DECL([], [build_alias], [0], [The build system])dnl _LT_DECL([], [build], [0])dnl _LT_DECL([], [build_os], [0])dnl dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl dnl AC_REQUIRE([AC_PROG_LN_S])dnl test -z "$LN_S" && LN_S="ln -s" _LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl dnl AC_REQUIRE([LT_CMD_MAX_LEN])dnl _LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl m4_require([_LT_CMD_RELOAD])dnl m4_require([_LT_CHECK_MAGIC_METHOD])dnl m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl m4_require([_LT_CMD_OLD_ARCHIVE])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_WITH_SYSROOT])dnl _LT_CONFIG_LIBTOOL_INIT([ # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes INIT. if test -n "\${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi ]) if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi _LT_CHECK_OBJDIR m4_require([_LT_TAG_COMPILER])dnl case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes # All known linkers require a `.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a with_gnu_ld="$lt_cv_prog_gnu_ld" old_CC="$CC" old_CFLAGS="$CFLAGS" # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o _LT_CC_BASENAME([$compiler]) # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then _LT_PATH_MAGIC fi ;; esac # Use C for the default configuration in the libtool script LT_SUPPORTED_TAG([CC]) _LT_LANG_C_CONFIG _LT_LANG_DEFAULT_CONFIG _LT_CONFIG_COMMANDS ])# _LT_SETUP # _LT_PREPARE_SED_QUOTE_VARS # -------------------------- # Define a few sed substitution that help us do robust quoting. m4_defun([_LT_PREPARE_SED_QUOTE_VARS], [# Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\([["`$\\]]\)/\\\1/g' # Same as above, but do not quote variable references. double_quote_subst='s/\([["`\\]]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. 
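# (A rough illustration of how $sed_quote_subst above is used elsewhere in
#  this file: a value such as
#      ldflags='-L"/opt/my libs" -lfoo'
#  piped through `$SED "$sed_quote_subst"' is expected to come back as
#      -L\"/opt/my libs\" -lfoo
#  i.e. with ", `, $ and \ escaped so the string survives being re-read
#  inside double quotes by the generated script.)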
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ]) # _LT_PROG_LTMAIN # --------------- # Note that this code is called both from `configure', and `config.status' # now that we use AC_CONFIG_COMMANDS to generate libtool. Notably, # `config.status' has no value for ac_aux_dir unless we are using Automake, # so we pass a copy along to make sure it has a sensible value anyway. m4_defun([_LT_PROG_LTMAIN], [m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl _LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir']) ltmain="$ac_aux_dir/ltmain.sh" ])# _LT_PROG_LTMAIN ## ------------------------------------- ## ## Accumulate code for creating libtool. ## ## ------------------------------------- ## # So that we can recreate a full libtool script including additional # tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS # in macros and then make a single call at the end using the `libtool' # label. # _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS]) # ---------------------------------------- # Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL_INIT], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_INIT], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_INIT]) # _LT_CONFIG_LIBTOOL([COMMANDS]) # ------------------------------ # Register COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS]) # _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS]) # ----------------------------------------------------- m4_defun([_LT_CONFIG_SAVE_COMMANDS], [_LT_CONFIG_LIBTOOL([$1]) _LT_CONFIG_LIBTOOL_INIT([$2]) ]) # _LT_FORMAT_COMMENT([COMMENT]) # ----------------------------- # Add leading comment marks to the start of each line, and a trailing # full-stop to the whole comment if one is not present already. m4_define([_LT_FORMAT_COMMENT], [m4_ifval([$1], [ m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])], [['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.]) )]) ## ------------------------ ## ## FIXME: Eliminate VARNAME ## ## ------------------------ ## # _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?]) # ------------------------------------------------------------------- # CONFIGNAME is the name given to the value in the libtool script. # VARNAME is the (base) name used in the configure script. # VALUE may be 0, 1 or 2 for a computed quote escaped value based on # VARNAME. Any other value will be used directly. 
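# As a concrete illustration, a typical call made earlier in this file is:
#   _LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])
# i.e. the name written into the libtool script defaults to the configure
# variable name (LN_S), the value is quote-escaped once (VALUE=1), and the
# description is emitted as the comment above that setting in the generated
# libtool script.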
m4_define([_LT_DECL], [lt_if_append_uniq([lt_decl_varnames], [$2], [, ], [lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name], [m4_ifval([$1], [$1], [$2])]) lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3]) m4_ifval([$4], [lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])]) lt_dict_add_subkey([lt_decl_dict], [$2], [tagged?], [m4_ifval([$5], [yes], [no])])]) ]) # _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION]) # -------------------------------------------------------- m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])]) # lt_decl_tag_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_tag_varnames], [_lt_decl_filter([tagged?], [yes], $@)]) # _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..]) # --------------------------------------------------------- m4_define([_lt_decl_filter], [m4_case([$#], [0], [m4_fatal([$0: too few arguments: $#])], [1], [m4_fatal([$0: too few arguments: $#: $1])], [2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)], [3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)], [lt_dict_filter([lt_decl_dict], $@)])[]dnl ]) # lt_decl_quote_varnames([SEPARATOR], [VARNAME1...]) # -------------------------------------------------- m4_define([lt_decl_quote_varnames], [_lt_decl_filter([value], [1], $@)]) # lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_dquote_varnames], [_lt_decl_filter([value], [2], $@)]) # lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_varnames_tagged], [m4_assert([$# <= 2])dnl _$0(m4_quote(m4_default([$1], [[, ]])), m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]), m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))]) m4_define([_lt_decl_varnames_tagged], [m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])]) # lt_decl_all_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_all_varnames], [_$0(m4_quote(m4_default([$1], [[, ]])), m4_if([$2], [], m4_quote(lt_decl_varnames), m4_quote(m4_shift($@))))[]dnl ]) m4_define([_lt_decl_all_varnames], [lt_join($@, lt_decl_varnames_tagged([$1], lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl ]) # _LT_CONFIG_STATUS_DECLARE([VARNAME]) # ------------------------------------ # Quote a variable value, and forward it to `config.status' so that its # declaration there will have the same value as in `configure'. VARNAME # must have a single quote delimited value for this to work. m4_define([_LT_CONFIG_STATUS_DECLARE], [$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`']) # _LT_CONFIG_STATUS_DECLARATIONS # ------------------------------ # We delimit libtool config variables with single quotes, so when # we write them to config.status, we have to be sure to quote all # embedded single quotes properly. 
In configure, this macro expands # each variable declared with _LT_DECL (and _LT_TAGDECL) into: # # ='`$ECHO "$" | $SED "$delay_single_quote_subst"`' m4_defun([_LT_CONFIG_STATUS_DECLARATIONS], [m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames), [m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAGS # ---------------- # Output comment and list of tags supported by the script m4_defun([_LT_LIBTOOL_TAGS], [_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl available_tags="_LT_TAGS"dnl ]) # _LT_LIBTOOL_DECLARE(VARNAME, [TAG]) # ----------------------------------- # Extract the dictionary values for VARNAME (optionally with TAG) and # expand to a commented shell variable setting: # # # Some comment about what VAR is for. # visible_name=$lt_internal_name m4_define([_LT_LIBTOOL_DECLARE], [_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [description])))[]dnl m4_pushdef([_libtool_name], m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])), [0], [_libtool_name=[$]$1], [1], [_libtool_name=$lt_[]$1], [2], [_libtool_name=$lt_[]$1], [_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl ]) # _LT_LIBTOOL_CONFIG_VARS # ----------------------- # Produce commented declarations of non-tagged libtool config variables # suitable for insertion in the LIBTOOL CONFIG section of the `libtool' # script. Tagged libtool config variables (even for the LIBTOOL CONFIG # section) are produced by _LT_LIBTOOL_TAG_VARS. m4_defun([_LT_LIBTOOL_CONFIG_VARS], [m4_foreach([_lt_var], m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAG_VARS(TAG) # ------------------------- m4_define([_LT_LIBTOOL_TAG_VARS], [m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])]) # _LT_TAGVAR(VARNAME, [TAGNAME]) # ------------------------------ m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])]) # _LT_CONFIG_COMMANDS # ------------------- # Send accumulated output to $CONFIG_STATUS. Thanks to the lists of # variables for single and double quote escaping we saved from calls # to _LT_DECL, we can put quote escaped variables declarations # into `config.status', and then the shell code to quote escape them in # for loops in `config.status'. Finally, any additional code accumulated # from calls to _LT_CONFIG_LIBTOOL_INIT is expanded. m4_defun([_LT_CONFIG_COMMANDS], [AC_PROVIDE_IFELSE([LT_OUTPUT], dnl If the libtool generation code has been placed in $CONFIG_LT, dnl instead of duplicating it all over again into config.status, dnl then we will have config.status run $CONFIG_LT later, so it dnl needs to know what name is stored there: [AC_CONFIG_COMMANDS([libtool], [$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])], dnl If the libtool generation code is destined for config.status, dnl expand the accumulated commands and init code now: [AC_CONFIG_COMMANDS([libtool], [_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])]) ])#_LT_CONFIG_COMMANDS # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT], [ # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. 
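# The next line first probes `unset CDPATH' inside a subshell, so a shell
# that objects to unsetting an unset variable cannot abort the generated
# script, and only then unsets it for real; a rough equivalent would be:
#   if (unset CDPATH) >/dev/null 2>&1; then unset CDPATH; fi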
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' _LT_CONFIG_STATUS_DECLARATIONS LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$[]1 _LTECHO_EOF' } # Quote evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_quote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_dquote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done _LT_OUTPUT_LIBTOOL_INIT ]) # _LT_GENERATED_FILE_INIT(FILE, [COMMENT]) # ------------------------------------ # Generate a child script FILE with all initialization necessary to # reuse the environment learned by the parent script, and make the # file executable. If COMMENT is supplied, it is inserted after the # `#!' sequence but before initialization text begins. After this # macro, additional text can be appended to FILE to form the body of # the child script. The macro ends with non-zero status if the # file could not be fully written (such as if the disk is full). m4_ifdef([AS_INIT_GENERATED], [m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])], [m4_defun([_LT_GENERATED_FILE_INIT], [m4_require([AS_PREPARE])]dnl [m4_pushdef([AS_MESSAGE_LOG_FD])]dnl [lt_write_fail=0 cat >$1 <<_ASEOF || lt_write_fail=1 #! $SHELL # Generated by $as_me. $2 SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$1 <<\_ASEOF || lt_write_fail=1 AS_SHELL_SANITIZE _AS_PREPARE exec AS_MESSAGE_FD>&1 _ASEOF test $lt_write_fail = 0 && chmod +x $1[]dnl m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT # LT_OUTPUT # --------- # This macro allows early generation of the libtool script (before # AC_OUTPUT is called), incase it is used in configure for compilation # tests. AC_DEFUN([LT_OUTPUT], [: ${CONFIG_LT=./config.lt} AC_MSG_NOTICE([creating $CONFIG_LT]) _LT_GENERATED_FILE_INIT(["$CONFIG_LT"], [# Run this file to recreate a libtool stub with the current configuration.]) cat >>"$CONFIG_LT" <<\_LTEOF lt_cl_silent=false exec AS_MESSAGE_LOG_FD>>config.log { echo AS_BOX([Running $as_me.]) } >&AS_MESSAGE_LOG_FD lt_cl_help="\ \`$as_me' creates a local libtool stub from the current configuration, for use in further configure time tests before the real libtool is generated. Usage: $[0] [[OPTIONS]] -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files Report bugs to ." lt_cl_version="\ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION]) configured by $[0], generated by m4_PACKAGE_STRING. Copyright (C) 2011 Free Software Foundation, Inc. This config.lt script is free software; the Free Software Foundation gives unlimited permision to copy, distribute and modify it." 
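# For example, after configure has written this file it can be re-run by
# hand as
#   ./config.lt --quiet
# to recreate the local libtool stub from the recorded configuration
# without re-running configure itself.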
while test $[#] != 0 do case $[1] in --version | --v* | -V ) echo "$lt_cl_version"; exit 0 ;; --help | --h* | -h ) echo "$lt_cl_help"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --quiet | --q* | --silent | --s* | -q ) lt_cl_silent=: ;; -*) AC_MSG_ERROR([unrecognized option: $[1] Try \`$[0] --help' for more information.]) ;; *) AC_MSG_ERROR([unrecognized argument: $[1] Try \`$[0] --help' for more information.]) ;; esac shift done if $lt_cl_silent; then exec AS_MESSAGE_FD>/dev/null fi _LTEOF cat >>"$CONFIG_LT" <<_LTEOF _LT_OUTPUT_LIBTOOL_COMMANDS_INIT _LTEOF cat >>"$CONFIG_LT" <<\_LTEOF AC_MSG_NOTICE([creating $ofile]) _LT_OUTPUT_LIBTOOL_COMMANDS AS_EXIT(0) _LTEOF chmod +x "$CONFIG_LT" # configure is writing to config.log, but config.lt does its own redirection, # appending to config.log, which fails on DOS, as config.log is still kept # open by configure. Here we exec the FD to /dev/null, effectively closing # config.log, so it can be properly (re)opened and appended to by config.lt. lt_cl_success=: test "$silent" = yes && lt_config_lt_args="$lt_config_lt_args --quiet" exec AS_MESSAGE_LOG_FD>/dev/null $SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false exec AS_MESSAGE_LOG_FD>>config.log $lt_cl_success || AS_EXIT(1) ])# LT_OUTPUT # _LT_CONFIG(TAG) # --------------- # If TAG is the built-in tag, create an initial libtool script with a # default configuration from the untagged config vars. Otherwise add code # to config.status for appending the configuration named by TAG from the # matching tagged config vars. m4_defun([_LT_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_CONFIG_SAVE_COMMANDS([ m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl m4_if(_LT_TAG, [C], [ # See if we are running on zsh, and set the options which allow our # commands through without removal of \ escapes. if test -n "${ZSH_VERSION+set}" ; then setopt NO_GLOB_SUBST fi cfgfile="${ofile}T" trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL # `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. # Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION # Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: # NOTE: Changes made to this file will be lost: look at ltmain.sh. # _LT_COPYING _LT_LIBTOOL_TAGS # ### BEGIN LIBTOOL CONFIG _LT_LIBTOOL_CONFIG_VARS _LT_LIBTOOL_TAG_VARS # ### END LIBTOOL CONFIG _LT_EOF case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. if test "X${COLLECT_NAMES+set}" != Xset; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac _LT_PROG_LTMAIN # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) _LT_PROG_REPLACE_SHELLFNS mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ], [cat <<_LT_EOF >> "$ofile" dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded dnl in a comment (ie after a #). 
# ### BEGIN LIBTOOL TAG CONFIG: $1 _LT_LIBTOOL_TAG_VARS(_LT_TAG) # ### END LIBTOOL TAG CONFIG: $1 _LT_EOF ])dnl /m4_if ], [m4_if([$1], [], [ PACKAGE='$PACKAGE' VERSION='$VERSION' TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile'], []) ])dnl /_LT_CONFIG_SAVE_COMMANDS ])# _LT_CONFIG # LT_SUPPORTED_TAG(TAG) # --------------------- # Trace this macro to discover what tags are supported by the libtool # --tag option, using: # autoconf --trace 'LT_SUPPORTED_TAG:$1' AC_DEFUN([LT_SUPPORTED_TAG], []) # C support is built-in for now m4_define([_LT_LANG_C_enabled], []) m4_define([_LT_TAGS], []) # LT_LANG(LANG) # ------------- # Enable libtool support for the given language if not already enabled. AC_DEFUN([LT_LANG], [AC_BEFORE([$0], [LT_OUTPUT])dnl m4_case([$1], [C], [_LT_LANG(C)], [C++], [_LT_LANG(CXX)], [Go], [_LT_LANG(GO)], [Java], [_LT_LANG(GCJ)], [Fortran 77], [_LT_LANG(F77)], [Fortran], [_LT_LANG(FC)], [Windows Resource], [_LT_LANG(RC)], [m4_ifdef([_LT_LANG_]$1[_CONFIG], [_LT_LANG($1)], [m4_fatal([$0: unsupported language: "$1"])])])dnl ])# LT_LANG # _LT_LANG(LANGNAME) # ------------------ m4_defun([_LT_LANG], [m4_ifdef([_LT_LANG_]$1[_enabled], [], [LT_SUPPORTED_TAG([$1])dnl m4_append([_LT_TAGS], [$1 ])dnl m4_define([_LT_LANG_]$1[_enabled], [])dnl _LT_LANG_$1_CONFIG($1)])dnl ])# _LT_LANG m4_ifndef([AC_PROG_GO], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_GO. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_GO], [AC_LANG_PUSH(Go)dnl AC_ARG_VAR([GOC], [Go compiler command])dnl AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl _AC_ARG_VAR_LDFLAGS()dnl AC_CHECK_TOOL(GOC, gccgo) if test -z "$GOC"; then if test -n "$ac_tool_prefix"; then AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo]) fi fi if test -z "$GOC"; then AC_CHECK_PROG(GOC, gccgo, gccgo, false) fi ])#m4_defun ])#m4_ifndef # _LT_LANG_DEFAULT_CONFIG # ----------------------- m4_defun([_LT_LANG_DEFAULT_CONFIG], [AC_PROVIDE_IFELSE([AC_PROG_CXX], [LT_LANG(CXX)], [m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])]) AC_PROVIDE_IFELSE([AC_PROG_F77], [LT_LANG(F77)], [m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])]) AC_PROVIDE_IFELSE([AC_PROG_FC], [LT_LANG(FC)], [m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])]) dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal dnl pulling things in needlessly. 
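dnl In other words: splitting the name as [A][M_PROG_GCJ] still produces the
dnl text AM_PROG_GCJ once m4 concatenates the two quoted pieces, but
dnl aclocal's textual scan for macro names no longer matches it, so the
dnl automake GCJ machinery is not copied into aclocal.m4 unless something
dnl else really uses it.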
AC_PROVIDE_IFELSE([AC_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([A][M_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([LT_PROG_GCJ], [LT_LANG(GCJ)], [m4_ifdef([AC_PROG_GCJ], [m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([A][M_PROG_GCJ], [m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([LT_PROG_GCJ], [m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])]) AC_PROVIDE_IFELSE([AC_PROG_GO], [LT_LANG(GO)], [m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])]) AC_PROVIDE_IFELSE([LT_PROG_RC], [LT_LANG(RC)], [m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])]) ])# _LT_LANG_DEFAULT_CONFIG # Obsolete macros: AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)]) AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)]) AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)]) AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)]) AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_CXX], []) dnl AC_DEFUN([AC_LIBTOOL_F77], []) dnl AC_DEFUN([AC_LIBTOOL_FC], []) dnl AC_DEFUN([AC_LIBTOOL_GCJ], []) dnl AC_DEFUN([AC_LIBTOOL_RC], []) # _LT_TAG_COMPILER # ---------------- m4_defun([_LT_TAG_COMPILER], [AC_REQUIRE([AC_PROG_CC])dnl _LT_DECL([LTCC], [CC], [1], [A C compiler])dnl _LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl _LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl _LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC ])# _LT_TAG_COMPILER # _LT_COMPILER_BOILERPLATE # ------------------------ # Check for compiler boilerplate output or warnings with # the simple compiler test code. m4_defun([_LT_COMPILER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ])# _LT_COMPILER_BOILERPLATE # _LT_LINKER_BOILERPLATE # ---------------------- # Check for linker boilerplate output or warnings with # the simple link test code. 
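# Together with _LT_COMPILER_BOILERPLATE above, the idea is to compile and
# link the trivial test program once up front and record whatever the tools
# print unconditionally, roughly:
#   eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' > conftest.err
#   _lt_linker_boilerplate=`cat conftest.err`
# Later option probes (e.g. _LT_LINKER_OPTION) diff their own stderr against
# this baseline, so chatter the linker always emits is not mistaken for a
# rejected flag.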
m4_defun([_LT_LINKER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ])# _LT_LINKER_BOILERPLATE # _LT_REQUIRED_DARWIN_CHECKS # ------------------------- m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[ case $host_os in rhapsody* | darwin*) AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:]) AC_CHECK_TOOL([NMEDIT], [nmedit], [:]) AC_CHECK_TOOL([LIPO], [lipo], [:]) AC_CHECK_TOOL([OTOOL], [otool], [:]) AC_CHECK_TOOL([OTOOL64], [otool64], [:]) _LT_DECL([], [DSYMUTIL], [1], [Tool to manipulate archived DWARF debug symbol files on Mac OS X]) _LT_DECL([], [NMEDIT], [1], [Tool to change global to local symbols on Mac OS X]) _LT_DECL([], [LIPO], [1], [Tool to manipulate fat objects and archives on Mac OS X]) _LT_DECL([], [OTOOL], [1], [ldd/readelf like tool for Mach-O binaries on Mac OS X]) _LT_DECL([], [OTOOL64], [1], [ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4]) AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod], [lt_cv_apple_cc_single_mod=no if test -z "${LT_MULTI_MODULE}"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. elif test -f libconftest.dylib && test $_lt_result -eq 0; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -rf libconftest.dylib* rm -f conftest.* fi]) AC_CACHE_CHECK([for -exported_symbols_list linker flag], [lt_cv_ld_exported_symbols_list], [lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [lt_cv_ld_exported_symbols_list=yes], [lt_cv_ld_exported_symbols_list=no]) LDFLAGS="$save_LDFLAGS" ]) AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load], [lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? 
if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then lt_cv_ld_force_load=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM ]) case $host_os in rhapsody* | darwin1.[[012]]) _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;; darwin1.*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[[91]]*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; 10.[[012]]*) _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; 10.*) _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; esac ;; esac if test "$lt_cv_apple_cc_single_mod" = "yes"; then _lt_dar_single_mod='$single_module' fi if test "$lt_cv_ld_exported_symbols_list" = "yes"; then _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym' else _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}' fi if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac ]) # _LT_DARWIN_LINKER_FEATURES([TAG]) # --------------------------------- # Checks for linker and compiler features on darwin m4_defun([_LT_DARWIN_LINKER_FEATURES], [ m4_require([_LT_REQUIRED_DARWIN_CHECKS]) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported if test "$lt_cv_ld_force_load" = "yes"; then _LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes], [FC], [_LT_TAGVAR(compiler_needs_object, $1)=yes]) else _LT_TAGVAR(whole_archive_flag_spec, $1)='' fi _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)="$_lt_dar_allow_undefined" case $cc_basename in ifort*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac if test "$_lt_dar_can_shared" = "yes"; then output_verbose_link_cmd=func_echo_all _LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" _LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}" _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" _LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" m4_if([$1], [CXX], [ if test "$lt_cv_apple_cc_single_mod" != "yes"; then _LT_TAGVAR(archive_cmds, 
$1)="\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dsymutil}" _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dar_export_syms}${_lt_dsymutil}" fi ],[]) else _LT_TAGVAR(ld_shlibs, $1)=no fi ]) # _LT_SYS_MODULE_PATH_AIX([TAGNAME]) # ---------------------------------- # Links a minimal program and checks the executable # for the system default hardcoded library path. In most cases, # this is /usr/lib:/lib, but when the MPI compilers are used # the location of the communication and MPI libs are included too. # If we don't find anything, use the default library path according # to the aix ld manual. # Store the results from the different compilers for each TAGNAME. # Allow to override them for all tags through lt_cv_aix_libpath. m4_defun([_LT_SYS_MODULE_PATH_AIX], [m4_require([_LT_DECL_SED])dnl if test "${lt_cv_aix_libpath+set}" = set; then aix_libpath=$lt_cv_aix_libpath else AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])], [AC_LINK_IFELSE([AC_LANG_PROGRAM],[ lt_aix_libpath_sed='[ /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }]' _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi],[]) if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib" fi ]) aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1]) fi ])# _LT_SYS_MODULE_PATH_AIX # _LT_SHELL_INIT(ARG) # ------------------- m4_define([_LT_SHELL_INIT], [m4_divert_text([M4SH-INIT], [$1 ])])# _LT_SHELL_INIT # _LT_PROG_ECHO_BACKSLASH # ----------------------- # Find how we can fake an echo command that does not interpret backslash. # In particular, with Autoconf 2.60 or later we add some code to the start # of the generated configure script which will find a shell with a builtin # printf (which we can use as an echo command). m4_defun([_LT_PROG_ECHO_BACKSLASH], [ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO AC_MSG_CHECKING([how to print strings]) # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $[]1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. 
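# For example, once the probing above has settled on a backslash-safe $ECHO,
# a call such as
#   func_echo_all "-L/opt/lib" "-lfoo"
# should print `-L/opt/lib -lfoo' on a single line without interpreting any
# backslashes contained in the arguments.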
func_echo_all () { $ECHO "$*" } case "$ECHO" in printf*) AC_MSG_RESULT([printf]) ;; print*) AC_MSG_RESULT([print -r]) ;; *) AC_MSG_RESULT([cat]) ;; esac m4_ifdef([_AS_DETECT_SUGGESTED], [_AS_DETECT_SUGGESTED([ test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test "X`printf %s $ECHO`" = "X$ECHO" \ || test "X`print -r -- $ECHO`" = "X$ECHO" )])]) _LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts]) _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes]) ])# _LT_PROG_ECHO_BACKSLASH # _LT_WITH_SYSROOT # ---------------- AC_DEFUN([_LT_WITH_SYSROOT], [AC_MSG_CHECKING([for sysroot]) AC_ARG_WITH([sysroot], [ --with-sysroot[=DIR] Search for dependent libraries within DIR (or the compiler's sysroot if not specified).], [], [with_sysroot=no]) dnl lt_sysroot will always be passed unquoted. We quote it here dnl in case the user passed a directory name. lt_sysroot= case ${with_sysroot} in #( yes) if test "$GCC" = yes; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) AC_MSG_RESULT([${with_sysroot}]) AC_MSG_ERROR([The sysroot must be an absolute path.]) ;; esac AC_MSG_RESULT([${lt_sysroot:-no}]) _LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl [dependent libraries, and in which our libraries should be installed.])]) # _LT_ENABLE_LOCK # --------------- m4_defun([_LT_ENABLE_LOCK], [AC_ARG_ENABLE([libtool-lock], [AS_HELP_STRING([--disable-libtool-lock], [avoid locking (might break parallel builds)])]) test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) HPUX_IA64_MODE="32" ;; *ELF-64*) HPUX_IA64_MODE="64" ;; esac fi rm -rf conftest* ;; *-*-irix6*) # Find out which ABI we are using. echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then if test "$lt_cv_prog_gnu_ld" = yes; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) # Find out which ABI we are using. 
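# The probe below compiles a one-line C file and inspects the object with
# /usr/bin/file; on a typical x86_64 GNU/Linux host that reports something
# like "ELF 64-bit LSB relocatable, x86-64", which is why the case patterns
# only key on the *32-bit* and *64-bit* substrings before picking an
# "-m elf_..." emulation for $LD.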
echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_i386" ;; ppc64-*linux*|powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; ppc*-*linux*|powerpc*-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. SAVE_CFLAGS="$CFLAGS" CFLAGS="$CFLAGS -belf" AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf, [AC_LANG_PUSH(C) AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no]) AC_LANG_POP]) if test x"$lt_cv_cc_needs_belf" != x"yes"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf CFLAGS="$SAVE_CFLAGS" fi ;; *-*solaris*) # Find out which ABI we are using. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in i?86-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then LD="${LD-ld}_sol2" fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac need_locks="$enable_libtool_lock" ])# _LT_ENABLE_LOCK # _LT_PROG_AR # ----------- m4_defun([_LT_PROG_AR], [AC_CHECK_TOOLS(AR, [ar], false) : ${AR=ar} : ${AR_FLAGS=cru} _LT_DECL([], [AR], [1], [The archiver]) _LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive]) AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file], [lt_cv_ar_at_file=no AC_COMPILE_IFELSE([AC_LANG_PROGRAM], [echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD' AC_TRY_EVAL([lt_ar_try]) if test "$ac_status" -eq 0; then # Ensure the archiver fails upon bogus file names. rm -f conftest.$ac_objext libconftest.a AC_TRY_EVAL([lt_ar_try]) if test "$ac_status" -ne 0; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a ]) ]) if test "x$lt_cv_ar_at_file" = xno; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi _LT_DECL([], [archiver_list_spec], [1], [How to feed a file listing to the archiver]) ])# _LT_PROG_AR # _LT_CMD_OLD_ARCHIVE # ------------------- m4_defun([_LT_CMD_OLD_ARCHIVE], [_LT_PROG_AR AC_CHECK_TOOL(STRIP, strip, :) test -z "$STRIP" && STRIP=: _LT_DECL([], [STRIP], [1], [A symbol stripping program]) AC_CHECK_TOOL(RANLIB, ranlib, :) test -z "$RANLIB" && RANLIB=: _LT_DECL([], [RANLIB], [1], [Commands used to install an old-style archive]) # Determine commands to create old-style static archives. 
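# The command defined just below is the classic static-archive recipe; for a
# hypothetical libfoo it expands to something like
#   ar cru libfoo.a foo1.o foo2.o
#   ranlib libfoo.a
# ($AR_FLAGS defaults to "cru" in _LT_PROG_AR above, and the $RANLIB step is
# appended a few lines further down whenever a usable ranlib was found.)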
old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac _LT_DECL([], [old_postinstall_cmds], [2]) _LT_DECL([], [old_postuninstall_cmds], [2]) _LT_TAGDECL([], [old_archive_cmds], [2], [Commands used to build an old-style archive]) _LT_DECL([], [lock_old_archive_extraction], [0], [Whether to use a lock for old archive extraction]) ])# _LT_CMD_OLD_ARCHIVE # _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------------------- # Check whether the given compiler option works AC_DEFUN([_LT_COMPILER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl AC_CACHE_CHECK([$1], [$2], [$2=no m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4]) echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="$3" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi fi $RM conftest* ]) if test x"[$]$2" = xyes; then m4_if([$5], , :, [$5]) else m4_if([$6], , :, [$6]) fi ])# _LT_COMPILER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], []) # _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------- # Check whether the given linker option works AC_DEFUN([_LT_LINKER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl AC_CACHE_CHECK([$1], [$2], [$2=no save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS $3" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. 
cat conftest.err 1>&AS_MESSAGE_LOG_FD $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi else $2=yes fi fi $RM -r conftest* LDFLAGS="$save_LDFLAGS" ]) if test x"[$]$2" = xyes; then m4_if([$4], , :, [$4]) else m4_if([$5], , :, [$5]) fi ])# _LT_LINKER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], []) # LT_CMD_MAX_LEN #--------------- AC_DEFUN([LT_CMD_MAX_LEN], [AC_REQUIRE([AC_CANONICAL_HOST])dnl # find the maximum length of command line arguments AC_MSG_CHECKING([the maximum length of command line arguments]) AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl i=0 teststring="ABCD" case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. 
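# (By contrast, the generic "*)" branch further down simply asks the system,
#  e.g. `getconf ARG_MAX`, which is often 2097152 on a modern Linux box, and
#  then keeps only three quarters of the reported value as a safety margin.)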
lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` if test -n "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. # a 1K string should be a reasonable start. for i in 1 2 3 4 5 6 7 8 ; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && test $i != 17 # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac ]) if test -n $lt_cv_sys_max_cmd_len ; then AC_MSG_RESULT($lt_cv_sys_max_cmd_len) else AC_MSG_RESULT(none) fi max_cmd_len=$lt_cv_sys_max_cmd_len _LT_DECL([], [max_cmd_len], [0], [What is the maximum length of a command?]) ])# LT_CMD_MAX_LEN # Old name: AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], []) # _LT_HEADER_DLFCN # ---------------- m4_defun([_LT_HEADER_DLFCN], [AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl ])# _LT_HEADER_DLFCN # _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE, # ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING) # ---------------------------------------------------------------- m4_defun([_LT_TRY_DLOPEN_SELF], [m4_require([_LT_HEADER_DLFCN])dnl if test "$cross_compiling" = yes; then : [$4] else lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF [#line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif /* When -fvisbility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. 
*/ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; }] _LT_EOF if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then (./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null lt_status=$? case x$lt_status in x$lt_dlno_uscore) $1 ;; x$lt_dlneed_uscore) $2 ;; x$lt_dlunknown|x*) $3 ;; esac else : # compilation failed $3 fi fi rm -fr conftest* ])# _LT_TRY_DLOPEN_SELF # LT_SYS_DLOPEN_SELF # ------------------ AC_DEFUN([LT_SYS_DLOPEN_SELF], [m4_require([_LT_HEADER_DLFCN])dnl if test "x$enable_dlopen" != xyes; then enable_dlopen=unknown enable_dlopen_self=unknown enable_dlopen_self_static=unknown else lt_cv_dlopen=no lt_cv_dlopen_libs= case $host_os in beos*) lt_cv_dlopen="load_add_on" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ;; mingw* | pw32* | cegcc*) lt_cv_dlopen="LoadLibrary" lt_cv_dlopen_libs= ;; cygwin*) lt_cv_dlopen="dlopen" lt_cv_dlopen_libs= ;; darwin*) # if libdl is installed we need to link against it AC_CHECK_LIB([dl], [dlopen], [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],[ lt_cv_dlopen="dyld" lt_cv_dlopen_libs= lt_cv_dlopen_self=yes ]) ;; *) AC_CHECK_FUNC([shl_load], [lt_cv_dlopen="shl_load"], [AC_CHECK_LIB([dld], [shl_load], [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld"], [AC_CHECK_FUNC([dlopen], [lt_cv_dlopen="dlopen"], [AC_CHECK_LIB([dl], [dlopen], [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"], [AC_CHECK_LIB([svld], [dlopen], [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"], [AC_CHECK_LIB([dld], [dld_link], [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld"]) ]) ]) ]) ]) ]) ;; esac if test "x$lt_cv_dlopen" != xno; then enable_dlopen=yes else enable_dlopen=no fi case $lt_cv_dlopen in dlopen) save_CPPFLAGS="$CPPFLAGS" test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H" save_LDFLAGS="$LDFLAGS" wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\" save_LIBS="$LIBS" LIBS="$lt_cv_dlopen_libs $LIBS" AC_CACHE_CHECK([whether a program can dlopen itself], lt_cv_dlopen_self, [dnl _LT_TRY_DLOPEN_SELF( lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes, lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross) ]) if test "x$lt_cv_dlopen_self" = xyes; then wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\" AC_CACHE_CHECK([whether a statically linked program can dlopen itself], lt_cv_dlopen_self_static, [dnl _LT_TRY_DLOPEN_SELF( lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross) ]) fi CPPFLAGS="$save_CPPFLAGS" LDFLAGS="$save_LDFLAGS" LIBS="$save_LIBS" ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi _LT_DECL([dlopen_support], [enable_dlopen], [0], [Whether dlopen is supported]) _LT_DECL([dlopen_self], [enable_dlopen_self], [0], [Whether dlopen of programs is supported]) _LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0], [Whether dlopen of statically linked programs is supported]) ])# 
LT_SYS_DLOPEN_SELF # Old name: AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], []) # _LT_COMPILER_C_O([TAGNAME]) # --------------------------- # Check to see if options -c and -o are simultaneously supported by compiler. # This macro does not hard code the compiler like AC_PROG_CC_C_O. m4_defun([_LT_COMPILER_C_O], [m4_require([_LT_DECL_SED])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_TAG_COMPILER])dnl AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes fi fi chmod u+w . 2>&AS_MESSAGE_LOG_FD $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. 
$RM -r conftest $RM conftest* ]) _LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1], [Does compiler simultaneously support -c and -o options?]) ])# _LT_COMPILER_C_O # _LT_COMPILER_FILE_LOCKS([TAGNAME]) # ---------------------------------- # Check to see if we can do hard links to lock some files if needed m4_defun([_LT_COMPILER_FILE_LOCKS], [m4_require([_LT_ENABLE_LOCK])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_COMPILER_C_O([$1]) hard_links="nottested" if test "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" = no && test "$need_locks" != no; then # do not overwrite the value of need_locks provided by the user AC_MSG_CHECKING([if we can lock with hard links]) hard_links=yes $RM conftest* ln conftest.a conftest.b 2>/dev/null && hard_links=no touch conftest.a ln conftest.a conftest.b 2>&5 || hard_links=no ln conftest.a conftest.b 2>/dev/null && hard_links=no AC_MSG_RESULT([$hard_links]) if test "$hard_links" = no; then AC_MSG_WARN([`$CC' does not support `-c -o', so `make -j' may be unsafe]) need_locks=warn fi else need_locks=no fi _LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?]) ])# _LT_COMPILER_FILE_LOCKS # _LT_CHECK_OBJDIR # ---------------- m4_defun([_LT_CHECK_OBJDIR], [AC_CACHE_CHECK([for objdir], [lt_cv_objdir], [rm -f .libs 2>/dev/null mkdir .libs 2>/dev/null if test -d .libs; then lt_cv_objdir=.libs else # MS-DOS does not allow filenames that begin with a dot. lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null]) objdir=$lt_cv_objdir _LT_DECL([], [objdir], [0], [The name of the directory that contains temporary libtool files])dnl m4_pattern_allow([LT_OBJDIR])dnl AC_DEFINE_UNQUOTED(LT_OBJDIR, "$lt_cv_objdir/", [Define to the sub-directory in which libtool stores uninstalled libraries.]) ])# _LT_CHECK_OBJDIR # _LT_LINKER_HARDCODE_LIBPATH([TAGNAME]) # -------------------------------------- # Check hardcoding attributes. m4_defun([_LT_LINKER_HARDCODE_LIBPATH], [AC_MSG_CHECKING([how to hardcode library paths into programs]) _LT_TAGVAR(hardcode_action, $1)= if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" || test -n "$_LT_TAGVAR(runpath_var, $1)" || test "X$_LT_TAGVAR(hardcode_automatic, $1)" = "Xyes" ; then # We can hardcode non-existent directories. if test "$_LT_TAGVAR(hardcode_direct, $1)" != no && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one ## test "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" != no && test "$_LT_TAGVAR(hardcode_minus_L, $1)" != no; then # Linking always hardcodes the temporary library directory. _LT_TAGVAR(hardcode_action, $1)=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. _LT_TAGVAR(hardcode_action, $1)=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. 
_LT_TAGVAR(hardcode_action, $1)=unsupported fi AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)]) if test "$_LT_TAGVAR(hardcode_action, $1)" = relink || test "$_LT_TAGVAR(inherit_rpath, $1)" = yes; then # Fast installation is not supported enable_fast_install=no elif test "$shlibpath_overrides_runpath" = yes || test "$enable_shared" = no; then # Fast installation is not necessary enable_fast_install=needless fi _LT_TAGDECL([], [hardcode_action], [0], [How to hardcode a shared library path into an executable]) ])# _LT_LINKER_HARDCODE_LIBPATH # _LT_CMD_STRIPLIB # ---------------- m4_defun([_LT_CMD_STRIPLIB], [m4_require([_LT_DECL_EGREP]) striplib= old_striplib= AC_MSG_CHECKING([whether stripping libraries is possible]) if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then test -z "$old_striplib" && old_striplib="$STRIP --strip-debug" test -z "$striplib" && striplib="$STRIP --strip-unneeded" AC_MSG_RESULT([yes]) else # FIXME - insert some real tests, host_os isn't really good enough case $host_os in darwin*) if test -n "$STRIP" ; then striplib="$STRIP -x" old_striplib="$STRIP -S" AC_MSG_RESULT([yes]) else AC_MSG_RESULT([no]) fi ;; *) AC_MSG_RESULT([no]) ;; esac fi _LT_DECL([], [old_striplib], [1], [Commands to strip libraries]) _LT_DECL([], [striplib], [1]) ])# _LT_CMD_STRIPLIB # _LT_SYS_DYNAMIC_LINKER([TAG]) # ----------------------------- # PORTME Fill in your ld.so characteristics m4_defun([_LT_SYS_DYNAMIC_LINKER], [AC_REQUIRE([AC_CANONICAL_HOST])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_OBJDUMP])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl AC_MSG_CHECKING([dynamic linker characteristics]) m4_if([$1], [], [ if test "$GCC" = yes; then case $host_os in darwin*) lt_awk_arg="/^libraries:/,/LR/" ;; *) lt_awk_arg="/^libraries:/" ;; esac case $host_os in mingw* | cegcc*) lt_sed_strip_eq="s,=\([[A-Za-z]]:\),\1,g" ;; *) lt_sed_strip_eq="s,=/,/,g" ;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it # and add multilib dir if necessary. 
lt_tmp_lt_search_path_spec= lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` for lt_sys_path in $lt_search_path_spec; do if test -d "$lt_sys_path/$lt_multi_os_dir"; then lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" else test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' BEGIN {RS=" "; FS="/|\n";} { lt_foo=""; lt_count=0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { lt_foo="/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[[lt_foo]]++; } if (lt_freq[[lt_foo]] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. case $host_os in mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\ $SED 's,/\([[A-Za-z]]:\),\1,g'` ;; esac sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP` else sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" fi]) library_names_spec= libname_spec='lib$name' soname_spec= shrext_cmds=".so" postinstall_cmds= postuninstall_cmds= finish_cmds= finish_eval= shlibpath_var= shlibpath_overrides_runpath=unknown version_type=none dynamic_linker="$host_os ld.so" sys_lib_dlsearch_path_spec="/lib /usr/lib" need_lib_prefix=unknown hardcode_into_libs=no # when you set need_version to no, make sure it does not cause -set_version # flags to be left without arguments need_version=unknown case $host_os in aix3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a' shlibpath_var=LIBPATH # AIX 3 has no versioning support, so we append a major version to the name. soname_spec='${libname}${release}${shared_ext}$major' ;; aix[[4-9]]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no hardcode_into_libs=yes if test "$host_cpu" = ia64; then # AIX 5 supports IA64 library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH else # With GCC up to 2.95.x, collect2 would create an import file # for dependence libraries. The import file would start with # the line `#! .'. This would cause the generated library to # depend on `.', always an invalid library. This was fixed in # development snapshots of GCC prior to 3.0. case $host_os in aix4 | aix4.[[01]] | aix4.[[01]].*) if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)' echo ' yes ' echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then : else can_build_shared=no fi ;; esac # AIX (on Power*) has no versioning support, so currently we can not hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. if test "$aix_use_runtimelinking" = yes; then # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' else # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. 
library_names_spec='${libname}${release}.a $libname.a' soname_spec='${libname}${release}${shared_ext}$major' fi shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) library_names_spec='${libname}${shared_ext}' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[[45]]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows shrext_cmds=".dll" need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"]) ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' library_names_spec='${libname}.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. 
lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) sys_lib_search_path_spec="$LIB" if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds postinstall_cmds='base_file=`basename \${file}`~ dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' soname_spec='${libname}${release}${major}$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"]) sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. 
if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[[23]].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' need_version=no need_lib_prefix=no ;; freebsd-*) library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[[01]]* | freebsdelf3.[[01]]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \ freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LIBRARY_PATH shlibpath_overrides_runpath=yes sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' if test "X$HPUX_IA64_MODE" = X32; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" fi sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[[3-9]]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) if test "$lt_cv_prog_gnu_ld" = yes; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath], [lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \ LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\"" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null], [lt_cv_shlibpath_overrides_runpath=yes])]) LDFLAGS=$save_LDFLAGS libdir=$save_libdir ]) shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes # Append ld.so.conf contents to the search path if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. 
dynamic_linker='GNU/Linux ld.so' ;; netbsdelf*-gnu) version_type=linux need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='NetBSD ld.elf_so' ;; netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; openbsd*) version_type=sunos sys_lib_dlsearch_path_spec="/usr/lib" need_lib_prefix=no # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. 
case $host_os in openbsd3.3 | openbsd3.3.*) need_version=yes ;; *) need_version=no ;; esac library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then case $host_os in openbsd2.[[89]] | openbsd2.[[89]].*) shlibpath_overrides_runpath=no ;; *) shlibpath_overrides_runpath=yes ;; esac else shlibpath_overrides_runpath=yes fi ;; os2*) libname_spec='$name' shrext_cmds=".dll" need_lib_prefix=no library_names_spec='$libname${shared_ext} $libname.a' dynamic_linker='OS/2 ld.exe' shlibpath_var=LIBPATH ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no soname_spec='${libname}${release}${shared_ext}$major' library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes if test "$with_gnu_ld" = yes; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) if test -d /usr/nec ;then version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' soname_spec='$libname${shared_ext}.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) version_type=freebsd-elf need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes if test "$with_gnu_ld" = yes; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac fi 
sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' soname_spec='${libname}${release}${shared_ext}$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac AC_MSG_RESULT([$dynamic_linker]) test "$dynamic_linker" = no && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" if test "$GCC" = yes; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" fi if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" fi _LT_DECL([], [variables_saved_for_relink], [1], [Variables whose values should be saved in libtool wrapper scripts and restored at link time]) _LT_DECL([], [need_lib_prefix], [0], [Do we need the "lib" prefix for modules?]) _LT_DECL([], [need_version], [0], [Do we need a version for libraries?]) _LT_DECL([], [version_type], [0], [Library versioning type]) _LT_DECL([], [runpath_var], [0], [Shared library runtime path variable]) _LT_DECL([], [shlibpath_var], [0],[Shared library path variable]) _LT_DECL([], [shlibpath_overrides_runpath], [0], [Is shlibpath searched before the hard-coded library search path?]) _LT_DECL([], [libname_spec], [1], [Format of library name prefix]) _LT_DECL([], [library_names_spec], [1], [[List of archive names. First name is the real one, the rest are links. The last name is the one that the linker finds with -lNAME]]) _LT_DECL([], [soname_spec], [1], [[The coded name of the library, if different from the real name]]) _LT_DECL([], [install_override_mode], [1], [Permission mode override for installation of shared libraries]) _LT_DECL([], [postinstall_cmds], [2], [Command to use after installation of a shared archive]) _LT_DECL([], [postuninstall_cmds], [2], [Command to use after uninstallation of a shared archive]) _LT_DECL([], [finish_cmds], [2], [Commands used to finish a libtool library installation in a directory]) _LT_DECL([], [finish_eval], [1], [[As "finish_cmds", except a single script fragment to be evaled but not shown]]) _LT_DECL([], [hardcode_into_libs], [0], [Whether we should hardcode library paths into libraries]) _LT_DECL([], [sys_lib_search_path_spec], [2], [Compile-time system search path for libraries]) _LT_DECL([], [sys_lib_dlsearch_path_spec], [2], [Run-time system search path for libraries]) ])# _LT_SYS_DYNAMIC_LINKER # _LT_PATH_TOOL_PREFIX(TOOL) # -------------------------- # find a file program which can recognize shared library AC_DEFUN([_LT_PATH_TOOL_PREFIX], [m4_require([_LT_DECL_EGREP])dnl AC_MSG_CHECKING([for $1]) AC_CACHE_VAL(lt_cv_path_MAGIC_CMD, [case $MAGIC_CMD in [[\\/*] | ?:[\\/]*]) lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. 
;; *) lt_save_MAGIC_CMD="$MAGIC_CMD" lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR dnl $ac_dummy forces splitting on constant user-supplied paths. dnl POSIX.2 word splitting is done only on the output of word expansions, dnl not every word. This closes a longstanding sh security hole. ac_dummy="m4_if([$2], , $PATH, [$2])" for ac_dir in $ac_dummy; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f $ac_dir/$1; then lt_cv_path_MAGIC_CMD="$ac_dir/$1" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done IFS="$lt_save_ifs" MAGIC_CMD="$lt_save_MAGIC_CMD" ;; esac]) MAGIC_CMD="$lt_cv_path_MAGIC_CMD" if test -n "$MAGIC_CMD"; then AC_MSG_RESULT($MAGIC_CMD) else AC_MSG_RESULT(no) fi _LT_DECL([], [MAGIC_CMD], [0], [Used to examine libraries when file_magic_cmd begins with "file"])dnl ])# _LT_PATH_TOOL_PREFIX # Old name: AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], []) # _LT_PATH_MAGIC # -------------- # find a file program which can recognize a shared library m4_defun([_LT_PATH_MAGIC], [_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH) if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then _LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH) else MAGIC_CMD=: fi fi ])# _LT_PATH_MAGIC # LT_PATH_LD # ---------- # find the pathname to the GNU or non-GNU linker AC_DEFUN([LT_PATH_LD], [AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PROG_ECHO_BACKSLASH])dnl AC_ARG_WITH([gnu-ld], [AS_HELP_STRING([--with-gnu-ld], [assume the C compiler uses GNU ld @<:@default=no@:>@])], [test "$withval" = no || with_gnu_ld=yes], [with_gnu_ld=no])dnl ac_prog=ld if test "$GCC" = yes; then # Check if gcc -print-prog-name=ld gives a path. AC_MSG_CHECKING([for ld used by $CC]) case $host in *-*-mingw*) # gcc leaves a trailing carriage return which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [[\\/]]* | ?:[[\\/]]*) re_direlt='/[[^/]][[^/]]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done test -z "$LD" && LD="$ac_prog" ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. 
with_gnu_ld=unknown ;; esac elif test "$with_gnu_ld" = yes; then AC_MSG_CHECKING([for GNU ld]) else AC_MSG_CHECKING([for non-GNU ld]) fi AC_CACHE_VAL(lt_cv_path_LD, [if test -z "$LD"; then lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then lt_cv_path_LD="$ac_dir/$ac_prog" # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. case `"$lt_cv_path_LD" -v 2>&1 &1 /dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; gnu*) lt_cv_deplibs_check_method=pass_all ;; haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]'] lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[[3-9]]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
linux* | k*bsd*-gnu | kopensolaris*-gnu) lt_cv_deplibs_check_method=pass_all ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; openbsd*) if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; esac ]) file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown _LT_DECL([], [deplibs_check_method], [1], [Method to check whether dependent libraries are shared objects]) _LT_DECL([], [file_magic_cmd], [1], [Command to use when deplibs_check_method = "file_magic"]) _LT_DECL([], [file_magic_glob], [1], [How to find potential files when deplibs_check_method = "file_magic"]) _LT_DECL([], [want_nocaseglob], [1], [Find potential files using nocaseglob when deplibs_check_method = "file_magic"]) ])# _LT_CHECK_MAGIC_METHOD # LT_PATH_NM # ---------- # find the pathname to a BSD- or MS-compatible name lister AC_DEFUN([LT_PATH_NM], [AC_REQUIRE([AC_PROG_CC])dnl AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM, [if test -n "$NM"; then # Let the user override the test. lt_cv_path_NM="$NM" else lt_nm_to_check="${ac_tool_prefix}nm" if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do IFS="$lt_save_ifs" test -z "$ac_dir" && ac_dir=. 
tmp_nm="$ac_dir/$lt_tmp_nm" if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then # Check to see if the nm accepts a BSD-compat flag. # Adding the `sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in */dev/null* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" break ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" break ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done IFS="$lt_save_ifs" done : ${lt_cv_path_NM=no} fi]) if test "$lt_cv_path_NM" != "no"; then NM="$lt_cv_path_NM" else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. else AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :) case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in *COFF*) DUMPBIN="$DUMPBIN -symbols" ;; *) DUMPBIN=: ;; esac fi AC_SUBST([DUMPBIN]) if test "$DUMPBIN" != ":"; then NM="$DUMPBIN" fi fi test -z "$NM" && NM=nm AC_SUBST([NM]) _LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface], [lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD) cat conftest.out >&AS_MESSAGE_LOG_FD if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest*]) ])# LT_PATH_NM # Old names: AU_ALIAS([AM_PROG_NM], [LT_PATH_NM]) AU_ALIAS([AC_PROG_NM], [LT_PATH_NM]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_PROG_NM], []) dnl AC_DEFUN([AC_PROG_NM], []) # _LT_CHECK_SHAREDLIB_FROM_LINKLIB # -------------------------------- # how to determine the name of the shared library # associated with a specific link library. 
# -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) m4_require([_LT_DECL_DLLTOOL]) AC_CACHE_CHECK([how to associate runtime and link libraries], lt_cv_sharedlib_from_linklib_cmd, [lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) # two different shell functions defined in ltmain.sh # decide which to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib lt_cv_sharedlib_from_linklib_cmd="$ECHO" ;; esac ]) sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO _LT_DECL([], [sharedlib_from_linklib_cmd], [1], [Command to associate shared and link libraries]) ])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB # _LT_PATH_MANIFEST_TOOL # ---------------------- # locate the manifest tool m4_defun([_LT_PATH_MANIFEST_TOOL], [AC_CHECK_TOOL(MANIFEST_TOOL, mt, :) test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool], [lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out cat conftest.err >&AS_MESSAGE_LOG_FD if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest*]) if test "x$lt_cv_path_mainfest_tool" != xyes; then MANIFEST_TOOL=: fi _LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl ])# _LT_PATH_MANIFEST_TOOL # LT_LIB_M # -------- # check for math library AC_DEFUN([LT_LIB_M], [AC_REQUIRE([AC_CANONICAL_HOST])dnl LIBM= case $host in *-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*) # These system don't have libm, or don't need it ;; *-ncr-sysv4.3*) AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw") AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm") ;; *) AC_CHECK_LIB(m, cos, LIBM="-lm") ;; esac AC_SUBST([LIBM]) ])# LT_LIB_M # Old name: AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_CHECK_LIBM], []) # _LT_COMPILER_NO_RTTI([TAGNAME]) # ------------------------------- m4_defun([_LT_COMPILER_NO_RTTI], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= if test "$GCC" = yes; then case $cc_basename in nvcc*) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;; *) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;; esac _LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions], lt_cv_prog_compiler_rtti_exceptions, [-fno-rtti -fno-exceptions], [], [_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"]) fi _LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1], [Compiler flag to turn off builtin functions]) ])# _LT_COMPILER_NO_RTTI # _LT_CMD_GLOBAL_SYMBOLS # ---------------------- m4_defun([_LT_CMD_GLOBAL_SYMBOLS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([LT_PATH_NM])dnl AC_REQUIRE([LT_PATH_LD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_TAG_COMPILER])dnl # Check for command to grab the raw symbol name followed by C symbol from 
nm. AC_MSG_CHECKING([command to parse $NM output from $compiler object]) AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [ # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[[BCDEGRST]]' # Regexp to match symbols that can be accessed directly from C. sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[[BCDT]]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[[ABCDGISTW]]' ;; hpux*) if test "$host_cpu" = ia64; then symcode='[[ABCDEGRST]]' fi ;; irix* | nonstopux*) symcode='[[BCDEGRST]]' ;; osf*) symcode='[[BCDEGQRST]]' ;; solaris*) symcode='[[BDRT]]' ;; sco3.2v5*) symcode='[[DT]]' ;; sysv4.2uw2*) symcode='[[DT]]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[[ABDT]]' ;; sysv4) symcode='[[DFNSTU]]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[[ABCDGIRSTW]]' ;; esac # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p'" lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"lib\2\", (void *) \&\2},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then # Fake it for dumpbin and say T for any non-static function # and D for any global variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK ['"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ " {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? \"T \" : \"D \"};"\ " {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ " s[1]~/^[@?]/{print s[1], s[1]; next};"\ " s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx]" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. 
pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if AC_TRY_EVAL(ac_compile); then # Now try to grab the symbols. nlist=conftest.nm if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ #if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) /* DATA imports from DLLs on WIN32 con't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. */ # define LT@&t@_DLSYM_CONST #elif defined(__osf__) /* This system does not cope well with relocations in const data. */ # define LT@&t@_DLSYM_CONST #else # define LT@&t@_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT@&t@_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[[]] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS LIBS="conftstm.$ac_objext" CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)" if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD fi else echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. if test "$pipe_works" = yes; then break else lt_cv_sys_global_symbol_pipe= fi done ]) if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then AC_MSG_RESULT(failed) else AC_MSG_RESULT(ok) fi # Response file support. 
if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then nm_file_list_spec='@' fi _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1], [Take the output of nm and produce a listing of raw symbols and C names]) _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1], [Transform the output of nm in a proper C declaration]) _LT_DECL([global_symbol_to_c_name_address], [lt_cv_sys_global_symbol_to_c_name_address], [1], [Transform the output of nm in a C name address pair]) _LT_DECL([global_symbol_to_c_name_address_lib_prefix], [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1], [Transform the output of nm in a C name address pair when lib prefix is needed]) _LT_DECL([], [nm_file_list_spec], [1], [Specify filename containing input files for $NM]) ]) # _LT_CMD_GLOBAL_SYMBOLS # _LT_COMPILER_PIC([TAGNAME]) # --------------------------- m4_defun([_LT_COMPILER_PIC], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_wl, $1)= _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)= m4_if([$1], [CXX], [ # C++ specific cases for pic, static, wl, etc. if test "$GXX" = yes; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac else case $host_os in aix[[4-9]]*) # All AIX code is PIC. 
if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; dgux*) case $cc_basename in ec++*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; ghcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; freebsd* | dragonfly*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' if test "$host_cpu" != ia64; then _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' fi ;; aCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in KCC*) # KAI C++ Compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; ecpc* ) # old Intel C++ for x86_64 which still supported -KPIC. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL 8.0, 9.0 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall' ;; *) ;; esac ;; netbsd* | netbsdelf*-gnu) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. 
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; cxx*) # Digital/Compaq C++ _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; lcc*) # Lucid _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ], [ if test "$GCC" = yes; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but # adding the `-m68020' flag to GCC prevents building anything better, # like `-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. ;; msdosdjgpp*) # Just because we use GCC doesn't mean we suddenly get shared libraries # on systems that don't support them. 
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no enable_shared=no ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac case $cc_basename in nvcc*) # Cuda Compiler Driver 2.2 _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker ' if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then _LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)" fi ;; esac else # PORTME Check for flag to pass linker flags through the system compiler. case $host_os in aix*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' if test "$host_cpu" = ia64; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; mingw* | cygwin* | pw32* | os2* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; hpux9* | hpux10* | hpux11*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but # not for PA HP-UX. case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac # Is there a better lt_prog_compiler_static that works with the bundled CC? _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' ;; irix5* | irix6* | nonstopux*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # PIC (with -KPIC) is the default. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in # old Intel for x86_64 which still supported -KPIC. ecc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # icc used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. icc* | ifort*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; # Lahey Fortran 8.1. lf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared' _LT_TAGVAR(lt_prog_compiler_static, $1)='--static' ;; nagfor*) # NAG Fortran compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group compilers (*not* the Pentium gcc compiler, # which looks to be a dead project) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; ccc*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All Alpha code is PIC. 
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xl* | bgxl* | bgf* | mpixl*) # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*) # Sun Fortran 8.3 passes all unrecognized flags to the linker _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='' ;; *Sun\ F* | *Sun*Fortran*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; *Sun\ C*) # Sun C 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' ;; *Intel*\ [[CF]]*Compiler*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; *Portland\ Group*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; esac ;; newsos6) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *nto* | *qnx*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # All OSF/1 code is PIC. _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; rdos*) _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; solaris*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' case $cc_basename in f77* | f90* | f95* | sunf77* | sunf90* | sunf95*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';; *) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';; esac ;; sunos4*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4 | sysv4.2uw2* | sysv4.3*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; sysv4*MP*) if test -d /usr/nec ;then _LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; unicos*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; uts4*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ]) case $host_os in # For platforms which do not support PIC, -DPIC is meaningless: *djgpp*) _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])" ;; esac AC_CACHE_CHECK([for $compiler option to produce PIC], [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)], [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)]) _LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1) # # Check to 
make sure the PIC flag actually works. # if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then _LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works], [_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)], [$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [], [case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in "" | " "*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;; esac], [_LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no]) fi _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1], [Additional compiler flags for building library objects]) _LT_TAGDECL([wl], [lt_prog_compiler_wl], [1], [How to pass a linker flag through the compiler]) # # Check to make sure the static flag actually works. # wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\" _LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works], _LT_TAGVAR(lt_cv_prog_compiler_static_works, $1), $lt_tmp_static_flag, [], [_LT_TAGVAR(lt_prog_compiler_static, $1)=]) _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1], [Compiler flag to prevent dynamic linking]) ])# _LT_COMPILER_PIC # _LT_LINKER_SHLIBS([TAGNAME]) # ---------------------------- # See if the linker supports building shared libraries. m4_defun([_LT_LINKER_SHLIBS], [AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl m4_require([_LT_PATH_MANIFEST_TOOL])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_TAG_COMPILER])dnl AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) m4_if([$1], [CXX], [ _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'] case $host_os in aix[[4-9]]*) # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global defined # symbols, whereas GNU nm marks them as "W". 
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi ;; pw32*) _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds" ;; cygwin* | mingw* | cegcc*) case $cc_basename in cl*) _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' ;; *) _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'] ;; esac ;; linux* | k*bsd*-gnu | gnu*) _LT_TAGVAR(link_all_deplibs, $1)=no ;; *) _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' ;; esac ], [ runpath_var= _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_cmds, $1)= _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(compiler_needs_object, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(old_archive_from_new_cmds, $1)= _LT_TAGVAR(old_archive_from_expsyms_cmds, $1)= _LT_TAGVAR(thread_safe_flag_spec, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= # include_expsyms should be a list of space-separated symbols to be *always* # included in the symbol list _LT_TAGVAR(include_expsyms, $1)= # exclude_expsyms can be an extended regexp of symbols to exclude # it will be wrapped by ` (' and `)$', so one must not match beginning or # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc', # as well as any symbol that contains `d'. _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'] # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out # platforms (ab)use it in PIC code, but their linkers get confused if # the symbol is explicitly referenced. Since portable code cannot # rely on this symbol name, it's probably fine to never include it in # preloaded symbol tables. # Exclude shared library initialization/finalization symbols. dnl Note also adjust exclude_expsyms for C++ above. extract_expsyms_cmds= case $host_os in cygwin* | mingw* | pw32* | cegcc*) # FIXME: the MSVC++ port hasn't been tested in a loooong time # When not using gcc, we currently assume that we are using # Microsoft Visual C++. 
if test "$GCC" != yes; then with_gnu_ld=no fi ;; interix*) # we just hope/assume this is gcc and not c89 (= MSVC++) with_gnu_ld=yes ;; openbsd*) with_gnu_ld=no ;; linux* | k*bsd*-gnu | gnu*) _LT_TAGVAR(link_all_deplibs, $1)=no ;; esac _LT_TAGVAR(ld_shlibs, $1)=yes # On some targets, GNU ld is compatible enough with the native linker # that we're better off using the native interface for both. lt_use_gnu_ld_interface=no if test "$with_gnu_ld" = yes; then case $host_os in aix*) # The AIX port of GNU ld has always aspired to compatibility # with the native linker. However, as the warning in the GNU ld # block says, versions before 2.19.5* couldn't really create working # shared libraries, regardless of the interface used. case `$LD -v 2>&1` in *\ \(GNU\ Binutils\)\ 2.19.5*) ;; *\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;; *\ \(GNU\ Binutils\)\ [[3-9]]*) ;; *) lt_use_gnu_ld_interface=yes ;; esac ;; *) lt_use_gnu_ld_interface=yes ;; esac fi if test "$lt_use_gnu_ld_interface" = yes; then # If archive_cmds runs LD, not CC, wlarc should be empty wlarc='${wl}' # Set some defaults for GNU ld with shared library support. These # are reset later if shared libraries are not supported. Putting them # here allows them to be overridden if necessary. runpath_var=LD_RUN_PATH _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' # ancient GNU ld didn't support --whole-archive et. al. if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else _LT_TAGVAR(whole_archive_flag_spec, $1)= fi supports_anon_versioning=no case `$LD -v 2>&1` in *GNU\ gold*) supports_anon_versioning=yes ;; *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11 *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ... *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ... *\ 2.11.*) ;; # other 2.11 versions *) supports_anon_versioning=yes ;; esac # See if GNU ld supports shared libraries. case $host_os in aix[[3-9]]*) # On AIX/PPC, the GNU linker is very broken if test "$host_cpu" != ia64; then _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: the GNU linker, at least up to release 2.19, is reported *** to be unable to reliably create shared libraries on AIX. *** Therefore, libtool is disabling shared libraries support. If you *** really care for shared libraries, you may want to install binutils *** 2.20 or above, or modify your PATH so that a non-GNU linker is found. *** You will then need to restart the configuration process. 
_LT_EOF fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='' ;; m68k) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; cygwin* | mingw* | pw32* | cegcc*) # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols' _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'] if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; haiku*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; interix[[3-9]]*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. 
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu) tmp_diet=no if test "$host_os" = linux-dietlibc; then case $cc_basename in diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn) esac fi if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \ && test "$tmp_diet" = no then tmp_addflag=' $pic_flag' tmp_sharedflag='-shared' case $cc_basename,$host_cpu in pgcc*) # Portland Group C compiler _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag' ;; pgf77* | pgf90* | pgf95* | pgfortran*) # Portland Group f77 and f90 compilers _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' tmp_addflag=' $pic_flag -Mnomain' ;; ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64 tmp_addflag=' -i_dynamic' ;; efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64 tmp_addflag=' -i_dynamic -nofor_main' ;; ifc* | ifort*) # Intel Fortran compiler tmp_addflag=' -nofor_main' ;; lf95*) # Lahey Fortran 8.1 _LT_TAGVAR(whole_archive_flag_spec, $1)= tmp_sharedflag='--shared' ;; xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below) tmp_sharedflag='-qmkshrobj' tmp_addflag= ;; nvcc*) # Cuda Compiler Driver 2.2 _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes ;; esac case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C 5.9 _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes tmp_sharedflag='-G' ;; *Sun\ F*) # Sun Fortran 8.3 tmp_sharedflag='-G' ;; esac _LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi case $cc_basename in xlf* | bgf* | bgxlf* | mpixlf*) # IBM XL Fortran 10.1 on PPC cannot create shared libs itself _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath 
${wl}$libdir' _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib' if test "x$supports_anon_versioning" = xyes; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib' fi ;; esac else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib' wlarc= else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' fi ;; solaris*) if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: The releases 2.8.* of the GNU linker cannot reliably *** create shared libraries on Solaris systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.9.1 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*) _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 *** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; sunos4*) _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac if test "$_LT_TAGVAR(ld_shlibs, $1)" = no; then runpath_var= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else # PORTME fill in a description of your system's linker (not GNU ld) case $host_os in aix3*) _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. _LT_TAGVAR(hardcode_minus_L, $1)=yes if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. _LT_TAGVAR(hardcode_direct, $1)=unsupported fi ;; aix[[4-9]]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else # If we're using GNU nm, then we don't want the "-C" option. # -C means demangle to AIX nm, but means don't demangle with GNU nm # Also, AIX nm treats weak defined symbols like other global # defined symbols, whereas GNU nm marks them as "W". if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' else _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then aix_use_runtimelinking=yes break fi done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. 
If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. _LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(file_list_spec, $1)='${wl}-f,' if test "$GCC" = yes; then case $host_os in aix4.[[012]]|aix4.[[012]].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 _LT_TAGVAR(hardcode_direct, $1)=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)= fi ;; esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi _LT_TAGVAR(link_all_deplibs, $1)=no else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. _LT_TAGVAR(always_export_symbols, $1)=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. _LT_TAGVAR(allow_undefined_flag, $1)='-berok' # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib' _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs" _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok' _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. 
_LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience' fi _LT_TAGVAR(archive_cmds_need_lc, $1)=yes # This is similar to how AIX traditionally builds its shared libraries. _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='' ;; m68k) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac ;; bsdi[[45]]*) _LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. case $cc_basename in cl*) # Native MSVC _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # Assume MSVC wrapper _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' # FIXME: Should let the user specify the lib program. _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; dgux*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. 
freebsd* | dragonfly*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; hpux9*) if test "$GCC" = yes; then _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' ;; hpux10*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi if test "$with_gnu_ld" = no; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes fi ;; hpux11*) if test "$GCC" = yes && test "$with_gnu_ld" = no; then case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) m4_if($1, [], [ # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) _LT_LINKER_OPTION([if $CC understands -b], _LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b], [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'], [_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])], [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags']) ;; esac fi if test "$with_gnu_ld" = no; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes 
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) if test "$GCC" = yes; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol], [lt_cv_irix_exported_symbol], [save_LDFLAGS="$LDFLAGS" LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null" AC_LINK_IFELSE( [AC_LANG_SOURCE( [AC_LANG_CASE([C], [[int foo (void) { return 0; }]], [C++], [[int foo (void) { return 0; }]], [Fortran 77], [[ subroutine foo end]], [Fortran], [[ subroutine foo end]])])], [lt_cv_irix_exported_symbol=yes], [lt_cv_irix_exported_symbol=no]) LDFLAGS="$save_LDFLAGS"]) if test "$lt_cv_irix_exported_symbol" = yes; then _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib' fi else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes _LT_TAGVAR(link_all_deplibs, $1)=yes ;; netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else _LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; newsos6) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *nto* | *qnx*) ;; openbsd*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs 
$compiler_flags ${wl}-retain-symbols-file,$export_symbols' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' else case $host_os in openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' ;; esac fi else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; os2*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' _LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' ;; osf3*) if test "$GCC" = yes; then _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag if test "$GCC" = yes; then _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ $CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' 
_LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; solaris*) _LT_TAGVAR(no_undefined_flag, $1)=' -z defs' if test "$GCC" = yes; then wlarc='${wl}' _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' _LT_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) wlarc='${wl}' _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. GCC discards it without `$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) if test "$GCC" = yes; then _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' else _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' fi ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes ;; sunos4*) if test "x$host_vendor" = xsequent; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4) case $host_vendor in sni) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. 
_LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs' _LT_TAGVAR(hardcode_direct, $1)=no ;; motorola) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4.3*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes _LT_TAGVAR(ld_shlibs, $1)=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
_LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport' runpath_var='LD_RUN_PATH' if test "$GCC" = yes; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(ld_shlibs, $1)=no ;; esac if test x$host_vendor = xsni; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Blargedynsym' ;; esac fi fi ]) AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)]) test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no _LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld _LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl _LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl _LT_DECL([], [extract_expsyms_cmds], [2], [The commands to extract the exported symbol list from a shared archive]) # # Do we need to explicitly link libc? # case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in x|xyes) # Assume -lc should be added _LT_TAGVAR(archive_cmds_need_lc, $1)=yes if test "$enable_shared" = yes && test "$GCC" = yes; then case $_LT_TAGVAR(archive_cmds, $1) in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. AC_CACHE_CHECK([whether -lc should be explicitly linked in], [lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1), [$RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if AC_TRY_EVAL(ac_compile) 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1) compiler_flags=-v linker_flags=-v verstring= output_objdir=. 
libname=conftest lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1) _LT_TAGVAR(allow_undefined_flag, $1)= if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) then lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no else lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes fi _LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* ]) _LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1) ;; esac fi ;; esac _LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0], [Whether or not to add -lc for building shared libraries]) _LT_TAGDECL([allow_libtool_libs_with_static_runtimes], [enable_shared_with_static_runtimes], [0], [Whether or not to disallow shared libs when runtime libs are static]) _LT_TAGDECL([], [export_dynamic_flag_spec], [1], [Compiler flag to allow reflexive dlopens]) _LT_TAGDECL([], [whole_archive_flag_spec], [1], [Compiler flag to generate shared objects directly from archives]) _LT_TAGDECL([], [compiler_needs_object], [1], [Whether the compiler copes with passing no objects directly]) _LT_TAGDECL([], [old_archive_from_new_cmds], [2], [Create an old-style archive from a shared archive]) _LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2], [Create a temporary old-style archive to link instead of a shared archive]) _LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive]) _LT_TAGDECL([], [archive_expsym_cmds], [2]) _LT_TAGDECL([], [module_cmds], [2], [Commands used to build a loadable module if different from building a shared archive.]) _LT_TAGDECL([], [module_expsym_cmds], [2]) _LT_TAGDECL([], [with_gnu_ld], [1], [Whether we are building with GNU ld or not]) _LT_TAGDECL([], [allow_undefined_flag], [1], [Flag that allows shared libraries with undefined symbols to be built]) _LT_TAGDECL([], [no_undefined_flag], [1], [Flag that enforces no undefined symbols]) _LT_TAGDECL([], [hardcode_libdir_flag_spec], [1], [Flag to hardcode $libdir into a binary during linking. 
This must work even if $libdir does not exist]) _LT_TAGDECL([], [hardcode_libdir_separator], [1], [Whether we need a single "-rpath" flag with a separated argument]) _LT_TAGDECL([], [hardcode_direct], [0], [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_direct_absolute], [0], [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes DIR into the resulting binary and the resulting library dependency is "absolute", i.e impossible to change by setting ${shlibpath_var} if the library is relocated]) _LT_TAGDECL([], [hardcode_minus_L], [0], [Set to "yes" if using the -LDIR flag during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_shlibpath_var], [0], [Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_automatic], [0], [Set to "yes" if building a shared library automatically hardcodes DIR into the library and all subsequent libraries and executables linked against it]) _LT_TAGDECL([], [inherit_rpath], [0], [Set to yes if linker adds runtime paths of dependent libraries to runtime path list]) _LT_TAGDECL([], [link_all_deplibs], [0], [Whether libtool must link a program against all its dependency libraries]) _LT_TAGDECL([], [always_export_symbols], [0], [Set to "yes" if exported symbols are required]) _LT_TAGDECL([], [export_symbols_cmds], [2], [The commands to list exported symbols]) _LT_TAGDECL([], [exclude_expsyms], [1], [Symbols that should not be listed in the preloaded symbols]) _LT_TAGDECL([], [include_expsyms], [1], [Symbols that must always be exported]) _LT_TAGDECL([], [prelink_cmds], [2], [Commands necessary for linking programs (against libraries) with templates]) _LT_TAGDECL([], [postlink_cmds], [2], [Commands necessary for finishing linking programs]) _LT_TAGDECL([], [file_list_spec], [1], [Specify filename containing input files]) dnl FIXME: Not yet implemented dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1], dnl [Compiler flag to generate thread safe objects]) ])# _LT_LINKER_SHLIBS # _LT_LANG_C_CONFIG([TAG]) # ------------------------ # Ensure that the configuration variables for a C compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write # the compiler configuration to `libtool'. m4_defun([_LT_LANG_C_CONFIG], [m4_require([_LT_DECL_EGREP])dnl lt_save_CC="$CC" AC_LANG_PUSH(C) # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' _LT_TAG_COMPILER # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... 
if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) LT_SYS_DLOPEN_SELF _LT_CMD_STRIPLIB # Report which library types will actually be built AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. test "$enable_shared" = yes || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_CONFIG($1) fi AC_LANG_POP CC="$lt_save_CC" ])# _LT_LANG_C_CONFIG # _LT_LANG_CXX_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a C++ compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write # the compiler configuration to `libtool'. m4_defun([_LT_LANG_CXX_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PATH_MANIFEST_TOOL])dnl if test -n "$CXX" && ( test "X$CXX" != "Xno" && ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || (test "X$CXX" != "Xg++"))) ; then AC_PROG_CXXCPP else _lt_caught_CXX_error=yes fi AC_LANG_PUSH(C++) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(compiler_needs_object, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for C++ test sources. ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_caught_CXX_error" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. 
_LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately if test "$GXX" = yes; then _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' else _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= fi if test "$GXX" = yes; then # Set up default GNU C++ configuration LT_PATH_LD # Check if GNU C++ uses GNU ld as the underlying linker, since the # archiving commands below assume that GNU ld is being used. if test "$with_gnu_ld" = yes; then _LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) wlarc='${wl}' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' else _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) _LT_TAGVAR(ld_shlibs, $1)=yes case $host_os in aix3*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aix[[4-9]]*) if test "$host_cpu" = ia64; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' no_entry_flag="" else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. 
If -brtl is somewhere in LDFLAGS, we # need to do runtime linking. case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. _LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(file_list_spec, $1)='${wl}-f,' if test "$GXX" = yes; then case $host_os in aix4.[[012]]|aix4.[[012]].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ collect2name=`${CC} -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 _LT_TAGVAR(hardcode_direct, $1)=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)= fi esac shared_flag='-shared' if test "$aix_use_runtimelinking" = yes; then shared_flag="$shared_flag "'${wl}-G' fi else # not using gcc if test "$host_cpu" = ia64; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. The following line is correct: shared_flag='-G' else if test "$aix_use_runtimelinking" = yes; then shared_flag='${wl}-G' else shared_flag='${wl}-bM:SRE' fi fi fi _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to # export. _LT_TAGVAR(always_export_symbols, $1)=yes if test "$aix_use_runtimelinking" = yes; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. _LT_TAGVAR(allow_undefined_flag, $1)='-berok' # Determine the default libpath from the value encoded in an empty # executable. _LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" else if test "$host_cpu" = ia64; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib' _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs" _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. 
_LT_SYS_MODULE_PATH_AIX([$1]) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok' _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok' if test "$with_gnu_ld" = yes; then # We only use this code for GNU lds that support --whole-archive. _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience' fi _LT_TAGVAR(archive_cmds_need_lc, $1)=yes # This is similar to how AIX traditionally builds its shared # libraries. _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' fi fi ;; beos*) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Joseph Beckenbach says some releases of gcc # support --undefined. This deserves some investigation. FIXME _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; chorus*) case $cc_basename in *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; cygwin* | mingw* | pw32* | cegcc*) case $GXX,$cc_basename in ,cl* | no,cl*) # Native MSVC # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. shrext_cmds=".dll" # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; else $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; fi~ $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ lt_tool_outputfile="@TOOL_OUTPUT@"~ case $lt_outputfile in *.exe|*.EXE) ;; *) lt_outputfile="$lt_outputfile.exe" lt_tool_outputfile="$lt_tool_outputfile.exe" ;; esac~ func_to_tool_file "$lt_outputfile"~ if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; $RM "$lt_outputfile.manifest"; fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. 
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' # If the export-symbols file already is a .def file (1st line # is EXPORTS), use it as is; otherwise, prepend... _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then cp $export_symbols $output_objdir/$soname.def; else echo EXPORTS > $output_objdir/$soname.def; cat $export_symbols >> $output_objdir/$soname.def; fi~ $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF _LT_TAGVAR(ld_shlibs, $1)=no ;; freebsd-elf*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; freebsd* | dragonfly*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions _LT_TAGVAR(ld_shlibs, $1)=yes ;; gnu*) ;; haiku*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; hpux9*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. 
output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; hpux10*|hpux11*) if test $with_gnu_ld = no; then _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) ;; *) _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. ;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes; then if test $with_gnu_ld = no; then case $host_cpu in hppa*64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; interix[[3-9]]*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. 
# Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) if test "$GXX" = yes; then if test "$with_gnu_ld" = no; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' else _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib' fi fi _LT_TAGVAR(link_all_deplibs, $1)=yes ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes ;; linux* | k*bsd*-gnu | kopensolaris*-gnu) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. 
output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. case `$CC -V 2>&1` in *"Version 7."*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; *) # Version 8.0 or newer tmp_idyn= case $host_cpu in ia64*) tmp_idyn=' -i_dynamic';; esac _LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' ;; esac _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' ;; pgCC* | pgcpp*) # Portland Group C++ compiler case `$CC -V` in *pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*) _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~ compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"' _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~ $AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~ $RANLIB $oldlib' _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~ rm -rf $tpldir~ $CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~ $CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; *) # Version 6 and above use weak symbols _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname 
${wl}-retain-symbols-file ${wl}$export_symbols -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' ;; cxx*) # Compaq C++ _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols' runpath_var=LD_RUN_PATH _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed' ;; xl* | mpixl* | bgxl*) # IBM XL 8.0 on PPC, with GNU ld _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' _LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' if test "x$supports_anon_versioning" = xyes; then _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~ cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~ echo "local: *; };" >> $output_objdir/$libname.ver~ $CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib' fi ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive' _LT_TAGVAR(compiler_needs_object, $1)=yes # Not sure whether something based on # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 # would be better. output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. 
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; esac ;; esac ;; lynxos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; m88k*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; mvs*) case $cc_basename in cxx*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; netbsd*) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags' wlarc= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no fi # Workaround some broken pre-1.5 toolchains output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"' ;; *nto* | *qnx*) _LT_TAGVAR(ld_shlibs, $1)=yes ;; openbsd2*) # C++ shared libraries are fairly broken _LT_TAGVAR(ld_shlibs, $1)=no ;; openbsd*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib' _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' fi output_verbose_link_cmd=func_echo_all else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Archives containing C++ object files must be created using # the KAI C++ compiler. 
case $host in osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; *) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;; esac ;; RCC*) # Rational C++ 2.4.1 # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; cxx*) case $host in osf3*) _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && func_echo_all "${wl}-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' ;; *) _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~ echo "-hidden">> $lib.exp~ $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname ${wl}-input ${wl}$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~ $RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' ;; esac _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) if test "$GXX" = yes && test "$with_gnu_ld" = no; then _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' case $host in osf3*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' ;; esac _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; psos*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; lcc*) # Lucid # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(archive_cmds_need_lc,$1)=yes _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs' _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, # but understands `-z linker_flag'. # Supported since Solaris 2.6 (maybe 2.5.1?) _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes output_verbose_link_cmd='func_echo_all' # Archives containing C++ object files must be created using # "CC -xar", where "CC" is the Sun C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' # The C++ compiler must be used to create the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs' ;; *) # GNU C++ compiler with Solaris linker if test "$GXX" = yes && test "$with_gnu_ld" = no; then _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs' if $CC --version | $GREP -v '^2\.7' > /dev/null; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else # g++ 2.7 appears to require `-G' NOT `-shared' on this # platform. 
_LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $wl$libdir' case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' ;; esac fi ;; esac ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' case $cc_basename in CC*) _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; sysv5* | sco3.2v5* | sco5v6*) # Note: We can NOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. 
_LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport' runpath_var='LD_RUN_PATH' case $cc_basename in CC*) _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~ '"$_LT_TAGVAR(old_archive_cmds, $1)" _LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~ '"$_LT_TAGVAR(reload_cmds, $1)" ;; *) _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; vxworks*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)]) test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no _LT_TAGVAR(GCC, $1)="$GXX" _LT_TAGVAR(LD, $1)="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS LDCXX=$LD LD=$lt_save_LD GCC=$lt_save_GCC with_gnu_ld=$lt_save_with_gnu_ld lt_cv_path_LDCXX=$lt_cv_path_LD lt_cv_path_LD=$lt_save_path_LD lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld fi # test "$_lt_caught_CXX_error" != yes AC_LANG_POP ])# _LT_LANG_CXX_CONFIG # _LT_FUNC_STRIPNAME_CNF # ---------------------- # func_stripname_cnf prefix suffix name # strip PREFIX and SUFFIX off of NAME. # PREFIX and SUFFIX must not contain globbing or regex special # characters, hashes, percent signs, but SUFFIX may contain a leading # dot (in which case that matches only a dot). # # This function is identical to the (non-XSI) version of func_stripname, # except this one can be used by m4 code that may be executed by configure, # rather than the libtool script. m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl AC_REQUIRE([_LT_DECL_SED]) AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH]) func_stripname_cnf () { case ${2} in .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; esac } # func_stripname_cnf ])# _LT_FUNC_STRIPNAME_CNF # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME]) # --------------------------------- # Figure out "hidden" library dependencies from verbose # compiler output when linking a shared library. # Parse the compiler output and extract the necessary # objects, libraries and library flags. 
m4_defun([_LT_SYS_HIDDEN_LIBDEPS], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl # Dependencies to place before and after the object being linked: _LT_TAGVAR(predep_objects, $1)= _LT_TAGVAR(postdep_objects, $1)= _LT_TAGVAR(predeps, $1)= _LT_TAGVAR(postdeps, $1)= _LT_TAGVAR(compiler_lib_search_path, $1)= dnl we can't use the lt_simple_compile_test_code here, dnl because it contains code intended for an executable, dnl not a library. It's possible we should let each dnl tag define a new lt_????_link_test_code variable, dnl but it's only used here... m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF int a; void foo (void) { a = 0; } _LT_EOF ], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF class Foo { public: Foo (void) { a = 0; } private: int a; }; _LT_EOF ], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer*4 a a=0 return end _LT_EOF ], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF subroutine foo implicit none integer a a=0 return end _LT_EOF ], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF public class foo { private int a; public void bar (void) { a = 0; } }; _LT_EOF ], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF package foo func foo() { } _LT_EOF ]) _lt_libdeps_save_CFLAGS=$CFLAGS case "$CC $CFLAGS " in #( *\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;; *\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;; *\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;; esac dnl Parse the compiler output and extract the necessary dnl objects, libraries and library flags. if AC_TRY_EVAL(ac_compile); then # Parse the compiler output and extract the necessary # objects, libraries and library flags. # Sentinel used to keep track of whether or not we are before # the conftest object file. pre_test_object_deps_done=no for p in `eval "$output_verbose_link_cmd"`; do case ${prev}${p} in -L* | -R* | -l*) # Some compilers place space between "-{L,R}" and the path. # Remove the space. if test $p = "-L" || test $p = "-R"; then prev=$p continue fi # Expand the sysroot to ease extracting the directories later. if test -z "$prev"; then case $p in -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;; -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;; -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;; esac fi case $p in =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;; esac if test "$pre_test_object_deps_done" = no; then case ${prev} in -L | -R) # Internal compiler library paths should come after those # provided the user. The postdeps already come after the # user supplied libs so there is no need to process them. if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then _LT_TAGVAR(compiler_lib_search_path, $1)="${prev}${p}" else _LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} ${prev}${p}" fi ;; # The "-l" case would never come before the object being # linked, so don't bother handling this case. esac else if test -z "$_LT_TAGVAR(postdeps, $1)"; then _LT_TAGVAR(postdeps, $1)="${prev}${p}" else _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}" fi fi prev= ;; *.lto.$objext) ;; # Ignore GCC LTO objects *.$objext) # This assumes that the test object file only shows up # once in the compiler output. 
if test "$p" = "conftest.$objext"; then pre_test_object_deps_done=yes continue fi if test "$pre_test_object_deps_done" = no; then if test -z "$_LT_TAGVAR(predep_objects, $1)"; then _LT_TAGVAR(predep_objects, $1)="$p" else _LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p" fi else if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then _LT_TAGVAR(postdep_objects, $1)="$p" else _LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p" fi fi ;; *) ;; # Ignore the rest. esac done # Clean up. rm -f a.out a.exe else echo "libtool.m4: error: problem compiling $1 test program" fi $RM -f confest.$objext CFLAGS=$_lt_libdeps_save_CFLAGS # PORTME: override above test on systems where it is broken m4_if([$1], [CXX], [case $host_os in interix[[3-9]]*) # Interix 3.5 installs completely hosed .la files for C++, so rather than # hack all around it, let's just trust "g++" to DTRT. _LT_TAGVAR(predep_objects,$1)= _LT_TAGVAR(postdep_objects,$1)= _LT_TAGVAR(postdeps,$1)= ;; linux*) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac if test "$solaris_use_stlport4" != yes; then _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' fi ;; esac ;; solaris*) case $cc_basename in CC* | sunCC*) # The more standards-conforming stlport4 library is # incompatible with the Cstd library. Avoid specifying # it if it's in CXXFLAGS. Ignore libCrun as # -library=stlport4 depends on it. case " $CXX $CXXFLAGS " in *" -library=stlport4 "*) solaris_use_stlport4=yes ;; esac # Adding this requires a known-good setup of shared libraries for # Sun compiler versions before 5.6, else PIC objects from an old # archive will be linked into the output, leading to subtle bugs. if test "$solaris_use_stlport4" != yes; then _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' fi ;; esac ;; esac ]) case " $_LT_TAGVAR(postdeps, $1) " in *" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; esac _LT_TAGVAR(compiler_lib_search_dirs, $1)= if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then _LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | ${SED} -e 's! -L! !g' -e 's!^ !!'` fi _LT_TAGDECL([], [compiler_lib_search_dirs], [1], [The directories searched by this compiler when creating a shared library]) _LT_TAGDECL([], [predep_objects], [1], [Dependencies to place before and after the objects being linked to create a shared library]) _LT_TAGDECL([], [postdep_objects], [1]) _LT_TAGDECL([], [predeps], [1]) _LT_TAGDECL([], [postdeps], [1]) _LT_TAGDECL([], [compiler_lib_search_path], [1], [The library search path used internally by the compiler when linking a shared library]) ])# _LT_SYS_HIDDEN_LIBDEPS # _LT_LANG_F77_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a Fortran 77 compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. 
m4_defun([_LT_LANG_F77_CONFIG], [AC_LANG_PUSH(Fortran 77) if test -z "$F77" || test "X$F77" = "Xno"; then _lt_disable_F77=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for f77 test sources. ac_ext=f # Object file extension for compiled f77 test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the F77 compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_disable_F77" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC="$CC" lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${F77-"f77"} CFLAGS=$FFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) GCC=$G77 if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. test "$enable_shared" = yes || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)="$G77" _LT_TAGVAR(LD, $1)="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... 
_LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC="$lt_save_CC" CFLAGS="$lt_save_CFLAGS" fi # test "$_lt_disable_F77" != yes AC_LANG_POP ])# _LT_LANG_F77_CONFIG # _LT_LANG_FC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for a Fortran compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_FC_CONFIG], [AC_LANG_PUSH(Fortran) if test -z "$FC" || test "X$FC" = "Xno"; then _lt_disable_FC=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for fc test sources. ac_ext=${ac_fc_srcext-f} # Object file extension for compiled fc test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the FC compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. if test "$_lt_disable_FC" != yes; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC="$CC" lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${FC-"f95"} CFLAGS=$FCFLAGS compiler=$CC GCC=$ac_cv_fc_compiler_gnu _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) test "$can_build_shared" = "no" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) test "$enable_shared" = yes && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then test "$enable_shared" = yes && enable_static=no fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. 
test "$enable_shared" = yes || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_TAGVAR(GCC, $1)="$ac_cv_fc_compiler_gnu" _LT_TAGVAR(LD, $1)="$LD" ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS fi # test "$_lt_disable_FC" != yes AC_LANG_POP ])# _LT_LANG_FC_CONFIG # _LT_LANG_GCJ_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Java Compiler compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_GCJ_CONFIG], [AC_REQUIRE([LT_PROG_GCJ])dnl AC_LANG_SAVE # Source file extension for Java test sources. ac_ext=java # Object file extension for compiled Java test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="class foo {}" # Code to be used in simple link tests lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GCJ-"gcj"} CFLAGS=$GCJFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)="$LD" _LT_CC_BASENAME([$compiler]) # GCJ did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GCJ_CONFIG # _LT_LANG_GO_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Go compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_GO_CONFIG], [AC_REQUIRE([LT_PROG_GO])dnl AC_LANG_SAVE # Source file extension for Go test sources. ac_ext=go # Object file extension for compiled Go test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="package main; func main() { }" # Code to be used in simple link tests lt_simple_link_test_code='package main; func main() { }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. 
lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GOC-"gccgo"} CFLAGS=$GOFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_TAGVAR(LD, $1)="$LD" _LT_CC_BASENAME([$compiler]) # Go did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GO_CONFIG # _LT_LANG_RC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for the Windows resource compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG # to write the compiler configuration to `libtool'. m4_defun([_LT_LANG_RC_CONFIG], [AC_REQUIRE([LT_PROG_RC])dnl AC_LANG_SAVE # Source file extension for RC test sources. ac_ext=rc # Object file extension for compiled RC test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' # Code to be used in simple link tests lt_simple_link_test_code="$lt_simple_compile_test_code" # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC="$CC" lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC= CC=${RC-"windres"} CFLAGS= compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes if test -n "$compiler"; then : _LT_CONFIG($1) fi GCC=$lt_save_GCC AC_LANG_RESTORE CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_RC_CONFIG # LT_PROG_GCJ # ----------- AC_DEFUN([LT_PROG_GCJ], [m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ], [m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ], [AC_CHECK_TOOL(GCJ, gcj,) test "x${GCJFLAGS+set}" = xset || GCJFLAGS="-g -O2" AC_SUBST(GCJFLAGS)])])[]dnl ]) # Old name: AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_GCJ], []) # LT_PROG_GO # ---------- AC_DEFUN([LT_PROG_GO], [AC_CHECK_TOOL(GOC, gccgo,) ]) # LT_PROG_RC # ---------- AC_DEFUN([LT_PROG_RC], [AC_CHECK_TOOL(RC, windres,) ]) # Old name: AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_RC], []) # _LT_DECL_EGREP # -------------- # If we don't have a new enough Autoconf to choose the best grep # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_EGREP], [AC_REQUIRE([AC_PROG_EGREP])dnl AC_REQUIRE([AC_PROG_FGREP])dnl test -z "$GREP" && GREP=grep _LT_DECL([], [GREP], [1], [A grep program that handles long lines]) _LT_DECL([], [EGREP], [1], [An ERE matcher]) _LT_DECL([], [FGREP], [1], [A literal string matcher]) dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too AC_SUBST([GREP]) ]) # _LT_DECL_OBJDUMP # -------------- # If we don't have a new enough Autoconf to choose the best objdump # available, choose the one first in the user's PATH. 
m4_defun([_LT_DECL_OBJDUMP], [AC_CHECK_TOOL(OBJDUMP, objdump, false) test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper]) AC_SUBST([OBJDUMP]) ]) # _LT_DECL_DLLTOOL # ---------------- # Ensure DLLTOOL variable is set. m4_defun([_LT_DECL_DLLTOOL], [AC_CHECK_TOOL(DLLTOOL, dlltool, false) test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program]) AC_SUBST([DLLTOOL]) ]) # _LT_DECL_SED # ------------ # Check for a fully-functional sed program, that truncates # as few characters as possible. Prefer GNU sed if found. m4_defun([_LT_DECL_SED], [AC_PROG_SED test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" _LT_DECL([], [SED], [1], [A sed program that does not truncate output]) _LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"], [Sed that helps us avoid accidentally triggering echo(1) options like -n]) ])# _LT_DECL_SED m4_ifndef([AC_PROG_SED], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_SED. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_SED], [AC_MSG_CHECKING([for a sed that does not truncate output]) AC_CACHE_VAL(lt_cv_path_SED, [# Loop through the user's path and test for sed and gsed. # Then use that list of sed's as ones to test for truncation. as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for lt_ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" fi done done done IFS=$as_save_IFS lt_ac_max=0 lt_ac_count=0 # Add /usr/xpg4/bin/sed as it is typically found on Solaris # along with /bin/sed that truncates output. for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do test ! -f $lt_ac_sed && continue cat /dev/null > conftest.in lt_ac_count=0 echo $ECHO_N "0123456789$ECHO_C" >conftest.in # Check for GNU sed and select it if it is found. if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then lt_cv_path_SED=$lt_ac_sed break fi while true; do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo >>conftest.nl $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break cmp -s conftest.out conftest.nl || break # 10000 chars as input seems more than enough test $lt_ac_count -gt 10 && break lt_ac_count=`expr $lt_ac_count + 1` if test $lt_ac_count -gt $lt_ac_max; then lt_ac_max=$lt_ac_count lt_cv_path_SED=$lt_ac_sed fi done done ]) SED=$lt_cv_path_SED AC_SUBST([SED]) AC_MSG_RESULT([$SED]) ])#AC_PROG_SED ])#m4_ifndef # Old name: AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_SED], []) # _LT_CHECK_SHELL_FEATURES # ------------------------ # Find out whether the shell is Bourne or XSI compatible, # or has some other useful features. 
m4_defun([_LT_CHECK_SHELL_FEATURES], [AC_MSG_CHECKING([whether the shell understands some XSI constructs]) # Try some XSI features xsi_shell=no ( _lt_dummy="a/b/c" test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ = c,a/b,b/c, \ && eval 'test $(( 1 + 1 )) -eq 2 \ && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ && xsi_shell=yes AC_MSG_RESULT([$xsi_shell]) _LT_CONFIG_LIBTOOL_INIT([xsi_shell='$xsi_shell']) AC_MSG_CHECKING([whether the shell understands "+="]) lt_shell_append=no ( foo=bar; set foo baz; eval "$[1]+=\$[2]" && test "$foo" = barbaz ) \ >/dev/null 2>&1 \ && lt_shell_append=yes AC_MSG_RESULT([$lt_shell_append]) _LT_CONFIG_LIBTOOL_INIT([lt_shell_append='$lt_shell_append']) if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi _LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac _LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl ])# _LT_CHECK_SHELL_FEATURES # _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY) # ------------------------------------------------------ # In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and # '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY. m4_defun([_LT_PROG_FUNCTION_REPLACE], [dnl { sed -e '/^$1 ()$/,/^} # $1 /c\ $1 ()\ {\ m4_bpatsubsts([$2], [$], [\\], [^\([ ]\)], [\\\1]) } # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: ]) # _LT_PROG_REPLACE_SHELLFNS # ------------------------- # Replace existing portable implementations of several shell functions with # equivalent extended shell implementations where those features are available.. m4_defun([_LT_PROG_REPLACE_SHELLFNS], [if test x"$xsi_shell" = xyes; then _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl case ${1} in */*) func_dirname_result="${1%/*}${2}" ;; * ) func_dirname_result="${3}" ;; esac]) _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl func_basename_result="${1##*/}"]) _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl case ${1} in */*) func_dirname_result="${1%/*}${2}" ;; * ) func_dirname_result="${3}" ;; esac func_basename_result="${1##*/}"]) _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are # positional parameters, so assign one to ordinary parameter first. 
func_stripname_result=${3} func_stripname_result=${func_stripname_result#"${1}"} func_stripname_result=${func_stripname_result%"${2}"}]) _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl func_split_long_opt_name=${1%%=*} func_split_long_opt_arg=${1#*=}]) _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl func_split_short_opt_arg=${1#??} func_split_short_opt_name=${1%"$func_split_short_opt_arg"}]) _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl case ${1} in *.lo) func_lo2o_result=${1%.lo}.${objext} ;; *) func_lo2o_result=${1} ;; esac]) _LT_PROG_FUNCTION_REPLACE([func_xform], [ func_xform_result=${1%.*}.lo]) _LT_PROG_FUNCTION_REPLACE([func_arith], [ func_arith_result=$(( $[*] ))]) _LT_PROG_FUNCTION_REPLACE([func_len], [ func_len_result=${#1}]) fi if test x"$lt_shell_append" = xyes; then _LT_PROG_FUNCTION_REPLACE([func_append], [ eval "${1}+=\\${2}"]) _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl func_quote_for_eval "${2}" dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \ eval "${1}+=\\\\ \\$func_quote_for_eval_result"]) # Save a `func_append' function call where possible by direct use of '+=' sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: else # Save a `func_append' function call even when '+=' is not available sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ && mv -f "$cfgfile.tmp" "$cfgfile" \ || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") test 0 -eq $? || _lt_function_replace_fail=: fi if test x"$_lt_function_replace_fail" = x":"; then AC_MSG_WARN([Unable to substitute extended shell functions in $ofile]) fi ]) # _LT_PATH_CONVERSION_FUNCTIONS # ----------------------------- # Determine which file name conversion functions should be used by # func_to_host_file (and, implicitly, by func_to_host_path). These are needed # for certain cross-compile configurations and native mingw. m4_defun([_LT_PATH_CONVERSION_FUNCTIONS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_MSG_CHECKING([how to convert $build file names to $host format]) AC_CACHE_VAL(lt_cv_to_host_file_cmd, [case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac ]) to_host_file_cmd=$lt_cv_to_host_file_cmd AC_MSG_RESULT([$lt_cv_to_host_file_cmd]) _LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd], [0], [convert $build file names to $host format])dnl AC_MSG_CHECKING([how to convert $build file names to toolchain format]) AC_CACHE_VAL(lt_cv_to_tool_file_cmd, [#assume ordinary cross tools, or native build. 
lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac ]) to_tool_file_cmd=$lt_cv_to_tool_file_cmd AC_MSG_RESULT([$lt_cv_to_tool_file_cmd]) _LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd], [0], [convert $build files to toolchain format])dnl ])# _LT_PATH_CONVERSION_FUNCTIONS ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/ltoptions.m4000066400000000000000000000300731346026536000211130ustar00rootroot00000000000000# Helper functions for option handling. -*- Autoconf -*- # # Copyright (C) 2004, 2005, 2007, 2008, 2009 Free Software Foundation, # Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 7 ltoptions.m4 # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])]) # _LT_MANGLE_OPTION(MACRO-NAME, OPTION-NAME) # ------------------------------------------ m4_define([_LT_MANGLE_OPTION], [[_LT_OPTION_]m4_bpatsubst($1__$2, [[^a-zA-Z0-9_]], [_])]) # _LT_SET_OPTION(MACRO-NAME, OPTION-NAME) # --------------------------------------- # Set option OPTION-NAME for macro MACRO-NAME, and if there is a # matching handler defined, dispatch to it. Other OPTION-NAMEs are # saved as a flag. m4_define([_LT_SET_OPTION], [m4_define(_LT_MANGLE_OPTION([$1], [$2]))dnl m4_ifdef(_LT_MANGLE_DEFUN([$1], [$2]), _LT_MANGLE_DEFUN([$1], [$2]), [m4_warning([Unknown $1 option `$2'])])[]dnl ]) # _LT_IF_OPTION(MACRO-NAME, OPTION-NAME, IF-SET, [IF-NOT-SET]) # ------------------------------------------------------------ # Execute IF-SET if OPTION is set, IF-NOT-SET otherwise. m4_define([_LT_IF_OPTION], [m4_ifdef(_LT_MANGLE_OPTION([$1], [$2]), [$3], [$4])]) # _LT_UNLESS_OPTIONS(MACRO-NAME, OPTION-LIST, IF-NOT-SET) # ------------------------------------------------------- # Execute IF-NOT-SET unless all options in OPTION-LIST for MACRO-NAME # are set. m4_define([_LT_UNLESS_OPTIONS], [m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [m4_ifdef(_LT_MANGLE_OPTION([$1], _LT_Option), [m4_define([$0_found])])])[]dnl m4_ifdef([$0_found], [m4_undefine([$0_found])], [$3 ])[]dnl ]) # _LT_SET_OPTIONS(MACRO-NAME, OPTION-LIST) # ---------------------------------------- # OPTION-LIST is a space-separated list of Libtool options associated # with MACRO-NAME. If any OPTION has a matching handler declared with # LT_OPTION_DEFINE, dispatch to that macro; otherwise complain about # the unknown option and exit. m4_defun([_LT_SET_OPTIONS], [# Set options m4_foreach([_LT_Option], m4_split(m4_normalize([$2])), [_LT_SET_OPTION([$1], _LT_Option)]) m4_if([$1],[LT_INIT],[ dnl dnl Simply set some default values (i.e off) if boolean options were not dnl specified: _LT_UNLESS_OPTIONS([LT_INIT], [dlopen], [enable_dlopen=no ]) _LT_UNLESS_OPTIONS([LT_INIT], [win32-dll], [enable_win32_dll=no ]) dnl dnl If no reference was made to various pairs of opposing options, then dnl we run the default mode handler for the pair. 
For example, if neither dnl `shared' nor `disable-shared' was passed, we enable building of shared dnl archives by default: _LT_UNLESS_OPTIONS([LT_INIT], [shared disable-shared], [_LT_ENABLE_SHARED]) _LT_UNLESS_OPTIONS([LT_INIT], [static disable-static], [_LT_ENABLE_STATIC]) _LT_UNLESS_OPTIONS([LT_INIT], [pic-only no-pic], [_LT_WITH_PIC]) _LT_UNLESS_OPTIONS([LT_INIT], [fast-install disable-fast-install], [_LT_ENABLE_FAST_INSTALL]) ]) ])# _LT_SET_OPTIONS ## --------------------------------- ## ## Macros to handle LT_INIT options. ## ## --------------------------------- ## # _LT_MANGLE_DEFUN(MACRO-NAME, OPTION-NAME) # ----------------------------------------- m4_define([_LT_MANGLE_DEFUN], [[_LT_OPTION_DEFUN_]m4_bpatsubst(m4_toupper([$1__$2]), [[^A-Z0-9_]], [_])]) # LT_OPTION_DEFINE(MACRO-NAME, OPTION-NAME, CODE) # ----------------------------------------------- m4_define([LT_OPTION_DEFINE], [m4_define(_LT_MANGLE_DEFUN([$1], [$2]), [$3])[]dnl ])# LT_OPTION_DEFINE # dlopen # ------ LT_OPTION_DEFINE([LT_INIT], [dlopen], [enable_dlopen=yes ]) AU_DEFUN([AC_LIBTOOL_DLOPEN], [_LT_SET_OPTION([LT_INIT], [dlopen]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `dlopen' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN], []) # win32-dll # --------- # Declare package support for building win32 dll's. LT_OPTION_DEFINE([LT_INIT], [win32-dll], [enable_win32_dll=yes case $host in *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-cegcc*) AC_CHECK_TOOL(AS, as, false) AC_CHECK_TOOL(DLLTOOL, dlltool, false) AC_CHECK_TOOL(OBJDUMP, objdump, false) ;; esac test -z "$AS" && AS=as _LT_DECL([], [AS], [1], [Assembler program])dnl test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program])dnl test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [Object dumper program])dnl ])# win32-dll AU_DEFUN([AC_LIBTOOL_WIN32_DLL], [AC_REQUIRE([AC_CANONICAL_HOST])dnl _LT_SET_OPTION([LT_INIT], [win32-dll]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `win32-dll' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_WIN32_DLL], []) # _LT_ENABLE_SHARED([DEFAULT]) # ---------------------------- # implement the --enable-shared flag, and supports the `shared' and # `disable-shared' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_SHARED], [m4_define([_LT_ENABLE_SHARED_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([shared], [AS_HELP_STRING([--enable-shared@<:@=PKGS@:>@], [build shared libraries @<:@default=]_LT_ENABLE_SHARED_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_shared=yes ;; no) enable_shared=no ;; *) enable_shared=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_shared=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_shared=]_LT_ENABLE_SHARED_DEFAULT) _LT_DECL([build_libtool_libs], [enable_shared], [0], [Whether or not to build shared libraries]) ])# _LT_ENABLE_SHARED LT_OPTION_DEFINE([LT_INIT], [shared], [_LT_ENABLE_SHARED([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-shared], [_LT_ENABLE_SHARED([no])]) # Old names: AC_DEFUN([AC_ENABLE_SHARED], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[shared]) ]) AC_DEFUN([AC_DISABLE_SHARED], [_LT_SET_OPTION([LT_INIT], [disable-shared]) ]) AU_DEFUN([AM_ENABLE_SHARED], [AC_ENABLE_SHARED($@)]) AU_DEFUN([AM_DISABLE_SHARED], [AC_DISABLE_SHARED($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_SHARED], []) dnl AC_DEFUN([AM_DISABLE_SHARED], []) # _LT_ENABLE_STATIC([DEFAULT]) # ---------------------------- # implement the --enable-static flag, and support the `static' and # `disable-static' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_STATIC], [m4_define([_LT_ENABLE_STATIC_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([static], [AS_HELP_STRING([--enable-static@<:@=PKGS@:>@], [build static libraries @<:@default=]_LT_ENABLE_STATIC_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_static=yes ;; no) enable_static=no ;; *) enable_static=no # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_static=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_static=]_LT_ENABLE_STATIC_DEFAULT) _LT_DECL([build_old_libs], [enable_static], [0], [Whether or not to build static libraries]) ])# _LT_ENABLE_STATIC LT_OPTION_DEFINE([LT_INIT], [static], [_LT_ENABLE_STATIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-static], [_LT_ENABLE_STATIC([no])]) # Old names: AC_DEFUN([AC_ENABLE_STATIC], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[static]) ]) AC_DEFUN([AC_DISABLE_STATIC], [_LT_SET_OPTION([LT_INIT], [disable-static]) ]) AU_DEFUN([AM_ENABLE_STATIC], [AC_ENABLE_STATIC($@)]) AU_DEFUN([AM_DISABLE_STATIC], [AC_DISABLE_STATIC($@)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_ENABLE_STATIC], []) dnl AC_DEFUN([AM_DISABLE_STATIC], []) # _LT_ENABLE_FAST_INSTALL([DEFAULT]) # ---------------------------------- # implement the --enable-fast-install flag, and support the `fast-install' # and `disable-fast-install' LT_INIT options. # DEFAULT is either `yes' or `no'. If omitted, it defaults to `yes'. m4_define([_LT_ENABLE_FAST_INSTALL], [m4_define([_LT_ENABLE_FAST_INSTALL_DEFAULT], [m4_if($1, no, no, yes)])dnl AC_ARG_ENABLE([fast-install], [AS_HELP_STRING([--enable-fast-install@<:@=PKGS@:>@], [optimize for fast installation @<:@default=]_LT_ENABLE_FAST_INSTALL_DEFAULT[@:>@])], [p=${PACKAGE-default} case $enableval in yes) enable_fast_install=yes ;; no) enable_fast_install=no ;; *) enable_fast_install=no # Look at the argument we got. We use all the common list separators. 
lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for pkg in $enableval; do IFS="$lt_save_ifs" if test "X$pkg" = "X$p"; then enable_fast_install=yes fi done IFS="$lt_save_ifs" ;; esac], [enable_fast_install=]_LT_ENABLE_FAST_INSTALL_DEFAULT) _LT_DECL([fast_install], [enable_fast_install], [0], [Whether or not to optimize for fast installation])dnl ])# _LT_ENABLE_FAST_INSTALL LT_OPTION_DEFINE([LT_INIT], [fast-install], [_LT_ENABLE_FAST_INSTALL([yes])]) LT_OPTION_DEFINE([LT_INIT], [disable-fast-install], [_LT_ENABLE_FAST_INSTALL([no])]) # Old names: AU_DEFUN([AC_ENABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], m4_if([$1], [no], [disable-])[fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `fast-install' option into LT_INIT's first parameter.]) ]) AU_DEFUN([AC_DISABLE_FAST_INSTALL], [_LT_SET_OPTION([LT_INIT], [disable-fast-install]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `disable-fast-install' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_ENABLE_FAST_INSTALL], []) dnl AC_DEFUN([AM_DISABLE_FAST_INSTALL], []) # _LT_WITH_PIC([MODE]) # -------------------- # implement the --with-pic flag, and support the `pic-only' and `no-pic' # LT_INIT options. # MODE is either `yes' or `no'. If omitted, it defaults to `both'. m4_define([_LT_WITH_PIC], [AC_ARG_WITH([pic], [AS_HELP_STRING([--with-pic@<:@=PKGS@:>@], [try to use only PIC/non-PIC objects @<:@default=use both@:>@])], [lt_p=${PACKAGE-default} case $withval in yes|no) pic_mode=$withval ;; *) pic_mode=default # Look at the argument we got. We use all the common list separators. lt_save_ifs="$IFS"; IFS="${IFS}$PATH_SEPARATOR," for lt_pkg in $withval; do IFS="$lt_save_ifs" if test "X$lt_pkg" = "X$lt_p"; then pic_mode=yes fi done IFS="$lt_save_ifs" ;; esac], [pic_mode=default]) test -z "$pic_mode" && pic_mode=m4_default([$1], [default]) _LT_DECL([], [pic_mode], [0], [What type of objects to build])dnl ])# _LT_WITH_PIC LT_OPTION_DEFINE([LT_INIT], [pic-only], [_LT_WITH_PIC([yes])]) LT_OPTION_DEFINE([LT_INIT], [no-pic], [_LT_WITH_PIC([no])]) # Old name: AU_DEFUN([AC_LIBTOOL_PICMODE], [_LT_SET_OPTION([LT_INIT], [pic-only]) AC_DIAGNOSE([obsolete], [$0: Remove this warning and the call to _LT_SET_OPTION when you put the `pic-only' option into LT_INIT's first parameter.]) ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_PICMODE], []) ## ----------------- ## ## LTDL_INIT Options ## ## ----------------- ## m4_define([_LTDL_MODE], []) LT_OPTION_DEFINE([LTDL_INIT], [nonrecursive], [m4_define([_LTDL_MODE], [nonrecursive])]) LT_OPTION_DEFINE([LTDL_INIT], [recursive], [m4_define([_LTDL_MODE], [recursive])]) LT_OPTION_DEFINE([LTDL_INIT], [subproject], [m4_define([_LTDL_MODE], [subproject])]) m4_define([_LTDL_TYPE], []) LT_OPTION_DEFINE([LTDL_INIT], [installable], [m4_define([_LTDL_TYPE], [installable])]) LT_OPTION_DEFINE([LTDL_INIT], [convenience], [m4_define([_LTDL_TYPE], [convenience])]) ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/ltsugar.m4000066400000000000000000000104241346026536000205370ustar00rootroot00000000000000# ltsugar.m4 -- libtool m4 base layer. -*-Autoconf-*- # # Copyright (C) 2004, 2005, 2007, 2008 Free Software Foundation, Inc. # Written by Gary V. Vaughan, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. 
# serial 6 ltsugar.m4 # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTSUGAR_VERSION], [m4_if([0.1])]) # lt_join(SEP, ARG1, [ARG2...]) # ----------------------------- # Produce ARG1SEPARG2...SEPARGn, omitting [] arguments and their # associated separator. # Needed until we can rely on m4_join from Autoconf 2.62, since all earlier # versions in m4sugar had bugs. m4_define([lt_join], [m4_if([$#], [1], [], [$#], [2], [[$2]], [m4_if([$2], [], [], [[$2]_])$0([$1], m4_shift(m4_shift($@)))])]) m4_define([_lt_join], [m4_if([$#$2], [2], [], [m4_if([$2], [], [], [[$1$2]])$0([$1], m4_shift(m4_shift($@)))])]) # lt_car(LIST) # lt_cdr(LIST) # ------------ # Manipulate m4 lists. # These macros are necessary as long as will still need to support # Autoconf-2.59 which quotes differently. m4_define([lt_car], [[$1]]) m4_define([lt_cdr], [m4_if([$#], 0, [m4_fatal([$0: cannot be called without arguments])], [$#], 1, [], [m4_dquote(m4_shift($@))])]) m4_define([lt_unquote], $1) # lt_append(MACRO-NAME, STRING, [SEPARATOR]) # ------------------------------------------ # Redefine MACRO-NAME to hold its former content plus `SEPARATOR'`STRING'. # Note that neither SEPARATOR nor STRING are expanded; they are appended # to MACRO-NAME as is (leaving the expansion for when MACRO-NAME is invoked). # No SEPARATOR is output if MACRO-NAME was previously undefined (different # than defined and empty). # # This macro is needed until we can rely on Autoconf 2.62, since earlier # versions of m4sugar mistakenly expanded SEPARATOR but not STRING. m4_define([lt_append], [m4_define([$1], m4_ifdef([$1], [m4_defn([$1])[$3]])[$2])]) # lt_combine(SEP, PREFIX-LIST, INFIX, SUFFIX1, [SUFFIX2...]) # ---------------------------------------------------------- # Produce a SEP delimited list of all paired combinations of elements of # PREFIX-LIST with SUFFIX1 through SUFFIXn. Each element of the list # has the form PREFIXmINFIXSUFFIXn. # Needed until we can rely on m4_combine added in Autoconf 2.62. m4_define([lt_combine], [m4_if(m4_eval([$# > 3]), [1], [m4_pushdef([_Lt_sep], [m4_define([_Lt_sep], m4_defn([lt_car]))])]]dnl [[m4_foreach([_Lt_prefix], [$2], [m4_foreach([_Lt_suffix], ]m4_dquote(m4_dquote(m4_shift(m4_shift(m4_shift($@)))))[, [_Lt_sep([$1])[]m4_defn([_Lt_prefix])[$3]m4_defn([_Lt_suffix])])])])]) # lt_if_append_uniq(MACRO-NAME, VARNAME, [SEPARATOR], [UNIQ], [NOT-UNIQ]) # ----------------------------------------------------------------------- # Iff MACRO-NAME does not yet contain VARNAME, then append it (delimited # by SEPARATOR if supplied) and expand UNIQ, else NOT-UNIQ. 
m4_define([lt_if_append_uniq], [m4_ifdef([$1], [m4_if(m4_index([$3]m4_defn([$1])[$3], [$3$2$3]), [-1], [lt_append([$1], [$2], [$3])$4], [$5])], [lt_append([$1], [$2], [$3])$4])]) # lt_dict_add(DICT, KEY, VALUE) # ----------------------------- m4_define([lt_dict_add], [m4_define([$1($2)], [$3])]) # lt_dict_add_subkey(DICT, KEY, SUBKEY, VALUE) # -------------------------------------------- m4_define([lt_dict_add_subkey], [m4_define([$1($2:$3)], [$4])]) # lt_dict_fetch(DICT, KEY, [SUBKEY]) # ---------------------------------- m4_define([lt_dict_fetch], [m4_ifval([$3], m4_ifdef([$1($2:$3)], [m4_defn([$1($2:$3)])]), m4_ifdef([$1($2)], [m4_defn([$1($2)])]))]) # lt_if_dict_fetch(DICT, KEY, [SUBKEY], VALUE, IF-TRUE, [IF-FALSE]) # ----------------------------------------------------------------- m4_define([lt_if_dict_fetch], [m4_if(lt_dict_fetch([$1], [$2], [$3]), [$4], [$5], [$6])]) # lt_dict_filter(DICT, [SUBKEY], VALUE, [SEPARATOR], KEY, [...]) # -------------------------------------------------------------- m4_define([lt_dict_filter], [m4_if([$5], [], [], [lt_join(m4_quote(m4_default([$4], [[, ]])), lt_unquote(m4_split(m4_normalize(m4_foreach(_Lt_key, lt_car([m4_shiftn(4, $@)]), [lt_if_dict_fetch([$1], _Lt_key, [$2], [$3], [_Lt_key ])])))))])[]dnl ]) ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/ltversion.m4000066400000000000000000000012621346026536000211030ustar00rootroot00000000000000# ltversion.m4 -- version numbers -*- Autoconf -*- # # Copyright (C) 2004 Free Software Foundation, Inc. # Written by Scott James Remnant, 2004 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # @configure_input@ # serial 3337 ltversion.m4 # This file is part of GNU Libtool m4_define([LT_PACKAGE_VERSION], [2.4.2]) m4_define([LT_PACKAGE_REVISION], [1.3337]) AC_DEFUN([LTVERSION_VERSION], [macro_version='2.4.2' macro_revision='1.3337' _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?]) _LT_DECL(, macro_revision, 0) ]) ffc-2019.1.0.post0/libs/gtest-1.7.0/m4/lt~obsolete.m4000066400000000000000000000137561346026536000214430ustar00rootroot00000000000000# lt~obsolete.m4 -- aclocal satisfying obsolete definitions. -*-Autoconf-*- # # Copyright (C) 2004, 2005, 2007, 2009 Free Software Foundation, Inc. # Written by Scott James Remnant, 2004. # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. # serial 5 lt~obsolete.m4 # These exist entirely to fool aclocal when bootstrapping libtool. # # In the past libtool.m4 has provided macros via AC_DEFUN (or AU_DEFUN) # which have later been changed to m4_define as they aren't part of the # exported API, or moved to Autoconf or Automake where they belong. # # The trouble is, aclocal is a bit thick. It'll see the old AC_DEFUN # in /usr/share/aclocal/libtool.m4 and remember it, then when it sees us # using a macro with the same name in our local m4/libtool.m4 it'll # pull the old libtool.m4 in (it doesn't see our shiny new m4_define # and doesn't know about Autoconf macros at all.) # # So we provide this file, which has a silly filename so it's always # included after everything else. This provides aclocal with the # AC_DEFUNs it wants, but when m4 processes it, it doesn't do anything # because those macros already exist, or will be overwritten later. 
# We use AC_DEFUN over AU_DEFUN for compatibility with aclocal-1.6. # # Anytime we withdraw an AC_DEFUN or AU_DEFUN, remember to add it here. # Yes, that means every name once taken will need to remain here until # we give up compatibility with versions before 1.7, at which point # we need to keep only those names which we still refer to. # This is to help aclocal find these macros, as it can't see m4_define. AC_DEFUN([LTOBSOLETE_VERSION], [m4_if([1])]) m4_ifndef([AC_LIBTOOL_LINKER_OPTION], [AC_DEFUN([AC_LIBTOOL_LINKER_OPTION])]) m4_ifndef([AC_PROG_EGREP], [AC_DEFUN([AC_PROG_EGREP])]) m4_ifndef([_LT_AC_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_AC_PROG_ECHO_BACKSLASH])]) m4_ifndef([_LT_AC_SHELL_INIT], [AC_DEFUN([_LT_AC_SHELL_INIT])]) m4_ifndef([_LT_AC_SYS_LIBPATH_AIX], [AC_DEFUN([_LT_AC_SYS_LIBPATH_AIX])]) m4_ifndef([_LT_PROG_LTMAIN], [AC_DEFUN([_LT_PROG_LTMAIN])]) m4_ifndef([_LT_AC_TAGVAR], [AC_DEFUN([_LT_AC_TAGVAR])]) m4_ifndef([AC_LTDL_ENABLE_INSTALL], [AC_DEFUN([AC_LTDL_ENABLE_INSTALL])]) m4_ifndef([AC_LTDL_PREOPEN], [AC_DEFUN([AC_LTDL_PREOPEN])]) m4_ifndef([_LT_AC_SYS_COMPILER], [AC_DEFUN([_LT_AC_SYS_COMPILER])]) m4_ifndef([_LT_AC_LOCK], [AC_DEFUN([_LT_AC_LOCK])]) m4_ifndef([AC_LIBTOOL_SYS_OLD_ARCHIVE], [AC_DEFUN([AC_LIBTOOL_SYS_OLD_ARCHIVE])]) m4_ifndef([_LT_AC_TRY_DLOPEN_SELF], [AC_DEFUN([_LT_AC_TRY_DLOPEN_SELF])]) m4_ifndef([AC_LIBTOOL_PROG_CC_C_O], [AC_DEFUN([AC_LIBTOOL_PROG_CC_C_O])]) m4_ifndef([AC_LIBTOOL_SYS_HARD_LINK_LOCKS], [AC_DEFUN([AC_LIBTOOL_SYS_HARD_LINK_LOCKS])]) m4_ifndef([AC_LIBTOOL_OBJDIR], [AC_DEFUN([AC_LIBTOOL_OBJDIR])]) m4_ifndef([AC_LTDL_OBJDIR], [AC_DEFUN([AC_LTDL_OBJDIR])]) m4_ifndef([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH], [AC_DEFUN([AC_LIBTOOL_PROG_LD_HARDCODE_LIBPATH])]) m4_ifndef([AC_LIBTOOL_SYS_LIB_STRIP], [AC_DEFUN([AC_LIBTOOL_SYS_LIB_STRIP])]) m4_ifndef([AC_PATH_MAGIC], [AC_DEFUN([AC_PATH_MAGIC])]) m4_ifndef([AC_PROG_LD_GNU], [AC_DEFUN([AC_PROG_LD_GNU])]) m4_ifndef([AC_PROG_LD_RELOAD_FLAG], [AC_DEFUN([AC_PROG_LD_RELOAD_FLAG])]) m4_ifndef([AC_DEPLIBS_CHECK_METHOD], [AC_DEFUN([AC_DEPLIBS_CHECK_METHOD])]) m4_ifndef([AC_LIBTOOL_PROG_COMPILER_NO_RTTI], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_NO_RTTI])]) m4_ifndef([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE], [AC_DEFUN([AC_LIBTOOL_SYS_GLOBAL_SYMBOL_PIPE])]) m4_ifndef([AC_LIBTOOL_PROG_COMPILER_PIC], [AC_DEFUN([AC_LIBTOOL_PROG_COMPILER_PIC])]) m4_ifndef([AC_LIBTOOL_PROG_LD_SHLIBS], [AC_DEFUN([AC_LIBTOOL_PROG_LD_SHLIBS])]) m4_ifndef([AC_LIBTOOL_POSTDEP_PREDEP], [AC_DEFUN([AC_LIBTOOL_POSTDEP_PREDEP])]) m4_ifndef([LT_AC_PROG_EGREP], [AC_DEFUN([LT_AC_PROG_EGREP])]) m4_ifndef([LT_AC_PROG_SED], [AC_DEFUN([LT_AC_PROG_SED])]) m4_ifndef([_LT_CC_BASENAME], [AC_DEFUN([_LT_CC_BASENAME])]) m4_ifndef([_LT_COMPILER_BOILERPLATE], [AC_DEFUN([_LT_COMPILER_BOILERPLATE])]) m4_ifndef([_LT_LINKER_BOILERPLATE], [AC_DEFUN([_LT_LINKER_BOILERPLATE])]) m4_ifndef([_AC_PROG_LIBTOOL], [AC_DEFUN([_AC_PROG_LIBTOOL])]) m4_ifndef([AC_LIBTOOL_SETUP], [AC_DEFUN([AC_LIBTOOL_SETUP])]) m4_ifndef([_LT_AC_CHECK_DLFCN], [AC_DEFUN([_LT_AC_CHECK_DLFCN])]) m4_ifndef([AC_LIBTOOL_SYS_DYNAMIC_LINKER], [AC_DEFUN([AC_LIBTOOL_SYS_DYNAMIC_LINKER])]) m4_ifndef([_LT_AC_TAGCONFIG], [AC_DEFUN([_LT_AC_TAGCONFIG])]) m4_ifndef([AC_DISABLE_FAST_INSTALL], [AC_DEFUN([AC_DISABLE_FAST_INSTALL])]) m4_ifndef([_LT_AC_LANG_CXX], [AC_DEFUN([_LT_AC_LANG_CXX])]) m4_ifndef([_LT_AC_LANG_F77], [AC_DEFUN([_LT_AC_LANG_F77])]) m4_ifndef([_LT_AC_LANG_GCJ], [AC_DEFUN([_LT_AC_LANG_GCJ])]) m4_ifndef([AC_LIBTOOL_LANG_C_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_C_CONFIG])]) m4_ifndef([_LT_AC_LANG_C_CONFIG], 
[AC_DEFUN([_LT_AC_LANG_C_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_CXX_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_CXX_CONFIG])]) m4_ifndef([_LT_AC_LANG_CXX_CONFIG], [AC_DEFUN([_LT_AC_LANG_CXX_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_F77_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_F77_CONFIG])]) m4_ifndef([_LT_AC_LANG_F77_CONFIG], [AC_DEFUN([_LT_AC_LANG_F77_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_GCJ_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_GCJ_CONFIG])]) m4_ifndef([_LT_AC_LANG_GCJ_CONFIG], [AC_DEFUN([_LT_AC_LANG_GCJ_CONFIG])]) m4_ifndef([AC_LIBTOOL_LANG_RC_CONFIG], [AC_DEFUN([AC_LIBTOOL_LANG_RC_CONFIG])]) m4_ifndef([_LT_AC_LANG_RC_CONFIG], [AC_DEFUN([_LT_AC_LANG_RC_CONFIG])]) m4_ifndef([AC_LIBTOOL_CONFIG], [AC_DEFUN([AC_LIBTOOL_CONFIG])]) m4_ifndef([_LT_AC_FILE_LTDLL_C], [AC_DEFUN([_LT_AC_FILE_LTDLL_C])]) m4_ifndef([_LT_REQUIRED_DARWIN_CHECKS], [AC_DEFUN([_LT_REQUIRED_DARWIN_CHECKS])]) m4_ifndef([_LT_AC_PROG_CXXCPP], [AC_DEFUN([_LT_AC_PROG_CXXCPP])]) m4_ifndef([_LT_PREPARE_SED_QUOTE_VARS], [AC_DEFUN([_LT_PREPARE_SED_QUOTE_VARS])]) m4_ifndef([_LT_PROG_ECHO_BACKSLASH], [AC_DEFUN([_LT_PROG_ECHO_BACKSLASH])]) m4_ifndef([_LT_PROG_F77], [AC_DEFUN([_LT_PROG_F77])]) m4_ifndef([_LT_PROG_FC], [AC_DEFUN([_LT_PROG_FC])]) m4_ifndef([_LT_PROG_CXX], [AC_DEFUN([_LT_PROG_CXX])]) ffc-2019.1.0.post0/libs/gtest-1.7.0/make/000077500000000000000000000000001346026536000172105ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/make/Makefile000066400000000000000000000053011346026536000206470ustar00rootroot00000000000000# A sample Makefile for building Google Test and using it in user # tests. Please tweak it to suit your environment and project. You # may want to move it to your project's root directory. # # SYNOPSIS: # # make [all] - makes everything. # make TARGET - makes the given target. # make clean - removes all files generated by make. # Please tweak the following variable definitions as needed by your # project, except GTEST_HEADERS, which you can use in your own targets # but shouldn't modify. # Points to the root of Google Test, relative to where this file is. # Remember to tweak this if you move this file. GTEST_DIR = .. # Where to find user code. USER_DIR = ../samples # Flags passed to the preprocessor. # Set Google Test's header directory as a system directory, such that # the compiler doesn't generate warnings in Google Test headers. CPPFLAGS += -isystem $(GTEST_DIR)/include # Flags passed to the C++ compiler. CXXFLAGS += -g -Wall -Wextra -pthread # All tests produced by this Makefile. Remember to add new tests you # created to the list. TESTS = sample1_unittest # All Google Test headers. Usually you shouldn't change this # definition. GTEST_HEADERS = $(GTEST_DIR)/include/gtest/*.h \ $(GTEST_DIR)/include/gtest/internal/*.h # House-keeping build targets. all : $(TESTS) clean : rm -f $(TESTS) gtest.a gtest_main.a *.o # Builds gtest.a and gtest_main.a. # Usually you shouldn't tweak such internal variables, indicated by a # trailing _. GTEST_SRCS_ = $(GTEST_DIR)/src/*.cc $(GTEST_DIR)/src/*.h $(GTEST_HEADERS) # For simplicity and to avoid depending on Google Test's # implementation details, the dependencies specified below are # conservative and not optimized. This is fine as Google Test # compiles fast and for ordinary users its source rarely changes. 
gtest-all.o : $(GTEST_SRCS_) $(CXX) $(CPPFLAGS) -I$(GTEST_DIR) $(CXXFLAGS) -c \ $(GTEST_DIR)/src/gtest-all.cc gtest_main.o : $(GTEST_SRCS_) $(CXX) $(CPPFLAGS) -I$(GTEST_DIR) $(CXXFLAGS) -c \ $(GTEST_DIR)/src/gtest_main.cc gtest.a : gtest-all.o $(AR) $(ARFLAGS) $@ $^ gtest_main.a : gtest-all.o gtest_main.o $(AR) $(ARFLAGS) $@ $^ # Builds a sample test. A test should link with either gtest.a or # gtest_main.a, depending on whether it defines its own main() # function. sample1.o : $(USER_DIR)/sample1.cc $(USER_DIR)/sample1.h $(GTEST_HEADERS) $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(USER_DIR)/sample1.cc sample1_unittest.o : $(USER_DIR)/sample1_unittest.cc \ $(USER_DIR)/sample1.h $(GTEST_HEADERS) $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(USER_DIR)/sample1_unittest.cc sample1_unittest : sample1.o sample1_unittest.o gtest_main.a $(CXX) $(CPPFLAGS) $(CXXFLAGS) -lpthread $^ -o $@ ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/000077500000000000000000000000001346026536000172435ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest-md.sln000077500000000000000000000046301346026536000215130ustar00rootroot00000000000000Microsoft Visual Studio Solution File, Format Version 8.00 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest-md", "gtest-md.vcproj", "{C8F6C172-56F2-4E76-B5FA-C3B423B31BE8}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_main-md", "gtest_main-md.vcproj", "{3AF54C8A-10BF-4332-9147-F68ED9862033}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_prod_test-md", "gtest_prod_test-md.vcproj", "{24848551-EF4F-47E8-9A9D-EA4D49BC3ECB}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_unittest-md", "gtest_unittest-md.vcproj", "{4D9FDFB5-986A-4139-823C-F4EE0ED481A2}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Global GlobalSection(SolutionConfiguration) = preSolution Debug = Debug Release = Release EndGlobalSection GlobalSection(ProjectConfiguration) = postSolution {C8F6C172-56F2-4E76-B5FA-C3B423B31BE8}.Debug.ActiveCfg = Debug|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE8}.Debug.Build.0 = Debug|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE8}.Release.ActiveCfg = Release|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE8}.Release.Build.0 = Release|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862033}.Debug.ActiveCfg = Debug|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862033}.Debug.Build.0 = Debug|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862033}.Release.ActiveCfg = Release|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862033}.Release.Build.0 = Release|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECB}.Debug.ActiveCfg = Debug|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECB}.Debug.Build.0 = Debug|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECB}.Release.ActiveCfg = Release|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECB}.Release.Build.0 = Release|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A2}.Debug.ActiveCfg = Debug|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A2}.Debug.Build.0 = Debug|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A2}.Release.ActiveCfg = Release|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A2}.Release.Build.0 = Release|Win32 EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution EndGlobalSection GlobalSection(ExtensibilityAddIns) = postSolution EndGlobalSection EndGlobal 
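The sample Makefile above builds gtest.a and gtest_main.a and links sample1_unittest against gtest_main.a. As a rough illustration of the distinction its comments draw (a test links gtest.a when it defines its own main(), or gtest_main.a when it does not), here is a minimal hypothetical test source; the file name my_test.cc and the Square function are placeholders for illustration and are not part of this archive:

// my_test.cc -- hypothetical example, not part of the gtest-1.7.0 archive.
// Built and linked in the same pattern as the sample1_unittest rule above:
//   $(CXX) $(CPPFLAGS) $(CXXFLAGS) -lpthread my_test.o gtest_main.a -o my_test
#include "gtest/gtest.h"

// A trivial function under test (placeholder).
static int Square(int x) { return x * x; }

// gtest_main.a supplies main(), so only TEST bodies are needed here.
TEST(SquareTest, HandlesSmallInputs) {
  EXPECT_EQ(0, Square(0));
  EXPECT_EQ(9, Square(3));
}

// If linking against gtest.a instead of gtest_main.a, the test must
// provide its own main(), e.g.:
//
//   int main(int argc, char **argv) {
//     ::testing::InitGoogleTest(&argc, argv);
//     return RUN_ALL_TESTS();
//   }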
ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest-md.vcproj000077500000000000000000000064561346026536000222320ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest.sln000077500000000000000000000046001346026536000211120ustar00rootroot00000000000000Microsoft Visual Studio Solution File, Format Version 8.00 Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest", "gtest.vcproj", "{C8F6C172-56F2-4E76-B5FA-C3B423B31BE7}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_main", "gtest_main.vcproj", "{3AF54C8A-10BF-4332-9147-F68ED9862032}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_unittest", "gtest_unittest.vcproj", "{4D9FDFB5-986A-4139-823C-F4EE0ED481A1}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Project("{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}") = "gtest_prod_test", "gtest_prod_test.vcproj", "{24848551-EF4F-47E8-9A9D-EA4D49BC3ECA}" ProjectSection(ProjectDependencies) = postProject EndProjectSection EndProject Global GlobalSection(SolutionConfiguration) = preSolution Debug = Debug Release = Release EndGlobalSection GlobalSection(ProjectConfiguration) = postSolution {C8F6C172-56F2-4E76-B5FA-C3B423B31BE7}.Debug.ActiveCfg = Debug|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE7}.Debug.Build.0 = Debug|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE7}.Release.ActiveCfg = Release|Win32 {C8F6C172-56F2-4E76-B5FA-C3B423B31BE7}.Release.Build.0 = Release|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862032}.Debug.ActiveCfg = Debug|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862032}.Debug.Build.0 = Debug|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862032}.Release.ActiveCfg = Release|Win32 {3AF54C8A-10BF-4332-9147-F68ED9862032}.Release.Build.0 = Release|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A1}.Debug.ActiveCfg = Debug|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A1}.Debug.Build.0 = Debug|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A1}.Release.ActiveCfg = Release|Win32 {4D9FDFB5-986A-4139-823C-F4EE0ED481A1}.Release.Build.0 = Release|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECA}.Debug.ActiveCfg = Debug|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECA}.Debug.Build.0 = Debug|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECA}.Release.ActiveCfg = Release|Win32 {24848551-EF4F-47E8-9A9D-EA4D49BC3ECA}.Release.Build.0 = Release|Win32 EndGlobalSection GlobalSection(ExtensibilityGlobals) = postSolution EndGlobalSection GlobalSection(ExtensibilityAddIns) = postSolution EndGlobalSection EndGlobal ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest.vcproj000077500000000000000000000064531346026536000216310ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_main-md.vcproj000077500000000000000000000066721346026536000232360ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_main.vcproj000077500000000000000000000066641346026536000226410ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_prod_test-md.vcproj000077500000000000000000000106431346026536000243060ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_prod_test.vcproj000077500000000000000000000106351346026536000237110ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_unittest-md.vcproj000077500000000000000000000076511346026536000241670ustar00rootroot00000000000000 
ffc-2019.1.0.post0/libs/gtest-1.7.0/msvc/gtest_unittest.vcproj000077500000000000000000000076431346026536000235720ustar00rootroot00000000000000 ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/000077500000000000000000000000001346026536000177375ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/prime_tables.h000066400000000000000000000077501346026536000225670ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Author: vladl@google.com (Vlad Losev) // This provides interface PrimeTable that determines whether a number is a // prime and determines a next prime number. This interface is used // in Google Test samples demonstrating use of parameterized tests. #ifndef GTEST_SAMPLES_PRIME_TABLES_H_ #define GTEST_SAMPLES_PRIME_TABLES_H_ #include // The prime table interface. class PrimeTable { public: virtual ~PrimeTable() {} // Returns true iff n is a prime number. virtual bool IsPrime(int n) const = 0; // Returns the smallest prime number greater than p; or returns -1 // if the next prime is beyond the capacity of the table. virtual int GetNextPrime(int p) const = 0; }; // Implementation #1 calculates the primes on-the-fly. class OnTheFlyPrimeTable : public PrimeTable { public: virtual bool IsPrime(int n) const { if (n <= 1) return false; for (int i = 2; i*i <= n; i++) { // n is divisible by an integer other than 1 and itself. if ((n % i) == 0) return false; } return true; } virtual int GetNextPrime(int p) const { for (int n = p + 1; n > 0; n++) { if (IsPrime(n)) return n; } return -1; } }; // Implementation #2 pre-calculates the primes and stores the result // in an array. class PreCalculatedPrimeTable : public PrimeTable { public: // 'max' specifies the maximum number the prime table holds. 
explicit PreCalculatedPrimeTable(int max) : is_prime_size_(max + 1), is_prime_(new bool[max + 1]) { CalculatePrimesUpTo(max); } virtual ~PreCalculatedPrimeTable() { delete[] is_prime_; } virtual bool IsPrime(int n) const { return 0 <= n && n < is_prime_size_ && is_prime_[n]; } virtual int GetNextPrime(int p) const { for (int n = p + 1; n < is_prime_size_; n++) { if (is_prime_[n]) return n; } return -1; } private: void CalculatePrimesUpTo(int max) { ::std::fill(is_prime_, is_prime_ + is_prime_size_, true); is_prime_[0] = is_prime_[1] = false; for (int i = 2; i <= max; i++) { if (!is_prime_[i]) continue; // Marks all multiples of i (except i itself) as non-prime. for (int j = 2*i; j <= max; j += i) { is_prime_[j] = false; } } } const int is_prime_size_; bool* const is_prime_; // Disables compiler warning "assignment operator could not be generated." void operator=(const PreCalculatedPrimeTable& rhs); }; #endif // GTEST_SAMPLES_PRIME_TABLES_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample1.cc000066400000000000000000000047061346026536000216170ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #include "sample1.h" // Returns n! (the factorial of n). For negative n, n! is defined to be 1. int Factorial(int n) { int result = 1; for (int i = 1; i <= n; i++) { result *= i; } return result; } // Returns true iff n is a prime number. bool IsPrime(int n) { // Trivial case 1: small numbers if (n <= 1) return false; // Trivial case 2: even numbers if (n % 2 == 0) return n == 2; // Now, we have that n is odd and n >= 3. // Try to divide n by every odd number i, starting from 3 for (int i = 3; ; i += 2) { // We only have to try i up to the squre root of n if (i > n/i) break; // Now, we have i <= n/i < n. // If n is divisible by i, n is not prime. if (n % i == 0) return false; } // n has no integer factor in the range (1, n), and thus is prime. 
return true; } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample1.h000066400000000000000000000036211346026536000214540ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_SAMPLES_SAMPLE1_H_ #define GTEST_SAMPLES_SAMPLE1_H_ // Returns n! (the factorial of n). For negative n, n! is defined to be 1. int Factorial(int n); // Returns true iff n is a prime number. bool IsPrime(int n); #endif // GTEST_SAMPLES_SAMPLE1_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample10_unittest.cc000066400000000000000000000116751346026536000236410ustar00rootroot00000000000000// Copyright 2009 Google Inc. All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // This sample shows how to use Google Test listener API to implement // a primitive leak checker. #include #include #include "gtest/gtest.h" using ::testing::EmptyTestEventListener; using ::testing::InitGoogleTest; using ::testing::Test; using ::testing::TestCase; using ::testing::TestEventListeners; using ::testing::TestInfo; using ::testing::TestPartResult; using ::testing::UnitTest; namespace { // We will track memory used by this class. class Water { public: // Normal Water declarations go here. // operator new and operator delete help us control water allocation. void* operator new(size_t allocation_size) { allocated_++; return malloc(allocation_size); } void operator delete(void* block, size_t /* allocation_size */) { allocated_--; free(block); } static int allocated() { return allocated_; } private: static int allocated_; }; int Water::allocated_ = 0; // This event listener monitors how many Water objects are created and // destroyed by each test, and reports a failure if a test leaks some Water // objects. It does this by comparing the number of live Water objects at // the beginning of a test and at the end of a test. class LeakChecker : public EmptyTestEventListener { private: // Called before a test starts. virtual void OnTestStart(const TestInfo& /* test_info */) { initially_allocated_ = Water::allocated(); } // Called after a test ends. virtual void OnTestEnd(const TestInfo& /* test_info */) { int difference = Water::allocated() - initially_allocated_; // You can generate a failure in any event handler except // OnTestPartResult. Just use an appropriate Google Test assertion to do // it. EXPECT_LE(difference, 0) << "Leaked " << difference << " unit(s) of Water!"; } int initially_allocated_; }; TEST(ListenersTest, DoesNotLeak) { Water* water = new Water; delete water; } // This should fail when the --check_for_leaks command line flag is // specified. TEST(ListenersTest, LeaksWater) { Water* water = new Water; EXPECT_TRUE(water != NULL); } } // namespace int main(int argc, char **argv) { InitGoogleTest(&argc, argv); bool check_for_leaks = false; if (argc > 1 && strcmp(argv[1], "--check_for_leaks") == 0 ) check_for_leaks = true; else printf("%s\n", "Run this program with --check_for_leaks to enable " "custom leak checking in the tests."); // If we are given the --check_for_leaks command line flag, installs the // leak checker. if (check_for_leaks) { TestEventListeners& listeners = UnitTest::GetInstance()->listeners(); // Adds the leak checker to the end of the test event listener list, // after the default text output printer and the default XML report // generator. // // The order is important - it ensures that failures generated in the // leak checker's OnTestEnd() method are processed by the text and XML // printers *before* their OnTestEnd() methods are called, such that // they are attributed to the right test. 
Remember that a listener // receives an OnXyzStart event *after* listeners preceding it in the // list received that event, and receives an OnXyzEnd event *before* // listeners preceding it. // // We don't need to worry about deleting the new listener later, as // Google Test will do it. listeners.Append(new LeakChecker); } return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample1_unittest.cc000066400000000000000000000120111346026536000235420ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) // This sample shows how to write a simple unit test for a function, // using Google C++ testing framework. // // Writing a unit test using Google C++ testing framework is easy as 1-2-3: // Step 1. Include necessary header files such that the stuff your // test logic needs is declared. // // Don't forget gtest.h, which declares the testing framework. #include #include "sample1.h" #include "gtest/gtest.h" // Step 2. Use the TEST macro to define your tests. // // TEST has two parameters: the test case name and the test name. // After using the macro, you should define your test logic between a // pair of braces. You can use a bunch of macros to indicate the // success or failure of a test. EXPECT_TRUE and EXPECT_EQ are // examples of such macros. For a complete list, see gtest.h. // // // // In Google Test, tests are grouped into test cases. This is how we // keep test code organized. You should put logically related tests // into the same test case. // // The test case name and the test name should both be valid C++ // identifiers. And you should not use underscore (_) in the names. // // Google Test guarantees that each test you define is run exactly // once, but it makes no guarantee on the order the tests are // executed. Therefore, you should write your tests in such a way // that their results don't depend on their order. // // // Tests Factorial(). 
// Tests factorial of negative numbers. TEST(FactorialTest, Negative) { // This test is named "Negative", and belongs to the "FactorialTest" // test case. EXPECT_EQ(1, Factorial(-5)); EXPECT_EQ(1, Factorial(-1)); EXPECT_GT(Factorial(-10), 0); // // // EXPECT_EQ(expected, actual) is the same as // // EXPECT_TRUE((expected) == (actual)) // // except that it will print both the expected value and the actual // value when the assertion fails. This is very helpful for // debugging. Therefore in this case EXPECT_EQ is preferred. // // On the other hand, EXPECT_TRUE accepts any Boolean expression, // and is thus more general. // // } // Tests factorial of 0. TEST(FactorialTest, Zero) { EXPECT_EQ(1, Factorial(0)); } // Tests factorial of positive numbers. TEST(FactorialTest, Positive) { EXPECT_EQ(1, Factorial(1)); EXPECT_EQ(2, Factorial(2)); EXPECT_EQ(6, Factorial(3)); EXPECT_EQ(40320, Factorial(8)); } // Tests IsPrime() // Tests negative input. TEST(IsPrimeTest, Negative) { // This test belongs to the IsPrimeTest test case. EXPECT_FALSE(IsPrime(-1)); EXPECT_FALSE(IsPrime(-2)); EXPECT_FALSE(IsPrime(INT_MIN)); } // Tests some trivial cases. TEST(IsPrimeTest, Trivial) { EXPECT_FALSE(IsPrime(0)); EXPECT_FALSE(IsPrime(1)); EXPECT_TRUE(IsPrime(2)); EXPECT_TRUE(IsPrime(3)); } // Tests positive input. TEST(IsPrimeTest, Positive) { EXPECT_FALSE(IsPrime(4)); EXPECT_TRUE(IsPrime(5)); EXPECT_FALSE(IsPrime(6)); EXPECT_TRUE(IsPrime(23)); } // Step 3. Call RUN_ALL_TESTS() in main(). // // We do this by linking in src/gtest_main.cc file, which consists of // a main() function which calls RUN_ALL_TESTS() for us. // // This runs all the tests you've defined, prints the result, and // returns 0 if successful, or 1 otherwise. // // Did you notice that we didn't register the tests? The // RUN_ALL_TESTS() macro magically knows about all the tests we // defined. Isn't this convenient? ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample2.cc000066400000000000000000000043721346026536000216170ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #include "sample2.h" #include // Clones a 0-terminated C string, allocating memory using new. const char* MyString::CloneCString(const char* a_c_string) { if (a_c_string == NULL) return NULL; const size_t len = strlen(a_c_string); char* const clone = new char[ len + 1 ]; memcpy(clone, a_c_string, len + 1); return clone; } // Sets the 0-terminated C string this MyString object // represents. void MyString::Set(const char* a_c_string) { // Makes sure this works when c_string == c_string_ const char* const temp = MyString::CloneCString(a_c_string); delete[] c_string_; c_string_ = temp; } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample2.h000066400000000000000000000056761346026536000214710ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_SAMPLES_SAMPLE2_H_ #define GTEST_SAMPLES_SAMPLE2_H_ #include // A simple string class. class MyString { private: const char* c_string_; const MyString& operator=(const MyString& rhs); public: // Clones a 0-terminated C string, allocating memory using new. static const char* CloneCString(const char* a_c_string); //////////////////////////////////////////////////////////// // // C'tors // The default c'tor constructs a NULL string. 
MyString() : c_string_(NULL) {} // Constructs a MyString by cloning a 0-terminated C string. explicit MyString(const char* a_c_string) : c_string_(NULL) { Set(a_c_string); } // Copy c'tor MyString(const MyString& string) : c_string_(NULL) { Set(string.c_string_); } //////////////////////////////////////////////////////////// // // D'tor. MyString is intended to be a final class, so the d'tor // doesn't need to be virtual. ~MyString() { delete[] c_string_; } // Gets the 0-terminated C string this MyString object represents. const char* c_string() const { return c_string_; } size_t Length() const { return c_string_ == NULL ? 0 : strlen(c_string_); } // Sets the 0-terminated C string this MyString object represents. void Set(const char* c_string); }; #endif // GTEST_SAMPLES_SAMPLE2_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample2_unittest.cc000066400000000000000000000075261346026536000235620ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) // This sample shows how to write a more complex unit test for a class // that has multiple member functions. // // Usually, it's a good idea to have one test for each method in your // class. You don't have to do that exactly, but it helps to keep // your tests organized. You may also throw in additional tests as // needed. #include "sample2.h" #include "gtest/gtest.h" // In this example, we test the MyString class (a simple string). // Tests the default c'tor. TEST(MyString, DefaultConstructor) { const MyString s; // Asserts that s.c_string() returns NULL. // // // // If we write NULL instead of // // static_cast(NULL) // // in this assertion, it will generate a warning on gcc 3.4. The // reason is that EXPECT_EQ needs to know the types of its // arguments in order to print them when it fails. Since NULL is // #defined as 0, the compiler will use the formatter function for // int to print it. 
However, gcc thinks that NULL should be used as // a pointer, not an int, and therefore complains. // // The root of the problem is C++'s lack of distinction between the // integer number 0 and the null pointer constant. Unfortunately, // we have to live with this fact. // // EXPECT_STREQ(NULL, s.c_string()); EXPECT_EQ(0u, s.Length()); } const char kHelloString[] = "Hello, world!"; // Tests the c'tor that accepts a C string. TEST(MyString, ConstructorFromCString) { const MyString s(kHelloString); EXPECT_EQ(0, strcmp(s.c_string(), kHelloString)); EXPECT_EQ(sizeof(kHelloString)/sizeof(kHelloString[0]) - 1, s.Length()); } // Tests the copy c'tor. TEST(MyString, CopyConstructor) { const MyString s1(kHelloString); const MyString s2 = s1; EXPECT_EQ(0, strcmp(s2.c_string(), kHelloString)); } // Tests the Set method. TEST(MyString, Set) { MyString s; s.Set(kHelloString); EXPECT_EQ(0, strcmp(s.c_string(), kHelloString)); // Set should work when the input pointer is the same as the one // already in the MyString object. s.Set(s.c_string()); EXPECT_EQ(0, strcmp(s.c_string(), kHelloString)); // Can we set the MyString to NULL? s.Set(NULL); EXPECT_STREQ(NULL, s.c_string()); } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample3-inl.h000066400000000000000000000123651346026536000222430ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_SAMPLES_SAMPLE3_INL_H_ #define GTEST_SAMPLES_SAMPLE3_INL_H_ #include // Queue is a simple queue implemented as a singled-linked list. // // The element type must support copy constructor. template // E is the element type class Queue; // QueueNode is a node in a Queue, which consists of an element of // type E and a pointer to the next node. template // E is the element type class QueueNode { friend class Queue; public: // Gets the element in this node. const E& element() const { return element_; } // Gets the next node in the queue. 
QueueNode* next() { return next_; } const QueueNode* next() const { return next_; } private: // Creates a node with a given element value. The next pointer is // set to NULL. explicit QueueNode(const E& an_element) : element_(an_element), next_(NULL) {} // We disable the default assignment operator and copy c'tor. const QueueNode& operator = (const QueueNode&); QueueNode(const QueueNode&); E element_; QueueNode* next_; }; template <typename E> // E is the element type. class Queue { public: // Creates an empty queue. Queue() : head_(NULL), last_(NULL), size_(0) {} // D'tor. Clears the queue. ~Queue() { Clear(); } // Clears the queue. void Clear() { if (size_ > 0) { // 1. Deletes every node. QueueNode<E>* node = head_; QueueNode<E>* next = node->next(); for (; ;) { delete node; node = next; if (node == NULL) break; next = node->next(); } // 2. Resets the member variables. head_ = last_ = NULL; size_ = 0; } } // Gets the number of elements. size_t Size() const { return size_; } // Gets the first element of the queue, or NULL if the queue is empty. QueueNode<E>* Head() { return head_; } const QueueNode<E>* Head() const { return head_; } // Gets the last element of the queue, or NULL if the queue is empty. QueueNode<E>* Last() { return last_; } const QueueNode<E>* Last() const { return last_; } // Adds an element to the end of the queue. A copy of the element is // created using the copy constructor, and then stored in the queue. // Changes made to the element in the queue doesn't affect the source // object, and vice versa. void Enqueue(const E& element) { QueueNode<E>* new_node = new QueueNode<E>(element); if (size_ == 0) { head_ = last_ = new_node; size_ = 1; } else { last_->next_ = new_node; last_ = new_node; size_++; } } // Removes the head of the queue and returns it. Returns NULL if // the queue is empty. E* Dequeue() { if (size_ == 0) { return NULL; } const QueueNode<E>* const old_head = head_; head_ = head_->next_; size_--; if (size_ == 0) { last_ = NULL; } E* element = new E(old_head->element()); delete old_head; return element; } // Applies a function/functor on each element of the queue, and // returns the result in a new queue. The original queue is not // affected. template <typename F> Queue* Map(F function) const { Queue* new_queue = new Queue(); for (const QueueNode<E>* node = head_; node != NULL; node = node->next_) { new_queue->Enqueue(function(node->element())); } return new_queue; } private: QueueNode<E>* head_; // The first node of the queue. QueueNode<E>* last_; // The last node of the queue. size_t size_; // The number of elements in the queue. // We disallow copying a queue. Queue(const Queue&); const Queue& operator = (const Queue&); }; #endif // GTEST_SAMPLES_SAMPLE3_INL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample3_unittest.cc000066400000000000000000000123471346026536000235600ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc.
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) // In this example, we use a more advanced feature of Google Test called // test fixture. // // A test fixture is a place to hold objects and functions shared by // all tests in a test case. Using a test fixture avoids duplicating // the test code necessary to initialize and cleanup those common // objects for each test. It is also useful for defining sub-routines // that your tests need to invoke a lot. // // // // The tests share the test fixture in the sense of code sharing, not // data sharing. Each test is given its own fresh copy of the // fixture. You cannot expect the data modified by one test to be // passed on to another test, which is a bad idea. // // The reason for this design is that tests should be independent and // repeatable. In particular, a test should not fail as the result of // another test's failure. If one test depends on info produced by // another test, then the two tests should really be one big test. // // The macros for indicating the success/failure of a test // (EXPECT_TRUE, FAIL, etc) need to know what the current test is // (when Google Test prints the test result, it tells you which test // each failure belongs to). Technically, these macros invoke a // member function of the Test class. Therefore, you cannot use them // in a global function. That's why you should put test sub-routines // in a test fixture. // // #include "sample3-inl.h" #include "gtest/gtest.h" // To use a test fixture, derive a class from testing::Test. class QueueTest : public testing::Test { protected: // You should make the members protected s.t. they can be // accessed from sub-classes. // virtual void SetUp() will be called before each test is run. You // should define it if you need to initialize the varaibles. // Otherwise, this can be skipped. virtual void SetUp() { q1_.Enqueue(1); q2_.Enqueue(2); q2_.Enqueue(3); } // virtual void TearDown() will be called after each test is run. // You should define it if there is cleanup work to do. Otherwise, // you don't have to provide it. // // virtual void TearDown() { // } // A helper function that some test uses. static int Double(int n) { return 2*n; } // A helper function for testing Queue::Map(). void MapTester(const Queue * q) { // Creates a new queue, where each element is twice as big as the // corresponding one in q. const Queue * const new_q = q->Map(Double); // Verifies that the new queue has the same size as q. 
ASSERT_EQ(q->Size(), new_q->Size()); // Verifies the relationship between the elements of the two queues. for ( const QueueNode * n1 = q->Head(), * n2 = new_q->Head(); n1 != NULL; n1 = n1->next(), n2 = n2->next() ) { EXPECT_EQ(2 * n1->element(), n2->element()); } delete new_q; } // Declares the variables your tests want to use. Queue q0_; Queue q1_; Queue q2_; }; // When you have a test fixture, you define a test using TEST_F // instead of TEST. // Tests the default c'tor. TEST_F(QueueTest, DefaultConstructor) { // You can access data in the test fixture here. EXPECT_EQ(0u, q0_.Size()); } // Tests Dequeue(). TEST_F(QueueTest, Dequeue) { int * n = q0_.Dequeue(); EXPECT_TRUE(n == NULL); n = q1_.Dequeue(); ASSERT_TRUE(n != NULL); EXPECT_EQ(1, *n); EXPECT_EQ(0u, q1_.Size()); delete n; n = q2_.Dequeue(); ASSERT_TRUE(n != NULL); EXPECT_EQ(2, *n); EXPECT_EQ(1u, q2_.Size()); delete n; } // Tests the Queue::Map() function. TEST_F(QueueTest, Map) { MapTester(&q0_); MapTester(&q1_); MapTester(&q2_); } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample4.cc000066400000000000000000000036071346026536000216210ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #include #include "sample4.h" // Returns the current counter value, and increments it. int Counter::Increment() { return counter_++; } // Prints the current counter value to STDOUT. void Counter::Print() const { printf("%d", counter_); } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample4.h000066400000000000000000000040431346026536000214560ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // A sample program demonstrating using Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_SAMPLES_SAMPLE4_H_ #define GTEST_SAMPLES_SAMPLE4_H_ // A simple monotonic counter. class Counter { private: int counter_; public: // Creates a counter that starts at 0. Counter() : counter_(0) {} // Returns the current counter value, and increments it. int Increment(); // Prints the current counter value to STDOUT. void Print() const; }; #endif // GTEST_SAMPLES_SAMPLE4_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample4_unittest.cc000066400000000000000000000035651346026536000235630ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest.h" #include "sample4.h" // Tests the Increment() method. 
TEST(Counter, Increment) { Counter c; // EXPECT_EQ() evaluates its arguments exactly once, so they // can have side effects. EXPECT_EQ(0, c.Increment()); EXPECT_EQ(1, c.Increment()); EXPECT_EQ(2, c.Increment()); } ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample5_unittest.cc000066400000000000000000000146761346026536000235710ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // This sample teaches how to reuse a test fixture in multiple test // cases by deriving sub-fixtures from it. // // When you define a test fixture, you specify the name of the test // case that will use this fixture. Therefore, a test fixture can // be used by only one test case. // // Sometimes, more than one test cases may want to use the same or // slightly different test fixtures. For example, you may want to // make sure that all tests for a GUI library don't leak important // system resources like fonts and brushes. In Google Test, you do // this by putting the shared logic in a super (as in "super class") // test fixture, and then have each test case use a fixture derived // from this super fixture. #include #include #include "sample3-inl.h" #include "gtest/gtest.h" #include "sample1.h" // In this sample, we want to ensure that every test finishes within // ~5 seconds. If a test takes longer to run, we consider it a // failure. // // We put the code for timing a test in a test fixture called // "QuickTest". QuickTest is intended to be the super fixture that // other fixtures derive from, therefore there is no test case with // the name "QuickTest". This is OK. // // Later, we will derive multiple test fixtures from QuickTest. class QuickTest : public testing::Test { protected: // Remember that SetUp() is run immediately before a test starts. // This is a good place to record the start time. virtual void SetUp() { start_time_ = time(NULL); } // TearDown() is invoked immediately after a test finishes. Here we // check if the test was too slow. 
virtual void TearDown() { // Gets the time when the test finishes const time_t end_time = time(NULL); // Asserts that the test took no more than ~5 seconds. Did you // know that you can use assertions in SetUp() and TearDown() as // well? EXPECT_TRUE(end_time - start_time_ <= 5) << "The test took too long."; } // The UTC time (in seconds) when the test starts time_t start_time_; }; // We derive a fixture named IntegerFunctionTest from the QuickTest // fixture. All tests using this fixture will be automatically // required to be quick. class IntegerFunctionTest : public QuickTest { // We don't need any more logic than already in the QuickTest fixture. // Therefore the body is empty. }; // Now we can write tests in the IntegerFunctionTest test case. // Tests Factorial() TEST_F(IntegerFunctionTest, Factorial) { // Tests factorial of negative numbers. EXPECT_EQ(1, Factorial(-5)); EXPECT_EQ(1, Factorial(-1)); EXPECT_GT(Factorial(-10), 0); // Tests factorial of 0. EXPECT_EQ(1, Factorial(0)); // Tests factorial of positive numbers. EXPECT_EQ(1, Factorial(1)); EXPECT_EQ(2, Factorial(2)); EXPECT_EQ(6, Factorial(3)); EXPECT_EQ(40320, Factorial(8)); } // Tests IsPrime() TEST_F(IntegerFunctionTest, IsPrime) { // Tests negative input. EXPECT_FALSE(IsPrime(-1)); EXPECT_FALSE(IsPrime(-2)); EXPECT_FALSE(IsPrime(INT_MIN)); // Tests some trivial cases. EXPECT_FALSE(IsPrime(0)); EXPECT_FALSE(IsPrime(1)); EXPECT_TRUE(IsPrime(2)); EXPECT_TRUE(IsPrime(3)); // Tests positive input. EXPECT_FALSE(IsPrime(4)); EXPECT_TRUE(IsPrime(5)); EXPECT_FALSE(IsPrime(6)); EXPECT_TRUE(IsPrime(23)); } // The next test case (named "QueueTest") also needs to be quick, so // we derive another fixture from QuickTest. // // The QueueTest test fixture has some logic and shared objects in // addition to what's in QuickTest already. We define the additional // stuff inside the body of the test fixture, as usual. class QueueTest : public QuickTest { protected: virtual void SetUp() { // First, we need to set up the super fixture (QuickTest). QuickTest::SetUp(); // Second, some additional setup for this fixture. q1_.Enqueue(1); q2_.Enqueue(2); q2_.Enqueue(3); } // By default, TearDown() inherits the behavior of // QuickTest::TearDown(). As we have no additional cleaning work // for QueueTest, we omit it here. // // virtual void TearDown() { // QuickTest::TearDown(); // } Queue q0_; Queue q1_; Queue q2_; }; // Now, let's write tests using the QueueTest fixture. // Tests the default constructor. TEST_F(QueueTest, DefaultConstructor) { EXPECT_EQ(0u, q0_.Size()); } // Tests Dequeue(). TEST_F(QueueTest, Dequeue) { int* n = q0_.Dequeue(); EXPECT_TRUE(n == NULL); n = q1_.Dequeue(); EXPECT_TRUE(n != NULL); EXPECT_EQ(1, *n); EXPECT_EQ(0u, q1_.Size()); delete n; n = q2_.Dequeue(); EXPECT_TRUE(n != NULL); EXPECT_EQ(2, *n); EXPECT_EQ(1u, q2_.Size()); delete n; } // If necessary, you can derive further test fixtures from a derived // fixture itself. For example, you can derive another fixture from // QueueTest. Google Test imposes no limit on how deep the hierarchy // can be. In practice, however, you probably don't want it to be too // deep as to be confusing. ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample6_unittest.cc000066400000000000000000000214351346026536000235610ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // This sample shows how to test common properties of multiple // implementations of the same interface (aka interface tests). // The interface and its implementations are in this header. #include "prime_tables.h" #include "gtest/gtest.h" // First, we define some factory functions for creating instances of // the implementations. You may be able to skip this step if all your // implementations can be constructed the same way. template PrimeTable* CreatePrimeTable(); template <> PrimeTable* CreatePrimeTable() { return new OnTheFlyPrimeTable; } template <> PrimeTable* CreatePrimeTable() { return new PreCalculatedPrimeTable(10000); } // Then we define a test fixture class template. template class PrimeTableTest : public testing::Test { protected: // The ctor calls the factory function to create a prime table // implemented by T. PrimeTableTest() : table_(CreatePrimeTable()) {} virtual ~PrimeTableTest() { delete table_; } // Note that we test an implementation via the base interface // instead of the actual implementation class. This is important // for keeping the tests close to the real world scenario, where the // implementation is invoked via the base interface. It avoids // got-yas where the implementation class has a method that shadows // a method with the same name (but slightly different argument // types) in the base interface, for example. PrimeTable* const table_; }; #if GTEST_HAS_TYPED_TEST using testing::Types; // Google Test offers two ways for reusing tests for different types. // The first is called "typed tests". You should use it if you // already know *all* the types you are gonna exercise when you write // the tests. // To write a typed test case, first use // // TYPED_TEST_CASE(TestCaseName, TypeList); // // to declare it and specify the type parameters. As with TEST_F, // TestCaseName must match the test fixture name. // The list of types we want to test. 
typedef Types<OnTheFlyPrimeTable, PreCalculatedPrimeTable> Implementations; TYPED_TEST_CASE(PrimeTableTest, Implementations); // Then use TYPED_TEST(TestCaseName, TestName) to define a typed test, // similar to TEST_F. TYPED_TEST(PrimeTableTest, ReturnsFalseForNonPrimes) { // Inside the test body, you can refer to the type parameter by // TypeParam, and refer to the fixture class by TestFixture. We // don't need them in this example. // Since we are in the template world, C++ requires explicitly // writing 'this->' when referring to members of the fixture class. // This is something you have to learn to live with. EXPECT_FALSE(this->table_->IsPrime(-5)); EXPECT_FALSE(this->table_->IsPrime(0)); EXPECT_FALSE(this->table_->IsPrime(1)); EXPECT_FALSE(this->table_->IsPrime(4)); EXPECT_FALSE(this->table_->IsPrime(6)); EXPECT_FALSE(this->table_->IsPrime(100)); } TYPED_TEST(PrimeTableTest, ReturnsTrueForPrimes) { EXPECT_TRUE(this->table_->IsPrime(2)); EXPECT_TRUE(this->table_->IsPrime(3)); EXPECT_TRUE(this->table_->IsPrime(5)); EXPECT_TRUE(this->table_->IsPrime(7)); EXPECT_TRUE(this->table_->IsPrime(11)); EXPECT_TRUE(this->table_->IsPrime(131)); } TYPED_TEST(PrimeTableTest, CanGetNextPrime) { EXPECT_EQ(2, this->table_->GetNextPrime(0)); EXPECT_EQ(3, this->table_->GetNextPrime(2)); EXPECT_EQ(5, this->table_->GetNextPrime(3)); EXPECT_EQ(7, this->table_->GetNextPrime(5)); EXPECT_EQ(11, this->table_->GetNextPrime(7)); EXPECT_EQ(131, this->table_->GetNextPrime(128)); } // That's it! Google Test will repeat each TYPED_TEST for each type // in the type list specified in TYPED_TEST_CASE. Sit back and be // happy that you don't have to define them multiple times. #endif // GTEST_HAS_TYPED_TEST #if GTEST_HAS_TYPED_TEST_P using testing::Types; // Sometimes, however, you don't yet know all the types that you want // to test when you write the tests. For example, if you are the // author of an interface and expect other people to implement it, you // might want to write a set of tests to make sure each implementation // conforms to some basic requirements, but you don't know what // implementations will be written in the future. // // How can you write the tests without committing to the type // parameters? That's what "type-parameterized tests" can do for you. // It is a bit more involved than typed tests, but in return you get a // test pattern that can be reused in many contexts, which is a big // win. Here's how you do it: // First, define a test fixture class template. Here we just reuse // the PrimeTableTest fixture defined earlier: template <class T> class PrimeTableTest2 : public PrimeTableTest<T> { }; // Then, declare the test case. The argument is the name of the test // fixture, and also the name of the test case (as usual). The _P // suffix is for "parameterized" or "pattern". TYPED_TEST_CASE_P(PrimeTableTest2); // Next, use TYPED_TEST_P(TestCaseName, TestName) to define a test, // similar to what you do with TEST_F.
TYPED_TEST_P(PrimeTableTest2, ReturnsFalseForNonPrimes) { EXPECT_FALSE(this->table_->IsPrime(-5)); EXPECT_FALSE(this->table_->IsPrime(0)); EXPECT_FALSE(this->table_->IsPrime(1)); EXPECT_FALSE(this->table_->IsPrime(4)); EXPECT_FALSE(this->table_->IsPrime(6)); EXPECT_FALSE(this->table_->IsPrime(100)); } TYPED_TEST_P(PrimeTableTest2, ReturnsTrueForPrimes) { EXPECT_TRUE(this->table_->IsPrime(2)); EXPECT_TRUE(this->table_->IsPrime(3)); EXPECT_TRUE(this->table_->IsPrime(5)); EXPECT_TRUE(this->table_->IsPrime(7)); EXPECT_TRUE(this->table_->IsPrime(11)); EXPECT_TRUE(this->table_->IsPrime(131)); } TYPED_TEST_P(PrimeTableTest2, CanGetNextPrime) { EXPECT_EQ(2, this->table_->GetNextPrime(0)); EXPECT_EQ(3, this->table_->GetNextPrime(2)); EXPECT_EQ(5, this->table_->GetNextPrime(3)); EXPECT_EQ(7, this->table_->GetNextPrime(5)); EXPECT_EQ(11, this->table_->GetNextPrime(7)); EXPECT_EQ(131, this->table_->GetNextPrime(128)); } // Type-parameterized tests involve one extra step: you have to // enumerate the tests you defined: REGISTER_TYPED_TEST_CASE_P( PrimeTableTest2, // The first argument is the test case name. // The rest of the arguments are the test names. ReturnsFalseForNonPrimes, ReturnsTrueForPrimes, CanGetNextPrime); // At this point the test pattern is done. However, you don't have // any real test yet as you haven't said which types you want to run // the tests with. // To turn the abstract test pattern into real tests, you instantiate // it with a list of types. Usually the test pattern will be defined // in a .h file, and anyone can #include and instantiate it. You can // even instantiate it more than once in the same program. To tell // different instances apart, you give each of them a name, which will // become part of the test case name and can be used in test filters. // The list of types we want to test. Note that it doesn't have to be // defined at the time we write the TYPED_TEST_P()s. typedef Types PrimeTableImplementations; INSTANTIATE_TYPED_TEST_CASE_P(OnTheFlyAndPreCalculated, // Instance name PrimeTableTest2, // Test case name PrimeTableImplementations); // Type list #endif // GTEST_HAS_TYPED_TEST_P ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample7_unittest.cc000066400000000000000000000117531346026536000235640ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // This sample shows how to test common properties of multiple // implementations of an interface (aka interface tests) using // value-parameterized tests. Each test in the test case has // a parameter that is an interface pointer to an implementation // tested. // The interface and its implementations are in this header. #include "prime_tables.h" #include "gtest/gtest.h" #if GTEST_HAS_PARAM_TEST using ::testing::TestWithParam; using ::testing::Values; // As a general rule, to prevent a test from affecting the tests that come // after it, you should create and destroy the tested objects for each test // instead of reusing them. In this sample we will define a simple factory // function for PrimeTable objects. We will instantiate objects in test's // SetUp() method and delete them in TearDown() method. typedef PrimeTable* CreatePrimeTableFunc(); PrimeTable* CreateOnTheFlyPrimeTable() { return new OnTheFlyPrimeTable(); } template PrimeTable* CreatePreCalculatedPrimeTable() { return new PreCalculatedPrimeTable(max_precalculated); } // Inside the test body, fixture constructor, SetUp(), and TearDown() you // can refer to the test parameter by GetParam(). In this case, the test // parameter is a factory function which we call in fixture's SetUp() to // create and store an instance of PrimeTable. class PrimeTableTest : public TestWithParam { public: virtual ~PrimeTableTest() { delete table_; } virtual void SetUp() { table_ = (*GetParam())(); } virtual void TearDown() { delete table_; table_ = NULL; } protected: PrimeTable* table_; }; TEST_P(PrimeTableTest, ReturnsFalseForNonPrimes) { EXPECT_FALSE(table_->IsPrime(-5)); EXPECT_FALSE(table_->IsPrime(0)); EXPECT_FALSE(table_->IsPrime(1)); EXPECT_FALSE(table_->IsPrime(4)); EXPECT_FALSE(table_->IsPrime(6)); EXPECT_FALSE(table_->IsPrime(100)); } TEST_P(PrimeTableTest, ReturnsTrueForPrimes) { EXPECT_TRUE(table_->IsPrime(2)); EXPECT_TRUE(table_->IsPrime(3)); EXPECT_TRUE(table_->IsPrime(5)); EXPECT_TRUE(table_->IsPrime(7)); EXPECT_TRUE(table_->IsPrime(11)); EXPECT_TRUE(table_->IsPrime(131)); } TEST_P(PrimeTableTest, CanGetNextPrime) { EXPECT_EQ(2, table_->GetNextPrime(0)); EXPECT_EQ(3, table_->GetNextPrime(2)); EXPECT_EQ(5, table_->GetNextPrime(3)); EXPECT_EQ(7, table_->GetNextPrime(5)); EXPECT_EQ(11, table_->GetNextPrime(7)); EXPECT_EQ(131, table_->GetNextPrime(128)); } // In order to run value-parameterized tests, you need to instantiate them, // or bind them to a list of values which will be used as test parameters. // You can instantiate them in a different translation module, or even // instantiate them several times. // // Here, we instantiate our tests with a list of two PrimeTable object // factory functions: INSTANTIATE_TEST_CASE_P( OnTheFlyAndPreCalculated, PrimeTableTest, Values(&CreateOnTheFlyPrimeTable, &CreatePreCalculatedPrimeTable<1000>)); #else // Google Test may not support value-parameterized tests with some // compilers. 
If we use conditional compilation to compile out all // code referring to the gtest_main library, MSVC linker will not link // that library at all and consequently complain about missing entry // point defined in that library (fatal error LNK1561: entry point // must be defined). This dummy test keeps gtest_main linked in. TEST(DummyTest, ValueParameterizedTestsAreNotSupportedOnThisPlatform) {} #endif // GTEST_HAS_PARAM_TEST ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample8_unittest.cc000066400000000000000000000154401346026536000235620ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // This sample shows how to test code relying on some global flag variables. // Combine() helps with generating all possible combinations of such flags, // and each test is given one combination as a parameter. // Use class definitions to test from this header. #include "prime_tables.h" #include "gtest/gtest.h" #if GTEST_HAS_COMBINE // Suppose we want to introduce a new, improved implementation of PrimeTable // which combines speed of PrecalcPrimeTable and versatility of // OnTheFlyPrimeTable (see prime_tables.h). Inside it instantiates both // PrecalcPrimeTable and OnTheFlyPrimeTable and uses the one that is more // appropriate under the circumstances. But in low memory conditions, it can be // told to instantiate without PrecalcPrimeTable instance at all and use only // OnTheFlyPrimeTable. class HybridPrimeTable : public PrimeTable { public: HybridPrimeTable(bool force_on_the_fly, int max_precalculated) : on_the_fly_impl_(new OnTheFlyPrimeTable), precalc_impl_(force_on_the_fly ? 
NULL : new PreCalculatedPrimeTable(max_precalculated)), max_precalculated_(max_precalculated) {} virtual ~HybridPrimeTable() { delete on_the_fly_impl_; delete precalc_impl_; } virtual bool IsPrime(int n) const { if (precalc_impl_ != NULL && n < max_precalculated_) return precalc_impl_->IsPrime(n); else return on_the_fly_impl_->IsPrime(n); } virtual int GetNextPrime(int p) const { int next_prime = -1; if (precalc_impl_ != NULL && p < max_precalculated_) next_prime = precalc_impl_->GetNextPrime(p); return next_prime != -1 ? next_prime : on_the_fly_impl_->GetNextPrime(p); } private: OnTheFlyPrimeTable* on_the_fly_impl_; PreCalculatedPrimeTable* precalc_impl_; int max_precalculated_; }; using ::testing::TestWithParam; using ::testing::Bool; using ::testing::Values; using ::testing::Combine; // To test all code paths for HybridPrimeTable we must test it with numbers // both within and outside PreCalculatedPrimeTable's capacity and also with // PreCalculatedPrimeTable disabled. We do this by defining fixture which will // accept different combinations of parameters for instantiating a // HybridPrimeTable instance. class PrimeTableTest : public TestWithParam< ::std::tr1::tuple > { protected: virtual void SetUp() { // This can be written as // // bool force_on_the_fly; // int max_precalculated; // tie(force_on_the_fly, max_precalculated) = GetParam(); // // once the Google C++ Style Guide allows use of ::std::tr1::tie. // bool force_on_the_fly = ::std::tr1::get<0>(GetParam()); int max_precalculated = ::std::tr1::get<1>(GetParam()); table_ = new HybridPrimeTable(force_on_the_fly, max_precalculated); } virtual void TearDown() { delete table_; table_ = NULL; } HybridPrimeTable* table_; }; TEST_P(PrimeTableTest, ReturnsFalseForNonPrimes) { // Inside the test body, you can refer to the test parameter by GetParam(). // In this case, the test parameter is a PrimeTable interface pointer which // we can use directly. // Please note that you can also save it in the fixture's SetUp() method // or constructor and use saved copy in the tests. EXPECT_FALSE(table_->IsPrime(-5)); EXPECT_FALSE(table_->IsPrime(0)); EXPECT_FALSE(table_->IsPrime(1)); EXPECT_FALSE(table_->IsPrime(4)); EXPECT_FALSE(table_->IsPrime(6)); EXPECT_FALSE(table_->IsPrime(100)); } TEST_P(PrimeTableTest, ReturnsTrueForPrimes) { EXPECT_TRUE(table_->IsPrime(2)); EXPECT_TRUE(table_->IsPrime(3)); EXPECT_TRUE(table_->IsPrime(5)); EXPECT_TRUE(table_->IsPrime(7)); EXPECT_TRUE(table_->IsPrime(11)); EXPECT_TRUE(table_->IsPrime(131)); } TEST_P(PrimeTableTest, CanGetNextPrime) { EXPECT_EQ(2, table_->GetNextPrime(0)); EXPECT_EQ(3, table_->GetNextPrime(2)); EXPECT_EQ(5, table_->GetNextPrime(3)); EXPECT_EQ(7, table_->GetNextPrime(5)); EXPECT_EQ(11, table_->GetNextPrime(7)); EXPECT_EQ(131, table_->GetNextPrime(128)); } // In order to run value-parameterized tests, you need to instantiate them, // or bind them to a list of values which will be used as test parameters. // You can instantiate them in a different translation module, or even // instantiate them several times. // // Here, we instantiate our tests with a list of parameters. We must combine // all variations of the boolean flag suppressing PrecalcPrimeTable and some // meaningful values for tests. We choose a small value (1), and a value that // will put some of the tested numbers beyond the capability of the // PrecalcPrimeTable instance and some inside it (10). Combine will produce all // possible combinations. 
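// Editor's note (illustration only, not part of the original sample):
// Combine(Bool(), Values(1, 10)) below generates the four parameter tuples
// (false, 1), (false, 10), (true, 1) and (true, 10), so every TEST_P above
// runs once per tuple. Assuming tr1 tuples are available (which
// GTEST_HAS_COMBINE already implies), an equivalent instantiation spelled
// out by hand would look roughly like this sketch:
#if 0
INSTANTIATE_TEST_CASE_P(
    MeaningfulTestParametersSpelledOut, PrimeTableTest,
    Values(::std::tr1::make_tuple(false, 1),
           ::std::tr1::make_tuple(false, 10),
           ::std::tr1::make_tuple(true, 1),
           ::std::tr1::make_tuple(true, 10)));
#endif  // 0 (editor's sketch)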
INSTANTIATE_TEST_CASE_P(MeaningfulTestParameters, PrimeTableTest, Combine(Bool(), Values(1, 10))); #else // Google Test may not support Combine() with some compilers. If we // use conditional compilation to compile out all code referring to // the gtest_main library, MSVC linker will not link that library at // all and consequently complain about missing entry point defined in // that library (fatal error LNK1561: entry point must be // defined). This dummy test keeps gtest_main linked in. TEST(DummyTest, CombineIsNotSupportedOnThisPlatform) {} #endif // GTEST_HAS_COMBINE ffc-2019.1.0.post0/libs/gtest-1.7.0/samples/sample9_unittest.cc000066400000000000000000000134771346026536000235730ustar00rootroot00000000000000// Copyright 2009 Google Inc. All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // This sample shows how to use Google Test listener API to implement // an alternative console output and how to use the UnitTest reflection API // to enumerate test cases and tests and to inspect their results. #include #include "gtest/gtest.h" using ::testing::EmptyTestEventListener; using ::testing::InitGoogleTest; using ::testing::Test; using ::testing::TestCase; using ::testing::TestEventListeners; using ::testing::TestInfo; using ::testing::TestPartResult; using ::testing::UnitTest; namespace { // Provides alternative output mode which produces minimal amount of // information about tests. class TersePrinter : public EmptyTestEventListener { private: // Called before any test activity starts. virtual void OnTestProgramStart(const UnitTest& /* unit_test */) {} // Called after all test activities have ended. virtual void OnTestProgramEnd(const UnitTest& unit_test) { fprintf(stdout, "TEST %s\n", unit_test.Passed() ? "PASSED" : "FAILED"); fflush(stdout); } // Called before a test starts. virtual void OnTestStart(const TestInfo& test_info) { fprintf(stdout, "*** Test %s.%s starting.\n", test_info.test_case_name(), test_info.name()); fflush(stdout); } // Called after a failed assertion or a SUCCEED() invocation. 
virtual void OnTestPartResult(const TestPartResult& test_part_result) { fprintf(stdout, "%s in %s:%d\n%s\n", test_part_result.failed() ? "*** Failure" : "Success", test_part_result.file_name(), test_part_result.line_number(), test_part_result.summary()); fflush(stdout); } // Called after a test ends. virtual void OnTestEnd(const TestInfo& test_info) { fprintf(stdout, "*** Test %s.%s ending.\n", test_info.test_case_name(), test_info.name()); fflush(stdout); } }; // class TersePrinter TEST(CustomOutputTest, PrintsMessage) { printf("Printing something from the test body...\n"); } TEST(CustomOutputTest, Succeeds) { SUCCEED() << "SUCCEED() has been invoked from here"; } TEST(CustomOutputTest, Fails) { EXPECT_EQ(1, 2) << "This test fails in order to demonstrate alternative failure messages"; } } // namespace int main(int argc, char **argv) { InitGoogleTest(&argc, argv); bool terse_output = false; if (argc > 1 && strcmp(argv[1], "--terse_output") == 0 ) terse_output = true; else printf("%s\n", "Run this program with --terse_output to change the way " "it prints its output."); UnitTest& unit_test = *UnitTest::GetInstance(); // If we are given the --terse_output command line flag, suppresses the // standard output and attaches own result printer. if (terse_output) { TestEventListeners& listeners = unit_test.listeners(); // Removes the default console output listener from the list so it will // not receive events from Google Test and won't print any output. Since // this operation transfers ownership of the listener to the caller we // have to delete it as well. delete listeners.Release(listeners.default_result_printer()); // Adds the custom output listener to the list. It will now receive // events from Google Test and print the alternative output. We don't // have to worry about deleting it since Google Test assumes ownership // over it after adding it to the list. listeners.Append(new TersePrinter); } int ret_val = RUN_ALL_TESTS(); // This is an example of using the UnitTest reflection API to inspect test // results. Here we discount failures from the tests we expected to fail. int unexpectedly_failed_tests = 0; for (int i = 0; i < unit_test.total_test_case_count(); ++i) { const TestCase& test_case = *unit_test.GetTestCase(i); for (int j = 0; j < test_case.total_test_count(); ++j) { const TestInfo& test_info = *test_case.GetTestInfo(j); // Counts failed tests that were not meant to fail (those without // 'Fails' in the name). if (test_info.result()->Failed() && strcmp(test_info.name(), "Fails") != 0) { unexpectedly_failed_tests++; } } } // Test that were meant to fail should not affect the test program outcome. if (unexpectedly_failed_tests == 0) ret_val = 0; return ret_val; } ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/000077500000000000000000000000001346026536000177625ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/fuse_gtest_files.py000077500000000000000000000211551346026536000236750ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2009, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """fuse_gtest_files.py v0.2.0 Fuses Google Test source code into a .h file and a .cc file. SYNOPSIS fuse_gtest_files.py [GTEST_ROOT_DIR] OUTPUT_DIR Scans GTEST_ROOT_DIR for Google Test source code, and generates two files: OUTPUT_DIR/gtest/gtest.h and OUTPUT_DIR/gtest/gtest-all.cc. Then you can build your tests by adding OUTPUT_DIR to the include search path and linking with OUTPUT_DIR/gtest/gtest-all.cc. These two files contain everything you need to use Google Test. Hence you can "install" Google Test by copying them to wherever you want. GTEST_ROOT_DIR can be omitted and defaults to the parent directory of the directory holding this script. EXAMPLES ./fuse_gtest_files.py fused_gtest ./fuse_gtest_files.py path/to/unpacked/gtest fused_gtest This tool is experimental. In particular, it assumes that there is no conditional inclusion of Google Test headers. Please report any problems to googletestframework@googlegroups.com. You can read http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide for more information. """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import re import sets import sys # We assume that this file is in the scripts/ directory in the Google # Test root directory. DEFAULT_GTEST_ROOT_DIR = os.path.join(os.path.dirname(__file__), '..') # Regex for matching '#include "gtest/..."'. INCLUDE_GTEST_FILE_REGEX = re.compile(r'^\s*#\s*include\s*"(gtest/.+)"') # Regex for matching '#include "src/..."'. INCLUDE_SRC_FILE_REGEX = re.compile(r'^\s*#\s*include\s*"(src/.+)"') # Where to find the source seed files. GTEST_H_SEED = 'include/gtest/gtest.h' GTEST_SPI_H_SEED = 'include/gtest/gtest-spi.h' GTEST_ALL_CC_SEED = 'src/gtest-all.cc' # Where to put the generated files. GTEST_H_OUTPUT = 'gtest/gtest.h' GTEST_ALL_CC_OUTPUT = 'gtest/gtest-all.cc' def VerifyFileExists(directory, relative_path): """Verifies that the given file exists; aborts on failure. relative_path is the file path relative to the given directory. """ if not os.path.isfile(os.path.join(directory, relative_path)): print 'ERROR: Cannot find %s in directory %s.' % (relative_path, directory) print ('Please either specify a valid project root directory ' 'or omit it on the command line.') sys.exit(1) def ValidateGTestRootDir(gtest_root): """Makes sure gtest_root points to a valid gtest root directory. The function aborts the program on failure. 
""" VerifyFileExists(gtest_root, GTEST_H_SEED) VerifyFileExists(gtest_root, GTEST_ALL_CC_SEED) def VerifyOutputFile(output_dir, relative_path): """Verifies that the given output file path is valid. relative_path is relative to the output_dir directory. """ # Makes sure the output file either doesn't exist or can be overwritten. output_file = os.path.join(output_dir, relative_path) if os.path.exists(output_file): # TODO(wan@google.com): The following user-interaction doesn't # work with automated processes. We should provide a way for the # Makefile to force overwriting the files. print ('%s already exists in directory %s - overwrite it? (y/N) ' % (relative_path, output_dir)) answer = sys.stdin.readline().strip() if answer not in ['y', 'Y']: print 'ABORTED.' sys.exit(1) # Makes sure the directory holding the output file exists; creates # it and all its ancestors if necessary. parent_directory = os.path.dirname(output_file) if not os.path.isdir(parent_directory): os.makedirs(parent_directory) def ValidateOutputDir(output_dir): """Makes sure output_dir points to a valid output directory. The function aborts the program on failure. """ VerifyOutputFile(output_dir, GTEST_H_OUTPUT) VerifyOutputFile(output_dir, GTEST_ALL_CC_OUTPUT) def FuseGTestH(gtest_root, output_dir): """Scans folder gtest_root to generate gtest/gtest.h in output_dir.""" output_file = file(os.path.join(output_dir, GTEST_H_OUTPUT), 'w') processed_files = sets.Set() # Holds all gtest headers we've processed. def ProcessFile(gtest_header_path): """Processes the given gtest header file.""" # We don't process the same header twice. if gtest_header_path in processed_files: return processed_files.add(gtest_header_path) # Reads each line in the given gtest header. for line in file(os.path.join(gtest_root, gtest_header_path), 'r'): m = INCLUDE_GTEST_FILE_REGEX.match(line) if m: # It's '#include "gtest/..."' - let's process it recursively. ProcessFile('include/' + m.group(1)) else: # Otherwise we copy the line unchanged to the output file. output_file.write(line) ProcessFile(GTEST_H_SEED) output_file.close() def FuseGTestAllCcToFile(gtest_root, output_file): """Scans folder gtest_root to generate gtest/gtest-all.cc in output_file.""" processed_files = sets.Set() def ProcessFile(gtest_source_file): """Processes the given gtest source file.""" # We don't process the same #included file twice. if gtest_source_file in processed_files: return processed_files.add(gtest_source_file) # Reads each line in the given gtest source file. for line in file(os.path.join(gtest_root, gtest_source_file), 'r'): m = INCLUDE_GTEST_FILE_REGEX.match(line) if m: if 'include/' + m.group(1) == GTEST_SPI_H_SEED: # It's '#include "gtest/gtest-spi.h"'. This file is not # #included by "gtest/gtest.h", so we need to process it. ProcessFile(GTEST_SPI_H_SEED) else: # It's '#include "gtest/foo.h"' where foo is not gtest-spi. # We treat it as '#include "gtest/gtest.h"', as all other # gtest headers are being fused into gtest.h and cannot be # #included directly. # There is no need to #include "gtest/gtest.h" more than once. if not GTEST_H_SEED in processed_files: processed_files.add(GTEST_H_SEED) output_file.write('#include "%s"\n' % (GTEST_H_OUTPUT,)) else: m = INCLUDE_SRC_FILE_REGEX.match(line) if m: # It's '#include "src/foo"' - let's process it recursively. 
ProcessFile(m.group(1)) else: output_file.write(line) ProcessFile(GTEST_ALL_CC_SEED) def FuseGTestAllCc(gtest_root, output_dir): """Scans folder gtest_root to generate gtest/gtest-all.cc in output_dir.""" output_file = file(os.path.join(output_dir, GTEST_ALL_CC_OUTPUT), 'w') FuseGTestAllCcToFile(gtest_root, output_file) output_file.close() def FuseGTest(gtest_root, output_dir): """Fuses gtest.h and gtest-all.cc.""" ValidateGTestRootDir(gtest_root) ValidateOutputDir(output_dir) FuseGTestH(gtest_root, output_dir) FuseGTestAllCc(gtest_root, output_dir) def main(): argc = len(sys.argv) if argc == 2: # fuse_gtest_files.py OUTPUT_DIR FuseGTest(DEFAULT_GTEST_ROOT_DIR, sys.argv[1]) elif argc == 3: # fuse_gtest_files.py GTEST_ROOT_DIR OUTPUT_DIR FuseGTest(sys.argv[1], sys.argv[2]) else: print __doc__ sys.exit(1) if __name__ == '__main__': main() ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/gen_gtest_pred_impl.py000077500000000000000000000527421346026536000243630ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """gen_gtest_pred_impl.py v0.1 Generates the implementation of Google Test predicate assertions and accompanying tests. Usage: gen_gtest_pred_impl.py MAX_ARITY where MAX_ARITY is a positive integer. The command generates the implementation of up-to MAX_ARITY-ary predicate assertions, and writes it to file gtest_pred_impl.h in the directory where the script is. It also generates the accompanying unit test in file gtest_pred_impl_unittest.cc. """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import sys import time # Where this script is. SCRIPT_DIR = os.path.dirname(sys.argv[0]) # Where to store the generated header. HEADER = os.path.join(SCRIPT_DIR, '../include/gtest/gtest_pred_impl.h') # Where to store the generated unit test. UNIT_TEST = os.path.join(SCRIPT_DIR, '../test/gtest_pred_impl_unittest.cc') def HeaderPreamble(n): """Returns the preamble for the header file. Args: n: the maximum arity of the predicate macros to be generated. 
""" # A map that defines the values used in the preamble template. DEFS = { 'today' : time.strftime('%m/%d/%Y'), 'year' : time.strftime('%Y'), 'command' : '%s %s' % (os.path.basename(sys.argv[0]), n), 'n' : n } return ( """// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // This file is AUTOMATICALLY GENERATED on %(today)s by command // '%(command)s'. DO NOT EDIT BY HAND! // // Implements a family of generic predicate assertion macros. #ifndef GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ #define GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ // Makes sure this header is not included before gtest.h. #ifndef GTEST_INCLUDE_GTEST_GTEST_H_ # error Do not include gtest_pred_impl.h directly. Include gtest.h instead. #endif // GTEST_INCLUDE_GTEST_GTEST_H_ // This header implements a family of generic predicate assertion // macros: // // ASSERT_PRED_FORMAT1(pred_format, v1) // ASSERT_PRED_FORMAT2(pred_format, v1, v2) // ... // // where pred_format is a function or functor that takes n (in the // case of ASSERT_PRED_FORMATn) values and their source expression // text, and returns a testing::AssertionResult. See the definition // of ASSERT_EQ in gtest.h for an example. // // If you don't care about formatting, you can use the more // restrictive version: // // ASSERT_PRED1(pred, v1) // ASSERT_PRED2(pred, v1, v2) // ... // // where pred is an n-ary function or functor that returns bool, // and the values v1, v2, ..., must support the << operator for // streaming to std::ostream. // // We also define the EXPECT_* variations. // // For now we only support predicates whose arity is at most %(n)s. // Please email googletestframework@googlegroups.com if you need // support for higher arities. // GTEST_ASSERT_ is the basic statement to which all of the assertions // in this file reduce. Don't use this in your code. 
#define GTEST_ASSERT_(expression, on_failure) \\ GTEST_AMBIGUOUS_ELSE_BLOCKER_ \\ if (const ::testing::AssertionResult gtest_ar = (expression)) \\ ; \\ else \\ on_failure(gtest_ar.failure_message()) """ % DEFS) def Arity(n): """Returns the English name of the given arity.""" if n < 0: return None elif n <= 3: return ['nullary', 'unary', 'binary', 'ternary'][n] else: return '%s-ary' % n def Title(word): """Returns the given word in title case. The difference between this and string's title() method is that Title('4-ary') is '4-ary' while '4-ary'.title() is '4-Ary'.""" return word[0].upper() + word[1:] def OneTo(n): """Returns the list [1, 2, 3, ..., n].""" return range(1, n + 1) def Iter(n, format, sep=''): """Given a positive integer n, a format string that contains 0 or more '%s' format specs, and optionally a separator string, returns the join of n strings, each formatted with the format string on an iterator ranged from 1 to n. Example: Iter(3, 'v%s', sep=', ') returns 'v1, v2, v3'. """ # How many '%s' specs are in format? spec_count = len(format.split('%s')) - 1 return sep.join([format % (spec_count * (i,)) for i in OneTo(n)]) def ImplementationForArity(n): """Returns the implementation of n-ary predicate assertions.""" # A map the defines the values used in the implementation template. DEFS = { 'n' : str(n), 'vs' : Iter(n, 'v%s', sep=', '), 'vts' : Iter(n, '#v%s', sep=', '), 'arity' : Arity(n), 'Arity' : Title(Arity(n)) } impl = """ // Helper function for implementing {EXPECT|ASSERT}_PRED%(n)s. Don't use // this in your code. template AssertionResult AssertPred%(n)sHelper(const char* pred_text""" % DEFS impl += Iter(n, """, const char* e%s""") impl += """, Pred pred""" impl += Iter(n, """, const T%s& v%s""") impl += """) { if (pred(%(vs)s)) return AssertionSuccess(); """ % DEFS impl += ' return AssertionFailure() << pred_text << "("' impl += Iter(n, """ << e%s""", sep=' << ", "') impl += ' << ") evaluates to false, where"' impl += Iter(n, """ << "\\n" << e%s << " evaluates to " << v%s""") impl += """; } // Internal macro for implementing {EXPECT|ASSERT}_PRED_FORMAT%(n)s. // Don't use this in your code. #define GTEST_PRED_FORMAT%(n)s_(pred_format, %(vs)s, on_failure)\\ GTEST_ASSERT_(pred_format(%(vts)s, %(vs)s), \\ on_failure) // Internal macro for implementing {EXPECT|ASSERT}_PRED%(n)s. Don't use // this in your code. #define GTEST_PRED%(n)s_(pred, %(vs)s, on_failure)\\ GTEST_ASSERT_(::testing::AssertPred%(n)sHelper(#pred""" % DEFS impl += Iter(n, """, \\ #v%s""") impl += """, \\ pred""" impl += Iter(n, """, \\ v%s""") impl += """), on_failure) // %(Arity)s predicate assertion macros. #define EXPECT_PRED_FORMAT%(n)s(pred_format, %(vs)s) \\ GTEST_PRED_FORMAT%(n)s_(pred_format, %(vs)s, GTEST_NONFATAL_FAILURE_) #define EXPECT_PRED%(n)s(pred, %(vs)s) \\ GTEST_PRED%(n)s_(pred, %(vs)s, GTEST_NONFATAL_FAILURE_) #define ASSERT_PRED_FORMAT%(n)s(pred_format, %(vs)s) \\ GTEST_PRED_FORMAT%(n)s_(pred_format, %(vs)s, GTEST_FATAL_FAILURE_) #define ASSERT_PRED%(n)s(pred, %(vs)s) \\ GTEST_PRED%(n)s_(pred, %(vs)s, GTEST_FATAL_FAILURE_) """ % DEFS return impl def HeaderPostamble(): """Returns the postamble for the header file.""" return """ #endif // GTEST_INCLUDE_GTEST_GTEST_PRED_IMPL_H_ """ def GenerateFile(path, content): """Given a file path and a content string, overwrites it with the given content.""" print 'Updating file %s . . .' % path f = file(path, 'w+') print >>f, content, f.close() print 'File %s has been updated.' 
% path def GenerateHeader(n): """Given the maximum arity n, updates the header file that implements the predicate assertions.""" GenerateFile(HEADER, HeaderPreamble(n) + ''.join([ImplementationForArity(i) for i in OneTo(n)]) + HeaderPostamble()) def UnitTestPreamble(): """Returns the preamble for the unit test file.""" # A map that defines the values used in the preamble template. DEFS = { 'today' : time.strftime('%m/%d/%Y'), 'year' : time.strftime('%Y'), 'command' : '%s %s' % (os.path.basename(sys.argv[0]), sys.argv[1]), } return ( """// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // This file is AUTOMATICALLY GENERATED on %(today)s by command // '%(command)s'. DO NOT EDIT BY HAND! // Regression test for gtest_pred_impl.h // // This file is generated by a script and quite long. If you intend to // learn how Google Test works by reading its unit tests, read // gtest_unittest.cc instead. // // This is intended as a regression test for the Google Test predicate // assertions. We compile it as part of the gtest_unittest target // only to keep the implementation tidy and compact, as it is quite // involved to set up the stage for testing Google Test using Google // Test itself. // // Currently, gtest_unittest takes ~11 seconds to run in the testing // daemon. In the future, if it grows too large and needs much more // time to finish, we should consider separating this file into a // stand-alone regression test. #include #include "gtest/gtest.h" #include "gtest/gtest-spi.h" // A user-defined data type. struct Bool { explicit Bool(int val) : value(val != 0) {} bool operator>(int n) const { return value > Bool(n).value; } Bool operator+(const Bool& rhs) const { return Bool(value + rhs.value); } bool operator==(const Bool& rhs) const { return value == rhs.value; } bool value; }; // Enables Bool to be used in assertions. std::ostream& operator<<(std::ostream& os, const Bool& x) { return os << (x.value ? 
"true" : "false"); } """ % DEFS) def TestsForArity(n): """Returns the tests for n-ary predicate assertions.""" # A map that defines the values used in the template for the tests. DEFS = { 'n' : n, 'es' : Iter(n, 'e%s', sep=', '), 'vs' : Iter(n, 'v%s', sep=', '), 'vts' : Iter(n, '#v%s', sep=', '), 'tvs' : Iter(n, 'T%s v%s', sep=', '), 'int_vs' : Iter(n, 'int v%s', sep=', '), 'Bool_vs' : Iter(n, 'Bool v%s', sep=', '), 'types' : Iter(n, 'typename T%s', sep=', '), 'v_sum' : Iter(n, 'v%s', sep=' + '), 'arity' : Arity(n), 'Arity' : Title(Arity(n)), } tests = ( """// Sample functions/functors for testing %(arity)s predicate assertions. // A %(arity)s predicate function. template <%(types)s> bool PredFunction%(n)s(%(tvs)s) { return %(v_sum)s > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. bool PredFunction%(n)sInt(%(int_vs)s) { return %(v_sum)s > 0; } bool PredFunction%(n)sBool(%(Bool_vs)s) { return %(v_sum)s > 0; } """ % DEFS) tests += """ // A %(arity)s predicate functor. struct PredFunctor%(n)s { template <%(types)s> bool operator()(""" % DEFS tests += Iter(n, 'const T%s& v%s', sep=""", """) tests += """) { return %(v_sum)s > 0; } }; """ % DEFS tests += """ // A %(arity)s predicate-formatter function. template <%(types)s> testing::AssertionResult PredFormatFunction%(n)s(""" % DEFS tests += Iter(n, 'const char* e%s', sep=""", """) tests += Iter(n, """, const T%s& v%s""") tests += """) { if (PredFunction%(n)s(%(vs)s)) return testing::AssertionSuccess(); return testing::AssertionFailure() << """ % DEFS tests += Iter(n, 'e%s', sep=' << " + " << ') tests += """ << " is expected to be positive, but evaluates to " << %(v_sum)s << "."; } """ % DEFS tests += """ // A %(arity)s predicate-formatter functor. struct PredFormatFunctor%(n)s { template <%(types)s> testing::AssertionResult operator()(""" % DEFS tests += Iter(n, 'const char* e%s', sep=""", """) tests += Iter(n, """, const T%s& v%s""") tests += """) const { return PredFormatFunction%(n)s(%(es)s, %(vs)s); } }; """ % DEFS tests += """ // Tests for {EXPECT|ASSERT}_PRED_FORMAT%(n)s. class Predicate%(n)sTest : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false;""" % DEFS tests += """ """ + Iter(n, 'n%s_ = ') + """0; } """ tests += """ virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once.""" tests += ''.join([""" EXPECT_EQ(1, n%s_) << "The predicate assertion didn't evaluate argument %s " "exactly once.";""" % (i, i + 1) for i in OneTo(n)]) tests += """ // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. 
static bool finished_; """ % DEFS tests += Iter(n, """ static int n%s_;""") tests += """ }; bool Predicate%(n)sTest::expected_to_finish_; bool Predicate%(n)sTest::finished_; """ % DEFS tests += Iter(n, """int Predicate%%(n)sTest::n%s_; """) % DEFS tests += """ typedef Predicate%(n)sTest EXPECT_PRED_FORMAT%(n)sTest; typedef Predicate%(n)sTest ASSERT_PRED_FORMAT%(n)sTest; typedef Predicate%(n)sTest EXPECT_PRED%(n)sTest; typedef Predicate%(n)sTest ASSERT_PRED%(n)sTest; """ % DEFS def GenTest(use_format, use_assert, expect_failure, use_functor, use_user_type): """Returns the test for a predicate assertion macro. Args: use_format: true iff the assertion is a *_PRED_FORMAT*. use_assert: true iff the assertion is a ASSERT_*. expect_failure: true iff the assertion is expected to fail. use_functor: true iff the first argument of the assertion is a functor (as opposed to a function) use_user_type: true iff the predicate functor/function takes argument(s) of a user-defined type. Example: GenTest(1, 0, 0, 1, 0) returns a test that tests the behavior of a successful EXPECT_PRED_FORMATn() that takes a functor whose arguments have built-in types.""" if use_assert: assrt = 'ASSERT' # 'assert' is reserved, so we cannot use # that identifier here. else: assrt = 'EXPECT' assertion = assrt + '_PRED' if use_format: pred_format = 'PredFormat' assertion += '_FORMAT' else: pred_format = 'Pred' assertion += '%(n)s' % DEFS if use_functor: pred_format_type = 'functor' pred_format += 'Functor%(n)s()' else: pred_format_type = 'function' pred_format += 'Function%(n)s' if not use_format: if use_user_type: pred_format += 'Bool' else: pred_format += 'Int' test_name = pred_format_type.title() if use_user_type: arg_type = 'user-defined type (Bool)' test_name += 'OnUserType' if expect_failure: arg = 'Bool(n%s_++)' else: arg = 'Bool(++n%s_)' else: arg_type = 'built-in type (int)' test_name += 'OnBuiltInType' if expect_failure: arg = 'n%s_++' else: arg = '++n%s_' if expect_failure: successful_or_failed = 'failed' expected_or_not = 'expected.' test_name += 'Failure' else: successful_or_failed = 'successful' expected_or_not = 'UNEXPECTED!' test_name += 'Success' # A map that defines the values used in the test template. defs = DEFS.copy() defs.update({ 'assert' : assrt, 'assertion' : assertion, 'test_name' : test_name, 'pf_type' : pred_format_type, 'pf' : pred_format, 'arg_type' : arg_type, 'arg' : arg, 'successful' : successful_or_failed, 'expected' : expected_or_not, }) test = """ // Tests a %(successful)s %(assertion)s where the // predicate-formatter is a %(pf_type)s on a %(arg_type)s. TEST_F(%(assertion)sTest, %(test_name)s) {""" % defs indent = (len(assertion) + 3)*' ' extra_indent = '' if expect_failure: extra_indent = ' ' if use_assert: test += """ expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT""" else: test += """ EXPECT_NONFATAL_FAILURE({ // NOLINT""" test += '\n' + extra_indent + """ %(assertion)s(%(pf)s""" % defs test = test % defs test += Iter(n, ',\n' + indent + extra_indent + '%(arg)s' % defs) test += ');\n' + extra_indent + ' finished_ = true;\n' if expect_failure: test += ' }, "");\n' test += '}\n' return test # Generates tests for all 2**6 = 64 combinations. 
tests += ''.join([GenTest(use_format, use_assert, expect_failure, use_functor, use_user_type) for use_format in [0, 1] for use_assert in [0, 1] for expect_failure in [0, 1] for use_functor in [0, 1] for use_user_type in [0, 1] ]) return tests def UnitTestPostamble(): """Returns the postamble for the tests.""" return '' def GenerateUnitTest(n): """Returns the tests for up-to n-ary predicate assertions.""" GenerateFile(UNIT_TEST, UnitTestPreamble() + ''.join([TestsForArity(i) for i in OneTo(n)]) + UnitTestPostamble()) def _Main(): """The entry point of the script. Generates the header file and its unit test.""" if len(sys.argv) != 2: print __doc__ print 'Author: ' + __author__ sys.exit(1) n = int(sys.argv[1]) GenerateHeader(n) GenerateUnitTest(n) if __name__ == '__main__': _Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/gtest-config.in000077500000000000000000000235471346026536000227210ustar00rootroot00000000000000#!/bin/sh # These variables are automatically filled in by the configure script. name="@PACKAGE_TARNAME@" version="@PACKAGE_VERSION@" show_usage() { echo "Usage: gtest-config [OPTIONS...]" } show_help() { show_usage cat <<\EOF The `gtest-config' script provides access to the necessary compile and linking flags to connect with Google C++ Testing Framework, both in a build prior to installation, and on the system proper after installation. The installation overrides may be issued in combination with any other queries, but will only affect installation queries if called on a built but not installed gtest. The installation queries may not be issued with any other types of queries, and only one installation query may be made at a time. The version queries and compiler flag queries may be combined as desired but not mixed. Different version queries are always combined with logical "and" semantics, and only the last of any particular query is used while all previous ones ignored. All versions must be specified as a sequence of numbers separated by periods. Compiler flag queries output the union of the sets of flags when combined. Examples: gtest-config --min-version=1.0 || echo "Insufficient Google Test version." g++ $(gtest-config --cppflags --cxxflags) -o foo.o -c foo.cpp g++ $(gtest-config --ldflags --libs) -o foo foo.o # When using a built but not installed Google Test: g++ $(../../my_gtest_build/scripts/gtest-config ...) ... # When using an installed Google Test, but with installation overrides: export GTEST_PREFIX="/opt" g++ $(gtest-config --libdir="/opt/lib64" ...) ... Help: --usage brief usage information --help display this help message Installation Overrides: --prefix= overrides the installation prefix --exec-prefix= overrides the executable installation prefix --libdir= overrides the library installation prefix --includedir= overrides the header file installation prefix Installation Queries: --prefix installation prefix --exec-prefix executable installation prefix --libdir library installation directory --includedir header file installation directory --version the version of the Google Test installation Version Queries: --min-version=VERSION return 0 if the version is at least VERSION --exact-version=VERSION return 0 if the version is exactly VERSION --max-version=VERSION return 0 if the version is at most VERSION Compilation Flag Queries: --cppflags compile flags specific to the C-like preprocessors --cxxflags compile flags appropriate for C++ programs --ldflags linker flags --libs libraries for linking EOF } # This function bounds our version with a min and a max. 
It uses some clever # POSIX-compliant variable expansion to portably do all the work in the shell # and avoid any dependency on a particular "sed" or "awk" implementation. # Notable is that it will only ever compare the first 3 components of versions. # Further components will be cleanly stripped off. All versions must be # unadorned, so "v1.0" will *not* work. The minimum version must be in $1, and # the max in $2. TODO(chandlerc@google.com): If this ever breaks, we should # investigate expanding this via autom4te from AS_VERSION_COMPARE rather than # continuing to maintain our own shell version. check_versions() { major_version=${version%%.*} minor_version="0" point_version="0" if test "${version#*.}" != "${version}"; then minor_version=${version#*.} minor_version=${minor_version%%.*} fi if test "${version#*.*.}" != "${version}"; then point_version=${version#*.*.} point_version=${point_version%%.*} fi min_version="$1" min_major_version=${min_version%%.*} min_minor_version="0" min_point_version="0" if test "${min_version#*.}" != "${min_version}"; then min_minor_version=${min_version#*.} min_minor_version=${min_minor_version%%.*} fi if test "${min_version#*.*.}" != "${min_version}"; then min_point_version=${min_version#*.*.} min_point_version=${min_point_version%%.*} fi max_version="$2" max_major_version=${max_version%%.*} max_minor_version="0" max_point_version="0" if test "${max_version#*.}" != "${max_version}"; then max_minor_version=${max_version#*.} max_minor_version=${max_minor_version%%.*} fi if test "${max_version#*.*.}" != "${max_version}"; then max_point_version=${max_version#*.*.} max_point_version=${max_point_version%%.*} fi test $(($major_version)) -lt $(($min_major_version)) && exit 1 if test $(($major_version)) -eq $(($min_major_version)); then test $(($minor_version)) -lt $(($min_minor_version)) && exit 1 if test $(($minor_version)) -eq $(($min_minor_version)); then test $(($point_version)) -lt $(($min_point_version)) && exit 1 fi fi test $(($major_version)) -gt $(($max_major_version)) && exit 1 if test $(($major_version)) -eq $(($max_major_version)); then test $(($minor_version)) -gt $(($max_minor_version)) && exit 1 if test $(($minor_version)) -eq $(($max_minor_version)); then test $(($point_version)) -gt $(($max_point_version)) && exit 1 fi fi exit 0 } # Show the usage line when no arguments are specified. if test $# -eq 0; then show_usage exit 1 fi while test $# -gt 0; do case $1 in --usage) show_usage; exit 0;; --help) show_help; exit 0;; # Installation overrides --prefix=*) GTEST_PREFIX=${1#--prefix=};; --exec-prefix=*) GTEST_EXEC_PREFIX=${1#--exec-prefix=};; --libdir=*) GTEST_LIBDIR=${1#--libdir=};; --includedir=*) GTEST_INCLUDEDIR=${1#--includedir=};; # Installation queries --prefix|--exec-prefix|--libdir|--includedir|--version) if test -n "${do_query}"; then show_usage exit 1 fi do_query=${1#--} ;; # Version checking --min-version=*) do_check_versions=yes min_version=${1#--min-version=} ;; --max-version=*) do_check_versions=yes max_version=${1#--max-version=} ;; --exact-version=*) do_check_versions=yes exact_version=${1#--exact-version=} ;; # Compiler flag output --cppflags) echo_cppflags=yes;; --cxxflags) echo_cxxflags=yes;; --ldflags) echo_ldflags=yes;; --libs) echo_libs=yes;; # Everything else is an error *) show_usage; exit 1;; esac shift done # These have defaults filled in by the configure script but can also be # overridden by environment variables or command line parameters. 
prefix="${GTEST_PREFIX:-@prefix@}" exec_prefix="${GTEST_EXEC_PREFIX:-@exec_prefix@}" libdir="${GTEST_LIBDIR:-@libdir@}" includedir="${GTEST_INCLUDEDIR:-@includedir@}" # We try and detect if our binary is not located at its installed location. If # it's not, we provide variables pointing to the source and build tree rather # than to the install tree. This allows building against a just-built gtest # rather than an installed gtest. bindir="@bindir@" this_relative_bindir=`dirname $0` this_bindir=`cd ${this_relative_bindir}; pwd -P` if test "${this_bindir}" = "${this_bindir%${bindir}}"; then # The path to the script doesn't end in the bindir sequence from Autoconf, # assume that we are in a build tree. build_dir=`dirname ${this_bindir}` src_dir=`cd ${this_bindir}; cd @top_srcdir@; pwd -P` # TODO(chandlerc@google.com): This is a dangerous dependency on libtool, we # should work to remove it, and/or remove libtool altogether, replacing it # with direct references to the library and a link path. gtest_libs="${build_dir}/lib/libgtest.la @PTHREAD_CFLAGS@ @PTHREAD_LIBS@" gtest_ldflags="" # We provide hooks to include from either the source or build dir, where the # build dir is always preferred. This will potentially allow us to write # build rules for generated headers and have them automatically be preferred # over provided versions. gtest_cppflags="-I${build_dir}/include -I${src_dir}/include" gtest_cxxflags="@PTHREAD_CFLAGS@" else # We're using an installed gtest, although it may be staged under some # prefix. Assume (as our own libraries do) that we can resolve the prefix, # and are present in the dynamic link paths. gtest_ldflags="-L${libdir}" gtest_libs="-l${name} @PTHREAD_CFLAGS@ @PTHREAD_LIBS@" gtest_cppflags="-I${includedir}" gtest_cxxflags="@PTHREAD_CFLAGS@" fi # Do an installation query if requested. if test -n "$do_query"; then case $do_query in prefix) echo $prefix; exit 0;; exec-prefix) echo $exec_prefix; exit 0;; libdir) echo $libdir; exit 0;; includedir) echo $includedir; exit 0;; version) echo $version; exit 0;; *) show_usage; exit 1;; esac fi # Do a version check if requested. if test "$do_check_versions" = "yes"; then # Make sure we didn't receive a bad combination of parameters. test "$echo_cppflags" = "yes" && show_usage && exit 1 test "$echo_cxxflags" = "yes" && show_usage && exit 1 test "$echo_ldflags" = "yes" && show_usage && exit 1 test "$echo_libs" = "yes" && show_usage && exit 1 if test "$exact_version" != ""; then check_versions $exact_version $exact_version # unreachable else check_versions ${min_version:-0.0.0} ${max_version:-9999.9999.9999} # unreachable fi fi # Do the output in the correct order so that these can be used in-line of # a compiler invocation. output="" test "$echo_cppflags" = "yes" && output="$output $gtest_cppflags" test "$echo_cxxflags" = "yes" && output="$output $gtest_cxxflags" test "$echo_ldflags" = "yes" && output="$output $gtest_ldflags" test "$echo_libs" = "yes" && output="$output $gtest_libs" echo $output exit 0 ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/pump.py000077500000000000000000000561711346026536000213320ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """pump v0.2.0 - Pretty Useful for Meta Programming. A tool for preprocessor meta programming. Useful for generating repetitive boilerplate code. Especially useful for writing C++ classes, functions, macros, and templates that need to work with various number of arguments. USAGE: pump.py SOURCE_FILE EXAMPLES: pump.py foo.cc.pump Converts foo.cc.pump to foo.cc. GRAMMAR: CODE ::= ATOMIC_CODE* ATOMIC_CODE ::= $var ID = EXPRESSION | $var ID = [[ CODE ]] | $range ID EXPRESSION..EXPRESSION | $for ID SEPARATOR [[ CODE ]] | $($) | $ID | $(EXPRESSION) | $if EXPRESSION [[ CODE ]] ELSE_BRANCH | [[ CODE ]] | RAW_CODE SEPARATOR ::= RAW_CODE | EMPTY ELSE_BRANCH ::= $else [[ CODE ]] | $elif EXPRESSION [[ CODE ]] ELSE_BRANCH | EMPTY EXPRESSION has Python syntax. """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import re import sys TOKEN_TABLE = [ (re.compile(r'\$var\s+'), '$var'), (re.compile(r'\$elif\s+'), '$elif'), (re.compile(r'\$else\s+'), '$else'), (re.compile(r'\$for\s+'), '$for'), (re.compile(r'\$if\s+'), '$if'), (re.compile(r'\$range\s+'), '$range'), (re.compile(r'\$[_A-Za-z]\w*'), '$id'), (re.compile(r'\$\(\$\)'), '$($)'), (re.compile(r'\$'), '$'), (re.compile(r'\[\[\n?'), '[['), (re.compile(r'\]\]\n?'), ']]'), ] class Cursor: """Represents a position (line and column) in a text file.""" def __init__(self, line=-1, column=-1): self.line = line self.column = column def __eq__(self, rhs): return self.line == rhs.line and self.column == rhs.column def __ne__(self, rhs): return not self == rhs def __lt__(self, rhs): return self.line < rhs.line or ( self.line == rhs.line and self.column < rhs.column) def __le__(self, rhs): return self < rhs or self == rhs def __gt__(self, rhs): return rhs < self def __ge__(self, rhs): return rhs <= self def __str__(self): if self == Eof(): return 'EOF' else: return '%s(%s)' % (self.line + 1, self.column) def __add__(self, offset): return Cursor(self.line, self.column + offset) def __sub__(self, offset): return Cursor(self.line, self.column - offset) def Clone(self): """Returns a copy of self.""" return Cursor(self.line, self.column) # Special cursor to indicate the end-of-file. 
def Eof(): """Returns the special cursor to denote the end-of-file.""" return Cursor(-1, -1) class Token: """Represents a token in a Pump source file.""" def __init__(self, start=None, end=None, value=None, token_type=None): if start is None: self.start = Eof() else: self.start = start if end is None: self.end = Eof() else: self.end = end self.value = value self.token_type = token_type def __str__(self): return 'Token @%s: \'%s\' type=%s' % ( self.start, self.value, self.token_type) def Clone(self): """Returns a copy of self.""" return Token(self.start.Clone(), self.end.Clone(), self.value, self.token_type) def StartsWith(lines, pos, string): """Returns True iff the given position in lines starts with 'string'.""" return lines[pos.line][pos.column:].startswith(string) def FindFirstInLine(line, token_table): best_match_start = -1 for (regex, token_type) in token_table: m = regex.search(line) if m: # We found regex in lines if best_match_start < 0 or m.start() < best_match_start: best_match_start = m.start() best_match_length = m.end() - m.start() best_match_token_type = token_type if best_match_start < 0: return None return (best_match_start, best_match_length, best_match_token_type) def FindFirst(lines, token_table, cursor): """Finds the first occurrence of any string in strings in lines.""" start = cursor.Clone() cur_line_number = cursor.line for line in lines[start.line:]: if cur_line_number == start.line: line = line[start.column:] m = FindFirstInLine(line, token_table) if m: # We found a regex in line. (start_column, length, token_type) = m if cur_line_number == start.line: start_column += start.column found_start = Cursor(cur_line_number, start_column) found_end = found_start + length return MakeToken(lines, found_start, found_end, token_type) cur_line_number += 1 # We failed to find str in lines return None def SubString(lines, start, end): """Returns a substring in lines.""" if end == Eof(): end = Cursor(len(lines) - 1, len(lines[-1])) if start >= end: return '' if start.line == end.line: return lines[start.line][start.column:end.column] result_lines = ([lines[start.line][start.column:]] + lines[start.line + 1:end.line] + [lines[end.line][:end.column]]) return ''.join(result_lines) def StripMetaComments(str): """Strip meta comments from each line in the given string.""" # First, completely remove lines containing nothing but a meta # comment, including the trailing \n. str = re.sub(r'^\s*\$\$.*\n', '', str) # Then, remove meta comments from contentful lines. return re.sub(r'\s*\$\$.*', '', str) def MakeToken(lines, start, end, token_type): """Creates a new instance of Token.""" return Token(start, end, SubString(lines, start, end), token_type) def ParseToken(lines, pos, regex, token_type): line = lines[pos.line][pos.column:] m = regex.search(line) if m and not m.start(): return MakeToken(lines, pos, pos + m.end(), token_type) else: print 'ERROR: %s expected at %s.' % (token_type, pos) sys.exit(1) ID_REGEX = re.compile(r'[_A-Za-z]\w*') EQ_REGEX = re.compile(r'=') REST_OF_LINE_REGEX = re.compile(r'.*?(?=$|\$\$)') OPTIONAL_WHITE_SPACES_REGEX = re.compile(r'\s*') WHITE_SPACE_REGEX = re.compile(r'\s') DOT_DOT_REGEX = re.compile(r'\.\.') def Skip(lines, pos, regex): line = lines[pos.line][pos.column:] m = re.search(regex, line) if m and not m.start(): return pos + m.end() else: return pos def SkipUntil(lines, pos, regex, token_type): line = lines[pos.line][pos.column:] m = re.search(regex, line) if m: return pos + m.start() else: print ('ERROR: %s expected on line %s after column %s.' 
% (token_type, pos.line + 1, pos.column)) sys.exit(1) def ParseExpTokenInParens(lines, pos): def ParseInParens(pos): pos = Skip(lines, pos, OPTIONAL_WHITE_SPACES_REGEX) pos = Skip(lines, pos, r'\(') pos = Parse(pos) pos = Skip(lines, pos, r'\)') return pos def Parse(pos): pos = SkipUntil(lines, pos, r'\(|\)', ')') if SubString(lines, pos, pos + 1) == '(': pos = Parse(pos + 1) pos = Skip(lines, pos, r'\)') return Parse(pos) else: return pos start = pos.Clone() pos = ParseInParens(pos) return MakeToken(lines, start, pos, 'exp') def RStripNewLineFromToken(token): if token.value.endswith('\n'): return Token(token.start, token.end, token.value[:-1], token.token_type) else: return token def TokenizeLines(lines, pos): while True: found = FindFirst(lines, TOKEN_TABLE, pos) if not found: yield MakeToken(lines, pos, Eof(), 'code') return if found.start == pos: prev_token = None prev_token_rstripped = None else: prev_token = MakeToken(lines, pos, found.start, 'code') prev_token_rstripped = RStripNewLineFromToken(prev_token) if found.token_type == '$var': if prev_token_rstripped: yield prev_token_rstripped yield found id_token = ParseToken(lines, found.end, ID_REGEX, 'id') yield id_token pos = Skip(lines, id_token.end, OPTIONAL_WHITE_SPACES_REGEX) eq_token = ParseToken(lines, pos, EQ_REGEX, '=') yield eq_token pos = Skip(lines, eq_token.end, r'\s*') if SubString(lines, pos, pos + 2) != '[[': exp_token = ParseToken(lines, pos, REST_OF_LINE_REGEX, 'exp') yield exp_token pos = Cursor(exp_token.end.line + 1, 0) elif found.token_type == '$for': if prev_token_rstripped: yield prev_token_rstripped yield found id_token = ParseToken(lines, found.end, ID_REGEX, 'id') yield id_token pos = Skip(lines, id_token.end, WHITE_SPACE_REGEX) elif found.token_type == '$range': if prev_token_rstripped: yield prev_token_rstripped yield found id_token = ParseToken(lines, found.end, ID_REGEX, 'id') yield id_token pos = Skip(lines, id_token.end, OPTIONAL_WHITE_SPACES_REGEX) dots_pos = SkipUntil(lines, pos, DOT_DOT_REGEX, '..') yield MakeToken(lines, pos, dots_pos, 'exp') yield MakeToken(lines, dots_pos, dots_pos + 2, '..') pos = dots_pos + 2 new_pos = Cursor(pos.line + 1, 0) yield MakeToken(lines, pos, new_pos, 'exp') pos = new_pos elif found.token_type == '$': if prev_token: yield prev_token yield found exp_token = ParseExpTokenInParens(lines, found.end) yield exp_token pos = exp_token.end elif (found.token_type == ']]' or found.token_type == '$if' or found.token_type == '$elif' or found.token_type == '$else'): if prev_token_rstripped: yield prev_token_rstripped yield found pos = found.end else: if prev_token: yield prev_token yield found pos = found.end def Tokenize(s): """A generator that yields the tokens in the given string.""" if s != '': lines = s.splitlines(True) for token in TokenizeLines(lines, Cursor(0, 0)): yield token class CodeNode: def __init__(self, atomic_code_list=None): self.atomic_code = atomic_code_list class VarNode: def __init__(self, identifier=None, atomic_code=None): self.identifier = identifier self.atomic_code = atomic_code class RangeNode: def __init__(self, identifier=None, exp1=None, exp2=None): self.identifier = identifier self.exp1 = exp1 self.exp2 = exp2 class ForNode: def __init__(self, identifier=None, sep=None, code=None): self.identifier = identifier self.sep = sep self.code = code class ElseNode: def __init__(self, else_branch=None): self.else_branch = else_branch class IfNode: def __init__(self, exp=None, then_branch=None, else_branch=None): self.exp = exp self.then_branch = then_branch 
self.else_branch = else_branch class RawCodeNode: def __init__(self, token=None): self.raw_code = token class LiteralDollarNode: def __init__(self, token): self.token = token class ExpNode: def __init__(self, token, python_exp): self.token = token self.python_exp = python_exp def PopFront(a_list): head = a_list[0] a_list[:1] = [] return head def PushFront(a_list, elem): a_list[:0] = [elem] def PopToken(a_list, token_type=None): token = PopFront(a_list) if token_type is not None and token.token_type != token_type: print 'ERROR: %s expected at %s' % (token_type, token.start) print 'ERROR: %s found instead' % (token,) sys.exit(1) return token def PeekToken(a_list): if not a_list: return None return a_list[0] def ParseExpNode(token): python_exp = re.sub(r'([_A-Za-z]\w*)', r'self.GetValue("\1")', token.value) return ExpNode(token, python_exp) def ParseElseNode(tokens): def Pop(token_type=None): return PopToken(tokens, token_type) next = PeekToken(tokens) if not next: return None if next.token_type == '$else': Pop('$else') Pop('[[') code_node = ParseCodeNode(tokens) Pop(']]') return code_node elif next.token_type == '$elif': Pop('$elif') exp = Pop('code') Pop('[[') code_node = ParseCodeNode(tokens) Pop(']]') inner_else_node = ParseElseNode(tokens) return CodeNode([IfNode(ParseExpNode(exp), code_node, inner_else_node)]) elif not next.value.strip(): Pop('code') return ParseElseNode(tokens) else: return None def ParseAtomicCodeNode(tokens): def Pop(token_type=None): return PopToken(tokens, token_type) head = PopFront(tokens) t = head.token_type if t == 'code': return RawCodeNode(head) elif t == '$var': id_token = Pop('id') Pop('=') next = PeekToken(tokens) if next.token_type == 'exp': exp_token = Pop() return VarNode(id_token, ParseExpNode(exp_token)) Pop('[[') code_node = ParseCodeNode(tokens) Pop(']]') return VarNode(id_token, code_node) elif t == '$for': id_token = Pop('id') next_token = PeekToken(tokens) if next_token.token_type == 'code': sep_token = next_token Pop('code') else: sep_token = None Pop('[[') code_node = ParseCodeNode(tokens) Pop(']]') return ForNode(id_token, sep_token, code_node) elif t == '$if': exp_token = Pop('code') Pop('[[') code_node = ParseCodeNode(tokens) Pop(']]') else_node = ParseElseNode(tokens) return IfNode(ParseExpNode(exp_token), code_node, else_node) elif t == '$range': id_token = Pop('id') exp1_token = Pop('exp') Pop('..') exp2_token = Pop('exp') return RangeNode(id_token, ParseExpNode(exp1_token), ParseExpNode(exp2_token)) elif t == '$id': return ParseExpNode(Token(head.start + 1, head.end, head.value[1:], 'id')) elif t == '$($)': return LiteralDollarNode(head) elif t == '$': exp_token = Pop('exp') return ParseExpNode(exp_token) elif t == '[[': code_node = ParseCodeNode(tokens) Pop(']]') return code_node else: PushFront(tokens, head) return None def ParseCodeNode(tokens): atomic_code_list = [] while True: if not tokens: break atomic_code_node = ParseAtomicCodeNode(tokens) if atomic_code_node: atomic_code_list.append(atomic_code_node) else: break return CodeNode(atomic_code_list) def ParseToAST(pump_src_text): """Convert the given Pump source text into an AST.""" tokens = list(Tokenize(pump_src_text)) code_node = ParseCodeNode(tokens) return code_node class Env: def __init__(self): self.variables = [] self.ranges = [] def Clone(self): clone = Env() clone.variables = self.variables[:] clone.ranges = self.ranges[:] return clone def PushVariable(self, var, value): # If value looks like an int, store it as an int. 
try: int_value = int(value) if ('%s' % int_value) == value: value = int_value except Exception: pass self.variables[:0] = [(var, value)] def PopVariable(self): self.variables[:1] = [] def PushRange(self, var, lower, upper): self.ranges[:0] = [(var, lower, upper)] def PopRange(self): self.ranges[:1] = [] def GetValue(self, identifier): for (var, value) in self.variables: if identifier == var: return value print 'ERROR: meta variable %s is undefined.' % (identifier,) sys.exit(1) def EvalExp(self, exp): try: result = eval(exp.python_exp) except Exception, e: print 'ERROR: caught exception %s: %s' % (e.__class__.__name__, e) print ('ERROR: failed to evaluate meta expression %s at %s' % (exp.python_exp, exp.token.start)) sys.exit(1) return result def GetRange(self, identifier): for (var, lower, upper) in self.ranges: if identifier == var: return (lower, upper) print 'ERROR: range %s is undefined.' % (identifier,) sys.exit(1) class Output: def __init__(self): self.string = '' def GetLastLine(self): index = self.string.rfind('\n') if index < 0: return '' return self.string[index + 1:] def Append(self, s): self.string += s def RunAtomicCode(env, node, output): if isinstance(node, VarNode): identifier = node.identifier.value.strip() result = Output() RunAtomicCode(env.Clone(), node.atomic_code, result) value = result.string env.PushVariable(identifier, value) elif isinstance(node, RangeNode): identifier = node.identifier.value.strip() lower = int(env.EvalExp(node.exp1)) upper = int(env.EvalExp(node.exp2)) env.PushRange(identifier, lower, upper) elif isinstance(node, ForNode): identifier = node.identifier.value.strip() if node.sep is None: sep = '' else: sep = node.sep.value (lower, upper) = env.GetRange(identifier) for i in range(lower, upper + 1): new_env = env.Clone() new_env.PushVariable(identifier, i) RunCode(new_env, node.code, output) if i != upper: output.Append(sep) elif isinstance(node, RawCodeNode): output.Append(node.raw_code.value) elif isinstance(node, IfNode): cond = env.EvalExp(node.exp) if cond: RunCode(env.Clone(), node.then_branch, output) elif node.else_branch is not None: RunCode(env.Clone(), node.else_branch, output) elif isinstance(node, ExpNode): value = env.EvalExp(node) output.Append('%s' % (value,)) elif isinstance(node, LiteralDollarNode): output.Append('$') elif isinstance(node, CodeNode): RunCode(env.Clone(), node, output) else: print 'BAD' print node sys.exit(1) def RunCode(env, code_node, output): for atomic_code in code_node.atomic_code: RunAtomicCode(env, atomic_code, output) def IsSingleLineComment(cur_line): return '//' in cur_line def IsInPreprocessorDirective(prev_lines, cur_line): if cur_line.lstrip().startswith('#'): return True return prev_lines and prev_lines[-1].endswith('\\') def WrapComment(line, output): loc = line.find('//') before_comment = line[:loc].rstrip() if before_comment == '': indent = loc else: output.append(before_comment) indent = len(before_comment) - len(before_comment.lstrip()) prefix = indent*' ' + '// ' max_len = 80 - len(prefix) comment = line[loc + 2:].strip() segs = [seg for seg in re.split(r'(\w+\W*)', comment) if seg != ''] cur_line = '' for seg in segs: if len((cur_line + seg).rstrip()) < max_len: cur_line += seg else: if cur_line.strip() != '': output.append(prefix + cur_line.rstrip()) cur_line = seg.lstrip() if cur_line.strip() != '': output.append(prefix + cur_line.strip()) def WrapCode(line, line_concat, output): indent = len(line) - len(line.lstrip()) prefix = indent*' ' # Prefix of the current line max_len = 80 - indent - 
len(line_concat) # Maximum length of the current line new_prefix = prefix + 4*' ' # Prefix of a continuation line new_max_len = max_len - 4 # Maximum length of a continuation line # Prefers to wrap a line after a ',' or ';'. segs = [seg for seg in re.split(r'([^,;]+[,;]?)', line.strip()) if seg != ''] cur_line = '' # The current line without leading spaces. for seg in segs: # If the line is still too long, wrap at a space. while cur_line == '' and len(seg.strip()) > max_len: seg = seg.lstrip() split_at = seg.rfind(' ', 0, max_len) output.append(prefix + seg[:split_at].strip() + line_concat) seg = seg[split_at + 1:] prefix = new_prefix max_len = new_max_len if len((cur_line + seg).rstrip()) < max_len: cur_line = (cur_line + seg).lstrip() else: output.append(prefix + cur_line.rstrip() + line_concat) prefix = new_prefix max_len = new_max_len cur_line = seg.lstrip() if cur_line.strip() != '': output.append(prefix + cur_line.strip()) def WrapPreprocessorDirective(line, output): WrapCode(line, ' \\', output) def WrapPlainCode(line, output): WrapCode(line, '', output) def IsMultiLineIWYUPragma(line): return re.search(r'/\* IWYU pragma: ', line) def IsHeaderGuardIncludeOrOneLineIWYUPragma(line): return (re.match(r'^#(ifndef|define|endif\s*//)\s*[\w_]+\s*$', line) or re.match(r'^#include\s', line) or # Don't break IWYU pragmas, either; that causes iwyu.py problems. re.search(r'// IWYU pragma: ', line)) def WrapLongLine(line, output): line = line.rstrip() if len(line) <= 80: output.append(line) elif IsSingleLineComment(line): if IsHeaderGuardIncludeOrOneLineIWYUPragma(line): # The style guide made an exception to allow long header guard lines, # includes and IWYU pragmas. output.append(line) else: WrapComment(line, output) elif IsInPreprocessorDirective(output, line): if IsHeaderGuardIncludeOrOneLineIWYUPragma(line): # The style guide made an exception to allow long header guard lines, # includes and IWYU pragmas. output.append(line) else: WrapPreprocessorDirective(line, output) elif IsMultiLineIWYUPragma(line): output.append(line) else: WrapPlainCode(line, output) def BeautifyCode(string): lines = string.splitlines() output = [] for line in lines: WrapLongLine(line, output) output2 = [line.rstrip() for line in output] return '\n'.join(output2) + '\n' def ConvertFromPumpSource(src_text): """Return the text generated from the given Pump source text.""" ast = ParseToAST(StripMetaComments(src_text)) output = Output() RunCode(Env(), ast, output) return BeautifyCode(output.string) def main(argv): if len(argv) == 1: print __doc__ sys.exit(1) file_path = argv[-1] output_str = ConvertFromPumpSource(file(file_path, 'r').read()) if file_path.endswith('.pump'): output_file_path = file_path[:-5] else: output_file_path = '-' if output_file_path == '-': print output_str, else: output_file = file(output_file_path, 'w') output_file.write('// This file was GENERATED by command:\n') output_file.write('// %s %s\n' % (os.path.basename(__file__), os.path.basename(file_path))) output_file.write('// DO NOT EDIT BY HAND!!!\n\n') output_file.write(output_str) output_file.close() if __name__ == '__main__': main(sys.argv) ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/test/000077500000000000000000000000001346026536000207415ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/scripts/test/Makefile000066400000000000000000000034121346026536000224010ustar00rootroot00000000000000# A Makefile for fusing Google Test and building a sample test against it. # # SYNOPSIS: # # make [all] - makes everything. 
# make TARGET - makes the given target. # make check - makes everything and runs the built sample test. # make clean - removes all files generated by make. # Points to the root of fused Google Test, relative to where this file is. FUSED_GTEST_DIR = output # Paths to the fused gtest files. FUSED_GTEST_H = $(FUSED_GTEST_DIR)/gtest/gtest.h FUSED_GTEST_ALL_CC = $(FUSED_GTEST_DIR)/gtest/gtest-all.cc # Where to find the sample test. SAMPLE_DIR = ../../samples # Where to find gtest_main.cc. GTEST_MAIN_CC = ../../src/gtest_main.cc # Flags passed to the preprocessor. # We have no idea here whether pthreads is available in the system, so # disable its use. CPPFLAGS += -I$(FUSED_GTEST_DIR) -DGTEST_HAS_PTHREAD=0 # Flags passed to the C++ compiler. CXXFLAGS += -g all : sample1_unittest check : all ./sample1_unittest clean : rm -rf $(FUSED_GTEST_DIR) sample1_unittest *.o $(FUSED_GTEST_H) : ../fuse_gtest_files.py $(FUSED_GTEST_DIR) $(FUSED_GTEST_ALL_CC) : ../fuse_gtest_files.py $(FUSED_GTEST_DIR) gtest-all.o : $(FUSED_GTEST_H) $(FUSED_GTEST_ALL_CC) $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(FUSED_GTEST_DIR)/gtest/gtest-all.cc gtest_main.o : $(FUSED_GTEST_H) $(GTEST_MAIN_CC) $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(GTEST_MAIN_CC) sample1.o : $(SAMPLE_DIR)/sample1.cc $(SAMPLE_DIR)/sample1.h $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(SAMPLE_DIR)/sample1.cc sample1_unittest.o : $(SAMPLE_DIR)/sample1_unittest.cc \ $(SAMPLE_DIR)/sample1.h $(FUSED_GTEST_H) $(CXX) $(CPPFLAGS) $(CXXFLAGS) -c $(SAMPLE_DIR)/sample1_unittest.cc sample1_unittest : sample1.o sample1_unittest.o gtest-all.o gtest_main.o $(CXX) $(CPPFLAGS) $(CXXFLAGS) $^ -o $@ ffc-2019.1.0.post0/libs/gtest-1.7.0/src/000077500000000000000000000000001346026536000170625ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-all.cc000066400000000000000000000041611346026536000212670ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Author: mheule@google.com (Markus Heule) // // Google C++ Testing Framework (Google Test) // // Sometimes it's desirable to build Google Test by compiling a single file. // This file serves this purpose. // This line ensures that gtest.h can be compiled on its own, even // when it's fused. #include "gtest/gtest.h" // The following lines pull in the real gtest *.cc files. #include "src/gtest.cc" #include "src/gtest-death-test.cc" #include "src/gtest-filepath.cc" #include "src/gtest-port.cc" #include "src/gtest-printers.cc" #include "src/gtest-test-part.cc" #include "src/gtest-typed-test.cc" ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-death-test.cc000066400000000000000000001434051346026536000225660ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan), vladl@google.com (Vlad Losev) // // This file implements death tests. #include "gtest/gtest-death-test.h" #include "gtest/internal/gtest-port.h" #if GTEST_HAS_DEATH_TEST # if GTEST_OS_MAC # include # endif // GTEST_OS_MAC # include # include # include # if GTEST_OS_LINUX # include # endif // GTEST_OS_LINUX # include # if GTEST_OS_WINDOWS # include # else # include # include # endif // GTEST_OS_WINDOWS # if GTEST_OS_QNX # include # endif // GTEST_OS_QNX #endif // GTEST_HAS_DEATH_TEST #include "gtest/gtest-message.h" #include "gtest/internal/gtest-string.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { // Constants. // The default death test style. 
static const char kDefaultDeathTestStyle[] = "fast"; GTEST_DEFINE_string_( death_test_style, internal::StringFromGTestEnv("death_test_style", kDefaultDeathTestStyle), "Indicates how to run a death test in a forked child process: " "\"threadsafe\" (child process re-executes the test binary " "from the beginning, running only the specific death test) or " "\"fast\" (child process runs the death test immediately " "after forking)."); GTEST_DEFINE_bool_( death_test_use_fork, internal::BoolFromGTestEnv("death_test_use_fork", false), "Instructs to use fork()/_exit() instead of clone() in death tests. " "Ignored and always uses fork() on POSIX systems where clone() is not " "implemented. Useful when running under valgrind or similar tools if " "those do not support clone(). Valgrind 3.3.1 will just fail if " "it sees an unsupported combination of clone() flags. " "It is not recommended to use this flag w/o valgrind though it will " "work in 99% of the cases. Once valgrind is fixed, this flag will " "most likely be removed."); namespace internal { GTEST_DEFINE_string_( internal_run_death_test, "", "Indicates the file, line number, temporal index of " "the single death test to run, and a file descriptor to " "which a success code may be sent, all separated by " "the '|' characters. This flag is specified if and only if the current " "process is a sub-process launched for running a thread-safe " "death test. FOR INTERNAL USE ONLY."); } // namespace internal #if GTEST_HAS_DEATH_TEST namespace internal { // Valid only for fast death tests. Indicates the code is running in the // child process of a fast style death test. static bool g_in_fast_death_test_child = false; // Returns a Boolean value indicating whether the caller is currently // executing in the context of the death test child process. Tools such as // Valgrind heap checkers may need this to modify their behavior in death // tests. IMPORTANT: This is an internal utility. Using it may break the // implementation of death tests. User code MUST NOT use it. bool InDeathTestChild() { # if GTEST_OS_WINDOWS // On Windows, death tests are thread-safe regardless of the value of the // death_test_style flag. return !GTEST_FLAG(internal_run_death_test).empty(); # else if (GTEST_FLAG(death_test_style) == "threadsafe") return !GTEST_FLAG(internal_run_death_test).empty(); else return g_in_fast_death_test_child; #endif } } // namespace internal // ExitedWithCode constructor. ExitedWithCode::ExitedWithCode(int exit_code) : exit_code_(exit_code) { } // ExitedWithCode function-call operator. bool ExitedWithCode::operator()(int exit_status) const { # if GTEST_OS_WINDOWS return exit_status == exit_code_; # else return WIFEXITED(exit_status) && WEXITSTATUS(exit_status) == exit_code_; # endif // GTEST_OS_WINDOWS } # if !GTEST_OS_WINDOWS // KilledBySignal constructor. KilledBySignal::KilledBySignal(int signum) : signum_(signum) { } // KilledBySignal function-call operator. bool KilledBySignal::operator()(int exit_status) const { return WIFSIGNALED(exit_status) && WTERMSIG(exit_status) == signum_; } # endif // !GTEST_OS_WINDOWS namespace internal { // Utilities needed for death tests. // Generates a textual description of a given exit code, in the format // specified by wait(2). 
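// (Editorial illustration, not part of the original source.)  A child that
// exited normally via exit(1) is summarized as "Exited with exit status 1";
// a child killed by SIGSEGV is summarized as "Terminated by signal 11" on
// Linux, with " (core dumped)" appended when WCOREDUMP() reports a core.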
static std::string ExitSummary(int exit_code) { Message m; # if GTEST_OS_WINDOWS m << "Exited with exit status " << exit_code; # else if (WIFEXITED(exit_code)) { m << "Exited with exit status " << WEXITSTATUS(exit_code); } else if (WIFSIGNALED(exit_code)) { m << "Terminated by signal " << WTERMSIG(exit_code); } # ifdef WCOREDUMP if (WCOREDUMP(exit_code)) { m << " (core dumped)"; } # endif # endif // GTEST_OS_WINDOWS return m.GetString(); } // Returns true if exit_status describes a process that was terminated // by a signal, or exited normally with a nonzero exit code. bool ExitedUnsuccessfully(int exit_status) { return !ExitedWithCode(0)(exit_status); } # if !GTEST_OS_WINDOWS // Generates a textual failure message when a death test finds more than // one thread running, or cannot determine the number of threads, prior // to executing the given statement. It is the responsibility of the // caller not to pass a thread_count of 1. static std::string DeathTestThreadWarning(size_t thread_count) { Message msg; msg << "Death tests use fork(), which is unsafe particularly" << " in a threaded context. For this test, " << GTEST_NAME_ << " "; if (thread_count == 0) msg << "couldn't detect the number of threads."; else msg << "detected " << thread_count << " threads."; return msg.GetString(); } # endif // !GTEST_OS_WINDOWS // Flag characters for reporting a death test that did not die. static const char kDeathTestLived = 'L'; static const char kDeathTestReturned = 'R'; static const char kDeathTestThrew = 'T'; static const char kDeathTestInternalError = 'I'; // An enumeration describing all of the possible ways that a death test can // conclude. DIED means that the process died while executing the test // code; LIVED means that process lived beyond the end of the test code; // RETURNED means that the test statement attempted to execute a return // statement, which is not allowed; THREW means that the test statement // returned control by throwing an exception. IN_PROGRESS means the test // has not yet concluded. // TODO(vladl@google.com): Unify names and possibly values for // AbortReason, DeathTestOutcome, and flag characters above. enum DeathTestOutcome { IN_PROGRESS, DIED, LIVED, RETURNED, THREW }; // Routine for aborting the program which is safe to call from an // exec-style death test child process, in which case the error // message is propagated back to the parent process. Otherwise, the // message is simply printed to stderr. In either case, the program // then exits with status 1. void DeathTestAbort(const std::string& message) { // On a POSIX system, this function may be called from a threadsafe-style // death test child process, which operates on a very small stack. Use // the heap for any additional non-minuscule memory requirements. const InternalRunDeathTestFlag* const flag = GetUnitTestImpl()->internal_run_death_test_flag(); if (flag != NULL) { FILE* parent = posix::FDOpen(flag->write_fd(), "w"); fputc(kDeathTestInternalError, parent); fprintf(parent, "%s", message.c_str()); fflush(parent); _exit(1); } else { fprintf(stderr, "%s", message.c_str()); fflush(stderr); posix::Abort(); } } // A replacement for CHECK that calls DeathTestAbort if the assertion // fails. 
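// (Editorial illustration, not part of the original source.)  A typical
// use, mirroring calls made further down in this file, is
//
//   GTEST_DEATH_TEST_CHECK_(child_pid != -1);
//
// When the expression evaluates to false, DeathTestAbort() is invoked with
// a message built from __FILE__, __LINE__, and the stringized expression.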
# define GTEST_DEATH_TEST_CHECK_(expression) \ do { \ if (!::testing::internal::IsTrue(expression)) { \ DeathTestAbort( \ ::std::string("CHECK failed: File ") + __FILE__ + ", line " \ + ::testing::internal::StreamableToString(__LINE__) + ": " \ + #expression); \ } \ } while (::testing::internal::AlwaysFalse()) // This macro is similar to GTEST_DEATH_TEST_CHECK_, but it is meant for // evaluating any system call that fulfills two conditions: it must return // -1 on failure, and set errno to EINTR when it is interrupted and // should be tried again. The macro expands to a loop that repeatedly // evaluates the expression as long as it evaluates to -1 and sets // errno to EINTR. If the expression evaluates to -1 but errno is // something other than EINTR, DeathTestAbort is called. # define GTEST_DEATH_TEST_CHECK_SYSCALL_(expression) \ do { \ int gtest_retval; \ do { \ gtest_retval = (expression); \ } while (gtest_retval == -1 && errno == EINTR); \ if (gtest_retval == -1) { \ DeathTestAbort( \ ::std::string("CHECK failed: File ") + __FILE__ + ", line " \ + ::testing::internal::StreamableToString(__LINE__) + ": " \ + #expression + " != -1"); \ } \ } while (::testing::internal::AlwaysFalse()) // Returns the message describing the last system error in errno. std::string GetLastErrnoDescription() { return errno == 0 ? "" : posix::StrError(errno); } // This is called from a death test parent process to read a failure // message from the death test child process and log it with the FATAL // severity. On Windows, the message is read from a pipe handle. On other // platforms, it is read from a file descriptor. static void FailFromInternalError(int fd) { Message error; char buffer[256]; int num_read; do { while ((num_read = posix::Read(fd, buffer, 255)) > 0) { buffer[num_read] = '\0'; error << buffer; } } while (num_read == -1 && errno == EINTR); if (num_read == 0) { GTEST_LOG_(FATAL) << error.GetString(); } else { const int last_error = errno; GTEST_LOG_(FATAL) << "Error while reading death test internal: " << GetLastErrnoDescription() << " [" << last_error << "]"; } } // Death test constructor. Increments the running death test count // for the current test. DeathTest::DeathTest() { TestInfo* const info = GetUnitTestImpl()->current_test_info(); if (info == NULL) { DeathTestAbort("Cannot run a death test outside of a TEST or " "TEST_F construct"); } } // Creates and returns a death test by dispatching to the current // death test factory. bool DeathTest::Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) { return GetUnitTestImpl()->death_test_factory()->Create( statement, regex, file, line, test); } const char* DeathTest::LastMessage() { return last_death_test_message_.c_str(); } void DeathTest::set_last_death_test_message(const std::string& message) { last_death_test_message_ = message; } std::string DeathTest::last_death_test_message_; // Provides cross platform implementation for some death functionality. class DeathTestImpl : public DeathTest { protected: DeathTestImpl(const char* a_statement, const RE* a_regex) : statement_(a_statement), regex_(a_regex), spawned_(false), status_(-1), outcome_(IN_PROGRESS), read_fd_(-1), write_fd_(-1) {} // read_fd_ is expected to be closed and cleared by a derived class. 
~DeathTestImpl() { GTEST_DEATH_TEST_CHECK_(read_fd_ == -1); } void Abort(AbortReason reason); virtual bool Passed(bool status_ok); const char* statement() const { return statement_; } const RE* regex() const { return regex_; } bool spawned() const { return spawned_; } void set_spawned(bool is_spawned) { spawned_ = is_spawned; } int status() const { return status_; } void set_status(int a_status) { status_ = a_status; } DeathTestOutcome outcome() const { return outcome_; } void set_outcome(DeathTestOutcome an_outcome) { outcome_ = an_outcome; } int read_fd() const { return read_fd_; } void set_read_fd(int fd) { read_fd_ = fd; } int write_fd() const { return write_fd_; } void set_write_fd(int fd) { write_fd_ = fd; } // Called in the parent process only. Reads the result code of the death // test child process via a pipe, interprets it to set the outcome_ // member, and closes read_fd_. Outputs diagnostics and terminates in // case of unexpected codes. void ReadAndInterpretStatusByte(); private: // The textual content of the code this object is testing. This class // doesn't own this string and should not attempt to delete it. const char* const statement_; // The regular expression which test output must match. DeathTestImpl // doesn't own this object and should not attempt to delete it. const RE* const regex_; // True if the death test child process has been successfully spawned. bool spawned_; // The exit status of the child process. int status_; // How the death test concluded. DeathTestOutcome outcome_; // Descriptor to the read end of the pipe to the child process. It is // always -1 in the child process. The child keeps its write end of the // pipe in write_fd_. int read_fd_; // Descriptor to the child's write end of the pipe to the parent process. // It is always -1 in the parent process. The parent keeps its end of the // pipe in read_fd_. int write_fd_; }; // Called in the parent process only. Reads the result code of the death // test child process via a pipe, interprets it to set the outcome_ // member, and closes read_fd_. Outputs diagnostics and terminates in // case of unexpected codes. void DeathTestImpl::ReadAndInterpretStatusByte() { char flag; int bytes_read; // The read() here blocks until data is available (signifying the // failure of the death test) or until the pipe is closed (signifying // its success), so it's okay to call this in the parent before // the child process has exited. do { bytes_read = posix::Read(read_fd(), &flag, 1); } while (bytes_read == -1 && errno == EINTR); if (bytes_read == 0) { set_outcome(DIED); } else if (bytes_read == 1) { switch (flag) { case kDeathTestReturned: set_outcome(RETURNED); break; case kDeathTestThrew: set_outcome(THREW); break; case kDeathTestLived: set_outcome(LIVED); break; case kDeathTestInternalError: FailFromInternalError(read_fd()); // Does not return. break; default: GTEST_LOG_(FATAL) << "Death test child process reported " << "unexpected status byte (" << static_cast(flag) << ")"; } } else { GTEST_LOG_(FATAL) << "Read from death test child process failed: " << GetLastErrnoDescription(); } GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Close(read_fd())); set_read_fd(-1); } // Signals that the death test code which should have exited, didn't. // Should be called only in a death test child process. // Writes a status byte to the child's status file descriptor, then // calls _exit(1). void DeathTestImpl::Abort(AbortReason reason) { // The parent process considers the death test to be a failure if // it finds any data in our pipe. 
So, here we write a single flag byte // to the pipe, then exit. const char status_ch = reason == TEST_DID_NOT_DIE ? kDeathTestLived : reason == TEST_THREW_EXCEPTION ? kDeathTestThrew : kDeathTestReturned; GTEST_DEATH_TEST_CHECK_SYSCALL_(posix::Write(write_fd(), &status_ch, 1)); // We are leaking the descriptor here because on some platforms (i.e., // when built as Windows DLL), destructors of global objects will still // run after calling _exit(). On such systems, write_fd_ will be // indirectly closed from the destructor of UnitTestImpl, causing double // close if it is also closed here. On debug configurations, double close // may assert. As there are no in-process buffers to flush here, we are // relying on the OS to close the descriptor after the process terminates // when the destructors are not run. _exit(1); // Exits w/o any normal exit hooks (we were supposed to crash) } // Returns an indented copy of stderr output for a death test. // This makes distinguishing death test output lines from regular log lines // much easier. static ::std::string FormatDeathTestOutput(const ::std::string& output) { ::std::string ret; for (size_t at = 0; ; ) { const size_t line_end = output.find('\n', at); ret += "[ DEATH ] "; if (line_end == ::std::string::npos) { ret += output.substr(at); break; } ret += output.substr(at, line_end + 1 - at); at = line_end + 1; } return ret; } // Assesses the success or failure of a death test, using both private // members which have previously been set, and one argument: // // Private data members: // outcome: An enumeration describing how the death test // concluded: DIED, LIVED, THREW, or RETURNED. The death test // fails in the latter three cases. // status: The exit status of the child process. On *nix, it is in the // in the format specified by wait(2). On Windows, this is the // value supplied to the ExitProcess() API or a numeric code // of the exception that terminated the program. // regex: A regular expression object to be applied to // the test's captured standard error output; the death test // fails if it does not match. // // Argument: // status_ok: true if exit_status is acceptable in the context of // this particular death test, which fails if it is false // // Returns true iff all of the above conditions are met. Otherwise, the // first failing condition, in the order given above, is the one that is // reported. Also sets the last death test message string. 
bool DeathTestImpl::Passed(bool status_ok) { if (!spawned()) return false; const std::string error_message = GetCapturedStderr(); bool success = false; Message buffer; buffer << "Death test: " << statement() << "\n"; switch (outcome()) { case LIVED: buffer << " Result: failed to die.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case THREW: buffer << " Result: threw an exception.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case RETURNED: buffer << " Result: illegal return in test statement.\n" << " Error msg:\n" << FormatDeathTestOutput(error_message); break; case DIED: if (status_ok) { const bool matched = RE::PartialMatch(error_message.c_str(), *regex()); if (matched) { success = true; } else { buffer << " Result: died but not with expected error.\n" << " Expected: " << regex()->pattern() << "\n" << "Actual msg:\n" << FormatDeathTestOutput(error_message); } } else { buffer << " Result: died but not with expected exit code:\n" << " " << ExitSummary(status()) << "\n" << "Actual msg:\n" << FormatDeathTestOutput(error_message); } break; case IN_PROGRESS: default: GTEST_LOG_(FATAL) << "DeathTest::Passed somehow called before conclusion of test"; } DeathTest::set_last_death_test_message(buffer.GetString()); return success; } # if GTEST_OS_WINDOWS // WindowsDeathTest implements death tests on Windows. Due to the // specifics of starting new processes on Windows, death tests there are // always threadsafe, and Google Test considers the // --gtest_death_test_style=fast setting to be equivalent to // --gtest_death_test_style=threadsafe there. // // A few implementation notes: Like the Linux version, the Windows // implementation uses pipes for child-to-parent communication. But due to // the specifics of pipes on Windows, some extra steps are required: // // 1. The parent creates a communication pipe and stores handles to both // ends of it. // 2. The parent starts the child and provides it with the information // necessary to acquire the handle to the write end of the pipe. // 3. The child acquires the write end of the pipe and signals the parent // using a Windows event. // 4. Now the parent can release the write end of the pipe on its side. If // this is done before step 3, the object's reference count goes down to // 0 and it is destroyed, preventing the child from acquiring it. The // parent now has to release it, or read operations on the read end of // the pipe will not return when the child terminates. // 5. The parent reads child's output through the pipe (outcome code and // any possible error messages) from the pipe, and its stderr and then // determines whether to fail the test. // // Note: to distinguish Win32 API calls from the local method and function // calls, the former are explicitly resolved in the global namespace. // class WindowsDeathTest : public DeathTestImpl { public: WindowsDeathTest(const char* a_statement, const RE* a_regex, const char* file, int line) : DeathTestImpl(a_statement, a_regex), file_(file), line_(line) {} // All of these virtual functions are inherited from DeathTest. virtual int Wait(); virtual TestRole AssumeRole(); private: // The name of the file in which the death test is located. const char* const file_; // The line number on which the death test is located. const int line_; // Handle to the write end of the pipe to the child process. AutoHandle write_handle_; // Child process handle. 
AutoHandle child_handle_; // Event the child process uses to signal the parent that it has // acquired the handle to the write end of the pipe. After seeing this // event the parent can release its own handles to make sure its // ReadFile() calls return when the child terminates. AutoHandle event_handle_; }; // Waits for the child in a death test to exit, returning its exit // status, or 0 if no child process exists. As a side effect, sets the // outcome data member. int WindowsDeathTest::Wait() { if (!spawned()) return 0; // Wait until the child either signals that it has acquired the write end // of the pipe or it dies. const HANDLE wait_handles[2] = { child_handle_.Get(), event_handle_.Get() }; switch (::WaitForMultipleObjects(2, wait_handles, FALSE, // Waits for any of the handles. INFINITE)) { case WAIT_OBJECT_0: case WAIT_OBJECT_0 + 1: break; default: GTEST_DEATH_TEST_CHECK_(false); // Should not get here. } // The child has acquired the write end of the pipe or exited. // We release the handle on our side and continue. write_handle_.Reset(); event_handle_.Reset(); ReadAndInterpretStatusByte(); // Waits for the child process to exit if it haven't already. This // returns immediately if the child has already exited, regardless of // whether previous calls to WaitForMultipleObjects synchronized on this // handle or not. GTEST_DEATH_TEST_CHECK_( WAIT_OBJECT_0 == ::WaitForSingleObject(child_handle_.Get(), INFINITE)); DWORD status_code; GTEST_DEATH_TEST_CHECK_( ::GetExitCodeProcess(child_handle_.Get(), &status_code) != FALSE); child_handle_.Reset(); set_status(static_cast(status_code)); return status(); } // The AssumeRole process for a Windows death test. It creates a child // process with the same executable as the current process to run the // death test. The child process is given the --gtest_filter and // --gtest_internal_run_death_test flags such that it knows to run the // current death test only. DeathTest::TestRole WindowsDeathTest::AssumeRole() { const UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const TestInfo* const info = impl->current_test_info(); const int death_test_index = info->result()->death_test_count(); if (flag != NULL) { // ParseInternalRunDeathTestFlag() has performed all the necessary // processing. set_write_fd(flag->write_fd()); return EXECUTE_TEST; } // WindowsDeathTest uses an anonymous pipe to communicate results of // a death test. SECURITY_ATTRIBUTES handles_are_inheritable = { sizeof(SECURITY_ATTRIBUTES), NULL, TRUE }; HANDLE read_handle, write_handle; GTEST_DEATH_TEST_CHECK_( ::CreatePipe(&read_handle, &write_handle, &handles_are_inheritable, 0) // Default buffer size. != FALSE); set_read_fd(::_open_osfhandle(reinterpret_cast(read_handle), O_RDONLY)); write_handle_.Reset(write_handle); event_handle_.Reset(::CreateEvent( &handles_are_inheritable, TRUE, // The event will automatically reset to non-signaled state. FALSE, // The initial state is non-signalled. NULL)); // The even is unnamed. GTEST_DEATH_TEST_CHECK_(event_handle_.Get() != NULL); const std::string filter_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kFilterFlag + "=" + info->test_case_name() + "." 
+ info->name(); const std::string internal_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kInternalRunDeathTestFlag + "=" + file_ + "|" + StreamableToString(line_) + "|" + StreamableToString(death_test_index) + "|" + StreamableToString(static_cast(::GetCurrentProcessId())) + // size_t has the same width as pointers on both 32-bit and 64-bit // Windows platforms. // See http://msdn.microsoft.com/en-us/library/tcxf1dw6.aspx. "|" + StreamableToString(reinterpret_cast(write_handle)) + "|" + StreamableToString(reinterpret_cast(event_handle_.Get())); char executable_path[_MAX_PATH + 1]; // NOLINT GTEST_DEATH_TEST_CHECK_( _MAX_PATH + 1 != ::GetModuleFileNameA(NULL, executable_path, _MAX_PATH)); std::string command_line = std::string(::GetCommandLineA()) + " " + filter_flag + " \"" + internal_flag + "\""; DeathTest::set_last_death_test_message(""); CaptureStderr(); // Flush the log buffers since the log streams are shared with the child. FlushInfoLog(); // The child process will share the standard handles with the parent. STARTUPINFOA startup_info; memset(&startup_info, 0, sizeof(STARTUPINFO)); startup_info.dwFlags = STARTF_USESTDHANDLES; startup_info.hStdInput = ::GetStdHandle(STD_INPUT_HANDLE); startup_info.hStdOutput = ::GetStdHandle(STD_OUTPUT_HANDLE); startup_info.hStdError = ::GetStdHandle(STD_ERROR_HANDLE); PROCESS_INFORMATION process_info; GTEST_DEATH_TEST_CHECK_(::CreateProcessA( executable_path, const_cast(command_line.c_str()), NULL, // Retuned process handle is not inheritable. NULL, // Retuned thread handle is not inheritable. TRUE, // Child inherits all inheritable handles (for write_handle_). 0x0, // Default creation flags. NULL, // Inherit the parent's environment. UnitTest::GetInstance()->original_working_dir(), &startup_info, &process_info) != FALSE); child_handle_.Reset(process_info.hProcess); ::CloseHandle(process_info.hThread); set_spawned(true); return OVERSEE_TEST; } # else // We are not on Windows. // ForkingDeathTest provides implementations for most of the abstract // methods of the DeathTest interface. Only the AssumeRole method is // left undefined. class ForkingDeathTest : public DeathTestImpl { public: ForkingDeathTest(const char* statement, const RE* regex); // All of these virtual functions are inherited from DeathTest. virtual int Wait(); protected: void set_child_pid(pid_t child_pid) { child_pid_ = child_pid; } private: // PID of child process during death test; 0 in the child process itself. pid_t child_pid_; }; // Constructs a ForkingDeathTest. ForkingDeathTest::ForkingDeathTest(const char* a_statement, const RE* a_regex) : DeathTestImpl(a_statement, a_regex), child_pid_(-1) {} // Waits for the child in a death test to exit, returning its exit // status, or 0 if no child process exists. As a side effect, sets the // outcome data member. int ForkingDeathTest::Wait() { if (!spawned()) return 0; ReadAndInterpretStatusByte(); int status_value; GTEST_DEATH_TEST_CHECK_SYSCALL_(waitpid(child_pid_, &status_value, 0)); set_status(status_value); return status_value; } // A concrete death test class that forks, then immediately runs the test // in the child process. class NoExecDeathTest : public ForkingDeathTest { public: NoExecDeathTest(const char* a_statement, const RE* a_regex) : ForkingDeathTest(a_statement, a_regex) { } virtual TestRole AssumeRole(); }; // The AssumeRole process for a fork-and-run death test. It implements a // straightforward fork, with a simple pipe to transmit the status byte. 
DeathTest::TestRole NoExecDeathTest::AssumeRole() { const size_t thread_count = GetThreadCount(); if (thread_count != 1) { GTEST_LOG_(WARNING) << DeathTestThreadWarning(thread_count); } int pipe_fd[2]; GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1); DeathTest::set_last_death_test_message(""); CaptureStderr(); // When we fork the process below, the log file buffers are copied, but the // file descriptors are shared. We flush all log files here so that closing // the file descriptors in the child process doesn't throw off the // synchronization between descriptors and buffers in the parent process. // This is as close to the fork as possible to avoid a race condition in case // there are multiple threads running before the death test, and another // thread writes to the log file. FlushInfoLog(); const pid_t child_pid = fork(); GTEST_DEATH_TEST_CHECK_(child_pid != -1); set_child_pid(child_pid); if (child_pid == 0) { GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[0])); set_write_fd(pipe_fd[1]); // Redirects all logging to stderr in the child process to prevent // concurrent writes to the log files. We capture stderr in the parent // process and append the child process' output to a log. LogToStderr(); // Event forwarding to the listeners of event listener API mush be shut // down in death test subprocesses. GetUnitTestImpl()->listeners()->SuppressEventForwarding(); g_in_fast_death_test_child = true; return EXECUTE_TEST; } else { GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1])); set_read_fd(pipe_fd[0]); set_spawned(true); return OVERSEE_TEST; } } // A concrete death test class that forks and re-executes the main // program from the beginning, with command-line flags set that cause // only this specific death test to be run. class ExecDeathTest : public ForkingDeathTest { public: ExecDeathTest(const char* a_statement, const RE* a_regex, const char* file, int line) : ForkingDeathTest(a_statement, a_regex), file_(file), line_(line) { } virtual TestRole AssumeRole(); private: static ::std::vector GetArgvsForDeathTestChildProcess() { ::std::vector args = GetInjectableArgvs(); return args; } // The name of the file in which the death test is located. const char* const file_; // The line number on which the death test is located. const int line_; }; // Utility class for accumulating command-line arguments. class Arguments { public: Arguments() { args_.push_back(NULL); } ~Arguments() { for (std::vector::iterator i = args_.begin(); i != args_.end(); ++i) { free(*i); } } void AddArgument(const char* argument) { args_.insert(args_.end() - 1, posix::StrDup(argument)); } template void AddArguments(const ::std::vector& arguments) { for (typename ::std::vector::const_iterator i = arguments.begin(); i != arguments.end(); ++i) { args_.insert(args_.end() - 1, posix::StrDup(i->c_str())); } } char* const* Argv() { return &args_[0]; } private: std::vector args_; }; // A struct that encompasses the arguments to the child process of a // threadsafe-style death test process. struct ExecDeathTestArgs { char* const* argv; // Command-line arguments for the child's call to exec int close_fd; // File descriptor to close; the read end of a pipe }; # if GTEST_OS_MAC inline char** GetEnviron() { // When Google Test is built as a framework on MacOS X, the environ variable // is unavailable. Apple's documentation (man environ) recommends using // _NSGetEnviron() instead. return *_NSGetEnviron(); } # else // Some POSIX platforms expect you to declare environ. extern "C" makes // it reside in the global namespace. 
extern "C" char** environ; inline char** GetEnviron() { return environ; } # endif // GTEST_OS_MAC # if !GTEST_OS_QNX // The main function for a threadsafe-style death test child process. // This function is called in a clone()-ed process and thus must avoid // any potentially unsafe operations like malloc or libc functions. static int ExecDeathTestChildMain(void* child_arg) { ExecDeathTestArgs* const args = static_cast(child_arg); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(args->close_fd)); // We need to execute the test program in the same environment where // it was originally invoked. Therefore we change to the original // working directory first. const char* const original_dir = UnitTest::GetInstance()->original_working_dir(); // We can safely call chdir() as it's a direct system call. if (chdir(original_dir) != 0) { DeathTestAbort(std::string("chdir(\"") + original_dir + "\") failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } // We can safely call execve() as it's a direct system call. We // cannot use execvp() as it's a libc function and thus potentially // unsafe. Since execve() doesn't search the PATH, the user must // invoke the test program via a valid path that contains at least // one path separator. execve(args->argv[0], args->argv, GetEnviron()); DeathTestAbort(std::string("execve(") + args->argv[0] + ", ...) in " + original_dir + " failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } # endif // !GTEST_OS_QNX // Two utility routines that together determine the direction the stack // grows. // This could be accomplished more elegantly by a single recursive // function, but we want to guard against the unlikely possibility of // a smart compiler optimizing the recursion away. // // GTEST_NO_INLINE_ is required to prevent GCC 4.6 from inlining // StackLowerThanAddress into StackGrowsDown, which then doesn't give // correct answer. void StackLowerThanAddress(const void* ptr, bool* result) GTEST_NO_INLINE_; void StackLowerThanAddress(const void* ptr, bool* result) { int dummy; *result = (&dummy < ptr); } bool StackGrowsDown() { int dummy; bool result; StackLowerThanAddress(&dummy, &result); return result; } // Spawns a child process with the same executable as the current process in // a thread-safe manner and instructs it to run the death test. The // implementation uses fork(2) + exec. On systems where clone(2) is // available, it is used instead, being slightly more thread-safe. On QNX, // fork supports only single-threaded environments, so this function uses // spawn(2) there instead. The function dies with an error message if // anything goes wrong. static pid_t ExecDeathTestSpawnChild(char* const* argv, int close_fd) { ExecDeathTestArgs args = { argv, close_fd }; pid_t child_pid = -1; # if GTEST_OS_QNX // Obtains the current directory and sets it to be closed in the child // process. const int cwd_fd = open(".", O_RDONLY); GTEST_DEATH_TEST_CHECK_(cwd_fd != -1); GTEST_DEATH_TEST_CHECK_SYSCALL_(fcntl(cwd_fd, F_SETFD, FD_CLOEXEC)); // We need to execute the test program in the same environment where // it was originally invoked. Therefore we change to the original // working directory first. const char* const original_dir = UnitTest::GetInstance()->original_working_dir(); // We can safely call chdir() as it's a direct system call. if (chdir(original_dir) != 0) { DeathTestAbort(std::string("chdir(\"") + original_dir + "\") failed: " + GetLastErrnoDescription()); return EXIT_FAILURE; } int fd_flags; // Set close_fd to be closed after spawn. 
GTEST_DEATH_TEST_CHECK_SYSCALL_(fd_flags = fcntl(close_fd, F_GETFD)); GTEST_DEATH_TEST_CHECK_SYSCALL_(fcntl(close_fd, F_SETFD, fd_flags | FD_CLOEXEC)); struct inheritance inherit = {0}; // spawn is a system call. child_pid = spawn(args.argv[0], 0, NULL, &inherit, args.argv, GetEnviron()); // Restores the current working directory. GTEST_DEATH_TEST_CHECK_(fchdir(cwd_fd) != -1); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(cwd_fd)); # else // GTEST_OS_QNX # if GTEST_OS_LINUX // When a SIGPROF signal is received while fork() or clone() are executing, // the process may hang. To avoid this, we ignore SIGPROF here and re-enable // it after the call to fork()/clone() is complete. struct sigaction saved_sigprof_action; struct sigaction ignore_sigprof_action; memset(&ignore_sigprof_action, 0, sizeof(ignore_sigprof_action)); sigemptyset(&ignore_sigprof_action.sa_mask); ignore_sigprof_action.sa_handler = SIG_IGN; GTEST_DEATH_TEST_CHECK_SYSCALL_(sigaction( SIGPROF, &ignore_sigprof_action, &saved_sigprof_action)); # endif // GTEST_OS_LINUX # if GTEST_HAS_CLONE const bool use_fork = GTEST_FLAG(death_test_use_fork); if (!use_fork) { static const bool stack_grows_down = StackGrowsDown(); const size_t stack_size = getpagesize(); // MMAP_ANONYMOUS is not defined on Mac, so we use MAP_ANON instead. void* const stack = mmap(NULL, stack_size, PROT_READ | PROT_WRITE, MAP_ANON | MAP_PRIVATE, -1, 0); GTEST_DEATH_TEST_CHECK_(stack != MAP_FAILED); // Maximum stack alignment in bytes: For a downward-growing stack, this // amount is subtracted from size of the stack space to get an address // that is within the stack space and is aligned on all systems we care // about. As far as I know there is no ABI with stack alignment greater // than 64. We assume stack and stack_size already have alignment of // kMaxStackAlignment. const size_t kMaxStackAlignment = 64; void* const stack_top = static_cast(stack) + (stack_grows_down ? stack_size - kMaxStackAlignment : 0); GTEST_DEATH_TEST_CHECK_(stack_size > kMaxStackAlignment && reinterpret_cast(stack_top) % kMaxStackAlignment == 0); child_pid = clone(&ExecDeathTestChildMain, stack_top, SIGCHLD, &args); GTEST_DEATH_TEST_CHECK_(munmap(stack, stack_size) != -1); } # else const bool use_fork = true; # endif // GTEST_HAS_CLONE if (use_fork && (child_pid = fork()) == 0) { ExecDeathTestChildMain(&args); _exit(0); } # endif // GTEST_OS_QNX # if GTEST_OS_LINUX GTEST_DEATH_TEST_CHECK_SYSCALL_( sigaction(SIGPROF, &saved_sigprof_action, NULL)); # endif // GTEST_OS_LINUX GTEST_DEATH_TEST_CHECK_(child_pid != -1); return child_pid; } // The AssumeRole process for a fork-and-exec death test. It re-executes the // main program from the beginning, setting the --gtest_filter // and --gtest_internal_run_death_test flags to cause only the current // death test to be re-run. 
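//
// [Illustrative aside, not part of Google Test] ExecDeathTestSpawnChild()
// above shows the clone(2) pattern: allocate a private child stack with
// mmap(), pass its top (stacks grow down on the platforms handled here) and
// SIGCHLD to clone(), then reap the child with waitpid(). A minimal sketch,
// assuming Linux/glibc; all names and sizes below are illustrative:
//
//   #ifndef _GNU_SOURCE
//   # define _GNU_SOURCE
//   #endif
//   #include <sched.h>
//   #include <csignal>
//   #include <cstdio>
//   #include <sys/mman.h>
//   #include <sys/wait.h>
//
//   static int ChildMain(void*) {
//     std::printf("hello from the clone()d child\n");
//     return 0;
//   }
//
//   int main() {
//     const size_t kStackSize = 64 * 1024;
//     // MAP_ANONYMOUS is spelled MAP_ANON on some systems (see above).
//     void* const stack = mmap(NULL, kStackSize, PROT_READ | PROT_WRITE,
//                              MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
//     if (stack == MAP_FAILED) { std::perror("mmap"); return 1; }
//     void* const stack_top = static_cast<char*>(stack) + kStackSize;
//     const int child = clone(&ChildMain, stack_top, SIGCHLD, NULL);
//     if (child == -1) { std::perror("clone"); return 1; }
//     waitpid(child, NULL, 0);
//     munmap(stack, kStackSize);
//     return 0;
//   }
//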
DeathTest::TestRole ExecDeathTest::AssumeRole() { const UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const TestInfo* const info = impl->current_test_info(); const int death_test_index = info->result()->death_test_count(); if (flag != NULL) { set_write_fd(flag->write_fd()); return EXECUTE_TEST; } int pipe_fd[2]; GTEST_DEATH_TEST_CHECK_(pipe(pipe_fd) != -1); // Clear the close-on-exec flag on the write end of the pipe, lest // it be closed when the child process does an exec: GTEST_DEATH_TEST_CHECK_(fcntl(pipe_fd[1], F_SETFD, 0) != -1); const std::string filter_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kFilterFlag + "=" + info->test_case_name() + "." + info->name(); const std::string internal_flag = std::string("--") + GTEST_FLAG_PREFIX_ + kInternalRunDeathTestFlag + "=" + file_ + "|" + StreamableToString(line_) + "|" + StreamableToString(death_test_index) + "|" + StreamableToString(pipe_fd[1]); Arguments args; args.AddArguments(GetArgvsForDeathTestChildProcess()); args.AddArgument(filter_flag.c_str()); args.AddArgument(internal_flag.c_str()); DeathTest::set_last_death_test_message(""); CaptureStderr(); // See the comment in NoExecDeathTest::AssumeRole for why the next line // is necessary. FlushInfoLog(); const pid_t child_pid = ExecDeathTestSpawnChild(args.Argv(), pipe_fd[0]); GTEST_DEATH_TEST_CHECK_SYSCALL_(close(pipe_fd[1])); set_child_pid(child_pid); set_read_fd(pipe_fd[0]); set_spawned(true); return OVERSEE_TEST; } # endif // !GTEST_OS_WINDOWS // Creates a concrete DeathTest-derived class that depends on the // --gtest_death_test_style flag, and sets the pointer pointed to // by the "test" argument to its address. If the test should be // skipped, sets that pointer to NULL. Returns true, unless the // flag is set to an invalid value. bool DefaultDeathTestFactory::Create(const char* statement, const RE* regex, const char* file, int line, DeathTest** test) { UnitTestImpl* const impl = GetUnitTestImpl(); const InternalRunDeathTestFlag* const flag = impl->internal_run_death_test_flag(); const int death_test_index = impl->current_test_info() ->increment_death_test_count(); if (flag != NULL) { if (death_test_index > flag->index()) { DeathTest::set_last_death_test_message( "Death test count (" + StreamableToString(death_test_index) + ") somehow exceeded expected maximum (" + StreamableToString(flag->index()) + ")"); return false; } if (!(flag->file() == file && flag->line() == line && flag->index() == death_test_index)) { *test = NULL; return true; } } # if GTEST_OS_WINDOWS if (GTEST_FLAG(death_test_style) == "threadsafe" || GTEST_FLAG(death_test_style) == "fast") { *test = new WindowsDeathTest(statement, regex, file, line); } # else if (GTEST_FLAG(death_test_style) == "threadsafe") { *test = new ExecDeathTest(statement, regex, file, line); } else if (GTEST_FLAG(death_test_style) == "fast") { *test = new NoExecDeathTest(statement, regex); } # endif // GTEST_OS_WINDOWS else { // NOLINT - this is more readable than unbalanced brackets inside #if. DeathTest::set_last_death_test_message( "Unknown death test style \"" + GTEST_FLAG(death_test_style) + "\" encountered"); return false; } return true; } // Splits a given string on a given delimiter, populating a given // vector with the fields. GTEST_HAS_DEATH_TEST implies that we have // ::std::string, so we can use it here. 
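//
// [Illustrative aside, not part of Google Test] On POSIX systems the child
// re-run by ExecDeathTest::AssumeRole() above receives two extra flags: a
// --gtest_filter selecting exactly one test, and a pipe ('|') separated
// --gtest_internal_run_death_test record of the form file|line|index|write_fd,
// which ParseInternalRunDeathTestFlag() later takes apart (using the
// SplitString() helper defined next). A small sketch of that round trip;
// all concrete values are made up:
//
//   #include <iostream>
//   #include <sstream>
//   #include <string>
//   #include <vector>
//
//   int main() {
//     const std::string filter_flag =
//         "--gtest_filter=MyDeathTest.DiesOnBadInput";
//     const std::string internal_flag =
//         "--gtest_internal_run_death_test=my_test.cc|123|1|5";
//
//     // Split the value part on '|' (file, line, death test index, write fd).
//     const std::string value =
//         internal_flag.substr(internal_flag.find('=') + 1);
//     std::vector<std::string> fields;
//     std::istringstream stream(value);
//     std::string field;
//     while (std::getline(stream, field, '|')) fields.push_back(field);
//
//     std::cout << filter_flag << "\n";
//     for (size_t i = 0; i < fields.size(); ++i)
//       std::cout << "field " << i << ": " << fields[i] << "\n";
//     return 0;
//   }
//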
static void SplitString(const ::std::string& str, char delimiter, ::std::vector< ::std::string>* dest) { ::std::vector< ::std::string> parsed; ::std::string::size_type pos = 0; while (::testing::internal::AlwaysTrue()) { const ::std::string::size_type colon = str.find(delimiter, pos); if (colon == ::std::string::npos) { parsed.push_back(str.substr(pos)); break; } else { parsed.push_back(str.substr(pos, colon - pos)); pos = colon + 1; } } dest->swap(parsed); } # if GTEST_OS_WINDOWS // Recreates the pipe and event handles from the provided parameters, // signals the event, and returns a file descriptor wrapped around the pipe // handle. This function is called in the child process only. int GetStatusFileDescriptor(unsigned int parent_process_id, size_t write_handle_as_size_t, size_t event_handle_as_size_t) { AutoHandle parent_process_handle(::OpenProcess(PROCESS_DUP_HANDLE, FALSE, // Non-inheritable. parent_process_id)); if (parent_process_handle.Get() == INVALID_HANDLE_VALUE) { DeathTestAbort("Unable to open parent process " + StreamableToString(parent_process_id)); } // TODO(vladl@google.com): Replace the following check with a // compile-time assertion when available. GTEST_CHECK_(sizeof(HANDLE) <= sizeof(size_t)); const HANDLE write_handle = reinterpret_cast(write_handle_as_size_t); HANDLE dup_write_handle; // The newly initialized handle is accessible only in in the parent // process. To obtain one accessible within the child, we need to use // DuplicateHandle. if (!::DuplicateHandle(parent_process_handle.Get(), write_handle, ::GetCurrentProcess(), &dup_write_handle, 0x0, // Requested privileges ignored since // DUPLICATE_SAME_ACCESS is used. FALSE, // Request non-inheritable handler. DUPLICATE_SAME_ACCESS)) { DeathTestAbort("Unable to duplicate the pipe handle " + StreamableToString(write_handle_as_size_t) + " from the parent process " + StreamableToString(parent_process_id)); } const HANDLE event_handle = reinterpret_cast(event_handle_as_size_t); HANDLE dup_event_handle; if (!::DuplicateHandle(parent_process_handle.Get(), event_handle, ::GetCurrentProcess(), &dup_event_handle, 0x0, FALSE, DUPLICATE_SAME_ACCESS)) { DeathTestAbort("Unable to duplicate the event handle " + StreamableToString(event_handle_as_size_t) + " from the parent process " + StreamableToString(parent_process_id)); } const int write_fd = ::_open_osfhandle(reinterpret_cast(dup_write_handle), O_APPEND); if (write_fd == -1) { DeathTestAbort("Unable to convert pipe handle " + StreamableToString(write_handle_as_size_t) + " to a file descriptor"); } // Signals the parent that the write end of the pipe has been acquired // so the parent can release its own write end. ::SetEvent(dup_event_handle); return write_fd; } # endif // GTEST_OS_WINDOWS // Returns a newly created InternalRunDeathTestFlag object with fields // initialized from the GTEST_FLAG(internal_run_death_test) flag if // the flag is specified; otherwise returns NULL. InternalRunDeathTestFlag* ParseInternalRunDeathTestFlag() { if (GTEST_FLAG(internal_run_death_test) == "") return NULL; // GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we // can use it here. 
int line = -1; int index = -1; ::std::vector< ::std::string> fields; SplitString(GTEST_FLAG(internal_run_death_test).c_str(), '|', &fields); int write_fd = -1; # if GTEST_OS_WINDOWS unsigned int parent_process_id = 0; size_t write_handle_as_size_t = 0; size_t event_handle_as_size_t = 0; if (fields.size() != 6 || !ParseNaturalNumber(fields[1], &line) || !ParseNaturalNumber(fields[2], &index) || !ParseNaturalNumber(fields[3], &parent_process_id) || !ParseNaturalNumber(fields[4], &write_handle_as_size_t) || !ParseNaturalNumber(fields[5], &event_handle_as_size_t)) { DeathTestAbort("Bad --gtest_internal_run_death_test flag: " + GTEST_FLAG(internal_run_death_test)); } write_fd = GetStatusFileDescriptor(parent_process_id, write_handle_as_size_t, event_handle_as_size_t); # else if (fields.size() != 4 || !ParseNaturalNumber(fields[1], &line) || !ParseNaturalNumber(fields[2], &index) || !ParseNaturalNumber(fields[3], &write_fd)) { DeathTestAbort("Bad --gtest_internal_run_death_test flag: " + GTEST_FLAG(internal_run_death_test)); } # endif // GTEST_OS_WINDOWS return new InternalRunDeathTestFlag(fields[0], line, index, write_fd); } } // namespace internal #endif // GTEST_HAS_DEATH_TEST } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-filepath.cc000066400000000000000000000336361346026536000223240ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: keith.ray@gmail.com (Keith Ray) #include "gtest/gtest-message.h" #include "gtest/internal/gtest-filepath.h" #include "gtest/internal/gtest-port.h" #include #if GTEST_OS_WINDOWS_MOBILE # include #elif GTEST_OS_WINDOWS # include # include #elif GTEST_OS_SYMBIAN // Symbian OpenC has PATH_MAX in sys/syslimits.h # include #else # include # include // Some Linux distributions define PATH_MAX here. 
#endif // GTEST_OS_WINDOWS_MOBILE #if GTEST_OS_WINDOWS # define GTEST_PATH_MAX_ _MAX_PATH #elif defined(PATH_MAX) # define GTEST_PATH_MAX_ PATH_MAX #elif defined(_XOPEN_PATH_MAX) # define GTEST_PATH_MAX_ _XOPEN_PATH_MAX #else # define GTEST_PATH_MAX_ _POSIX_PATH_MAX #endif // GTEST_OS_WINDOWS #include "gtest/internal/gtest-string.h" namespace testing { namespace internal { #if GTEST_OS_WINDOWS // On Windows, '\\' is the standard path separator, but many tools and the // Windows API also accept '/' as an alternate path separator. Unless otherwise // noted, a file path can contain either kind of path separators, or a mixture // of them. const char kPathSeparator = '\\'; const char kAlternatePathSeparator = '/'; const char kPathSeparatorString[] = "\\"; const char kAlternatePathSeparatorString[] = "/"; # if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't have a current directory. You should not use // the current directory in tests on Windows CE, but this at least // provides a reasonable fallback. const char kCurrentDirectoryString[] = "\\"; // Windows CE doesn't define INVALID_FILE_ATTRIBUTES const DWORD kInvalidFileAttributes = 0xffffffff; # else const char kCurrentDirectoryString[] = ".\\"; # endif // GTEST_OS_WINDOWS_MOBILE #else const char kPathSeparator = '/'; const char kPathSeparatorString[] = "/"; const char kCurrentDirectoryString[] = "./"; #endif // GTEST_OS_WINDOWS // Returns whether the given character is a valid path separator. static bool IsPathSeparator(char c) { #if GTEST_HAS_ALT_PATH_SEP_ return (c == kPathSeparator) || (c == kAlternatePathSeparator); #else return c == kPathSeparator; #endif } // Returns the current working directory, or "" if unsuccessful. FilePath FilePath::GetCurrentDir() { #if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't have a current directory, so we just return // something reasonable. return FilePath(kCurrentDirectoryString); #elif GTEST_OS_WINDOWS char cwd[GTEST_PATH_MAX_ + 1] = { '\0' }; return FilePath(_getcwd(cwd, sizeof(cwd)) == NULL ? "" : cwd); #else char cwd[GTEST_PATH_MAX_ + 1] = { '\0' }; return FilePath(getcwd(cwd, sizeof(cwd)) == NULL ? "" : cwd); #endif // GTEST_OS_WINDOWS_MOBILE } // Returns a copy of the FilePath with the case-insensitive extension removed. // Example: FilePath("dir/file.exe").RemoveExtension("EXE") returns // FilePath("dir/file"). If a case-insensitive extension is not // found, returns a copy of the original FilePath. FilePath FilePath::RemoveExtension(const char* extension) const { const std::string dot_extension = std::string(".") + extension; if (String::EndsWithCaseInsensitive(pathname_, dot_extension)) { return FilePath(pathname_.substr( 0, pathname_.length() - dot_extension.length())); } return *this; } // Returns a pointer to the last occurence of a valid path separator in // the FilePath. On Windows, for example, both '/' and '\' are valid path // separators. Returns NULL if no path separator was found. const char* FilePath::FindLastPathSeparator() const { const char* const last_sep = strrchr(c_str(), kPathSeparator); #if GTEST_HAS_ALT_PATH_SEP_ const char* const last_alt_sep = strrchr(c_str(), kAlternatePathSeparator); // Comparing two pointers of which only one is NULL is undefined. if (last_alt_sep != NULL && (last_sep == NULL || last_alt_sep > last_sep)) { return last_alt_sep; } #endif return last_sep; } // Returns a copy of the FilePath with the directory part removed. // Example: FilePath("path/to/file").RemoveDirectoryName() returns // FilePath("file"). 
If there is no directory part ("just_a_file"), it returns // the FilePath unmodified. If there is no file part ("just_a_dir/") it // returns an empty FilePath (""). // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath FilePath::RemoveDirectoryName() const { const char* const last_sep = FindLastPathSeparator(); return last_sep ? FilePath(last_sep + 1) : *this; } // RemoveFileName returns the directory path with the filename removed. // Example: FilePath("path/to/file").RemoveFileName() returns "path/to/". // If the FilePath is "a_file" or "/a_file", RemoveFileName returns // FilePath("./") or, on Windows, FilePath(".\\"). If the filepath does // not have a file, like "just/a/dir/", it returns the FilePath unmodified. // On Windows platform, '\' is the path separator, otherwise it is '/'. FilePath FilePath::RemoveFileName() const { const char* const last_sep = FindLastPathSeparator(); std::string dir; if (last_sep) { dir = std::string(c_str(), last_sep + 1 - c_str()); } else { dir = kCurrentDirectoryString; } return FilePath(dir); } // Helper functions for naming files in a directory for xml output. // Given directory = "dir", base_name = "test", number = 0, // extension = "xml", returns "dir/test.xml". If number is greater // than zero (e.g., 12), returns "dir/test_12.xml". // On Windows platform, uses \ as the separator rather than /. FilePath FilePath::MakeFileName(const FilePath& directory, const FilePath& base_name, int number, const char* extension) { std::string file; if (number == 0) { file = base_name.string() + "." + extension; } else { file = base_name.string() + "_" + StreamableToString(number) + "." + extension; } return ConcatPaths(directory, FilePath(file)); } // Given directory = "dir", relative_path = "test.xml", returns "dir/test.xml". // On Windows, uses \ as the separator rather than /. FilePath FilePath::ConcatPaths(const FilePath& directory, const FilePath& relative_path) { if (directory.IsEmpty()) return relative_path; const FilePath dir(directory.RemoveTrailingPathSeparator()); return FilePath(dir.string() + kPathSeparator + relative_path.string()); } // Returns true if pathname describes something findable in the file-system, // either a file, directory, or whatever. bool FilePath::FileOrDirectoryExists() const { #if GTEST_OS_WINDOWS_MOBILE LPCWSTR unicode = String::AnsiToUtf16(pathname_.c_str()); const DWORD attributes = GetFileAttributes(unicode); delete [] unicode; return attributes != kInvalidFileAttributes; #else posix::StatStruct file_stat; return posix::Stat(pathname_.c_str(), &file_stat) == 0; #endif // GTEST_OS_WINDOWS_MOBILE } // Returns true if pathname describes a directory in the file-system // that exists. bool FilePath::DirectoryExists() const { bool result = false; #if GTEST_OS_WINDOWS // Don't strip off trailing separator if path is a root directory on // Windows (like "C:\\"). const FilePath& path(IsRootDirectory() ? *this : RemoveTrailingPathSeparator()); #else const FilePath& path(*this); #endif #if GTEST_OS_WINDOWS_MOBILE LPCWSTR unicode = String::AnsiToUtf16(path.c_str()); const DWORD attributes = GetFileAttributes(unicode); delete [] unicode; if ((attributes != kInvalidFileAttributes) && (attributes & FILE_ATTRIBUTE_DIRECTORY)) { result = true; } #else posix::StatStruct file_stat; result = posix::Stat(path.c_str(), &file_stat) == 0 && posix::IsDir(file_stat); #endif // GTEST_OS_WINDOWS_MOBILE return result; } // Returns true if pathname describes a root directory. (Windows has one // root directory per disk drive.) 
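//
// [Illustrative aside, not part of Google Test] The FilePath splitting
// methods above all reduce to "find the last path separator": everything up
// to and including it is the directory part, everything after it is the file
// part, and a path with no separator lives in "./". A simplified sketch with
// plain std::string, assuming POSIX '/' only (the real code also handles the
// Windows '\\' and '/' separators):
//
//   #include <iostream>
//   #include <string>
//
//   int main() {
//     const std::string path = "path/to/file.txt";
//     const std::string::size_type last_sep = path.find_last_of('/');
//     const std::string dir =
//         (last_sep == std::string::npos) ? "./" : path.substr(0, last_sep + 1);
//     const std::string file =
//         (last_sep == std::string::npos) ? path : path.substr(last_sep + 1);
//     std::cout << dir << "\n";   // "path/to/"
//     std::cout << file << "\n";  // "file.txt"
//     return 0;
//   }
//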
bool FilePath::IsRootDirectory() const { #if GTEST_OS_WINDOWS // TODO(wan@google.com): on Windows a network share like // \\server\share can be a root directory, although it cannot be the // current directory. Handle this properly. return pathname_.length() == 3 && IsAbsolutePath(); #else return pathname_.length() == 1 && IsPathSeparator(pathname_.c_str()[0]); #endif } // Returns true if pathname describes an absolute path. bool FilePath::IsAbsolutePath() const { const char* const name = pathname_.c_str(); #if GTEST_OS_WINDOWS return pathname_.length() >= 3 && ((name[0] >= 'a' && name[0] <= 'z') || (name[0] >= 'A' && name[0] <= 'Z')) && name[1] == ':' && IsPathSeparator(name[2]); #else return IsPathSeparator(name[0]); #endif } // Returns a pathname for a file that does not currently exist. The pathname // will be directory/base_name.extension or // directory/base_name_.extension if directory/base_name.extension // already exists. The number will be incremented until a pathname is found // that does not already exist. // Examples: 'dir/foo_test.xml' or 'dir/foo_test_1.xml'. // There could be a race condition if two or more processes are calling this // function at the same time -- they could both pick the same filename. FilePath FilePath::GenerateUniqueFileName(const FilePath& directory, const FilePath& base_name, const char* extension) { FilePath full_pathname; int number = 0; do { full_pathname.Set(MakeFileName(directory, base_name, number++, extension)); } while (full_pathname.FileOrDirectoryExists()); return full_pathname; } // Returns true if FilePath ends with a path separator, which indicates that // it is intended to represent a directory. Returns false otherwise. // This does NOT check that a directory (or file) actually exists. bool FilePath::IsDirectory() const { return !pathname_.empty() && IsPathSeparator(pathname_.c_str()[pathname_.length() - 1]); } // Create directories so that path exists. Returns true if successful or if // the directories already exist; returns false if unable to create directories // for any reason. bool FilePath::CreateDirectoriesRecursively() const { if (!this->IsDirectory()) { return false; } if (pathname_.length() == 0 || this->DirectoryExists()) { return true; } const FilePath parent(this->RemoveTrailingPathSeparator().RemoveFileName()); return parent.CreateDirectoriesRecursively() && this->CreateFolder(); } // Create the directory so that path exists. Returns true if successful or // if the directory already exists; returns false if unable to create the // directory for any reason, including if the parent directory does not // exist. Not named "CreateDirectory" because that's a macro on Windows. bool FilePath::CreateFolder() const { #if GTEST_OS_WINDOWS_MOBILE FilePath removed_sep(this->RemoveTrailingPathSeparator()); LPCWSTR unicode = String::AnsiToUtf16(removed_sep.c_str()); int result = CreateDirectory(unicode, NULL) ? 0 : -1; delete [] unicode; #elif GTEST_OS_WINDOWS int result = _mkdir(pathname_.c_str()); #else int result = mkdir(pathname_.c_str(), 0777); #endif // GTEST_OS_WINDOWS_MOBILE if (result == -1) { return this->DirectoryExists(); // An error is OK if the directory exists. } return true; // No error. } // If input name has a trailing separator character, remove it and return the // name, otherwise return the name string unmodified. // On Windows platform, uses \ as the separator, other platforms use /. FilePath FilePath::RemoveTrailingPathSeparator() const { return IsDirectory() ? 
FilePath(pathname_.substr(0, pathname_.length() - 1)) : *this; } // Removes any redundant separators that might be in the pathname. // For example, "bar///foo" becomes "bar/foo". Does not eliminate other // redundancies that might be in a pathname involving "." or "..". // TODO(wan@google.com): handle Windows network shares (e.g. \\server\share). void FilePath::Normalize() { if (pathname_.c_str() == NULL) { pathname_ = ""; return; } const char* src = pathname_.c_str(); char* const dest = new char[pathname_.length() + 1]; char* dest_ptr = dest; memset(dest_ptr, 0, pathname_.length() + 1); while (*src != '\0') { *dest_ptr = *src; if (!IsPathSeparator(*src)) { src++; } else { #if GTEST_HAS_ALT_PATH_SEP_ if (*dest_ptr == kAlternatePathSeparator) { *dest_ptr = kPathSeparator; } #endif while (IsPathSeparator(*src)) src++; } dest_ptr++; } *dest_ptr = '\0'; pathname_ = dest; delete[] dest; } } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-internal-inl.h000066400000000000000000001326011346026536000227560ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Utility functions and classes used by the Google C++ testing framework. // // Author: wan@google.com (Zhanyong Wan) // // This file contains purely Google Test's internal implementation. Please // DO NOT #INCLUDE IT IN A USER PROGRAM. #ifndef GTEST_SRC_GTEST_INTERNAL_INL_H_ #define GTEST_SRC_GTEST_INTERNAL_INL_H_ // GTEST_IMPLEMENTATION_ is defined to 1 iff the current translation unit is // part of Google Test's implementation; otherwise it's undefined. #if !GTEST_IMPLEMENTATION_ // A user is trying to include this from his code - just say no. # error "gtest-internal-inl.h is part of Google Test's internal implementation." # error "It must not be included except by Google Test itself." #endif // GTEST_IMPLEMENTATION_ #ifndef _WIN32_WCE # include #endif // !_WIN32_WCE #include #include // For strtoll/_strtoul64/malloc/free. #include // For memmove. 
#include #include #include #include "gtest/internal/gtest-port.h" #if GTEST_CAN_STREAM_RESULTS_ # include // NOLINT # include // NOLINT #endif #if GTEST_OS_WINDOWS # include // NOLINT #endif // GTEST_OS_WINDOWS #include "gtest/gtest.h" // NOLINT #include "gtest/gtest-spi.h" namespace testing { // Declares the flags. // // We don't want the users to modify this flag in the code, but want // Google Test's own unit tests to be able to access it. Therefore we // declare it here as opposed to in gtest.h. GTEST_DECLARE_bool_(death_test_use_fork); namespace internal { // The value of GetTestTypeId() as seen from within the Google Test // library. This is solely for testing GetTestTypeId(). GTEST_API_ extern const TypeId kTestTypeIdInGoogleTest; // Names of the flags (needed for parsing Google Test flags). const char kAlsoRunDisabledTestsFlag[] = "also_run_disabled_tests"; const char kBreakOnFailureFlag[] = "break_on_failure"; const char kCatchExceptionsFlag[] = "catch_exceptions"; const char kColorFlag[] = "color"; const char kFilterFlag[] = "filter"; const char kListTestsFlag[] = "list_tests"; const char kOutputFlag[] = "output"; const char kPrintTimeFlag[] = "print_time"; const char kRandomSeedFlag[] = "random_seed"; const char kRepeatFlag[] = "repeat"; const char kShuffleFlag[] = "shuffle"; const char kStackTraceDepthFlag[] = "stack_trace_depth"; const char kStreamResultToFlag[] = "stream_result_to"; const char kThrowOnFailureFlag[] = "throw_on_failure"; // A valid random seed must be in [1, kMaxRandomSeed]. const int kMaxRandomSeed = 99999; // g_help_flag is true iff the --help flag or an equivalent form is // specified on the command line. GTEST_API_ extern bool g_help_flag; // Returns the current time in milliseconds. GTEST_API_ TimeInMillis GetTimeInMillis(); // Returns true iff Google Test should use colors in the output. GTEST_API_ bool ShouldUseColor(bool stdout_is_tty); // Formats the given time in milliseconds as seconds. GTEST_API_ std::string FormatTimeInMillisAsSeconds(TimeInMillis ms); // Converts the given time in milliseconds to a date string in the ISO 8601 // format, without the timezone information. N.B.: due to the use the // non-reentrant localtime() function, this function is not thread safe. Do // not use it in any code that can be called from multiple threads. GTEST_API_ std::string FormatEpochTimeInMillisAsIso8601(TimeInMillis ms); // Parses a string for an Int32 flag, in the form of "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. GTEST_API_ bool ParseInt32Flag( const char* str, const char* flag, Int32* value); // Returns a random seed in range [1, kMaxRandomSeed] based on the // given --gtest_random_seed flag value. inline int GetRandomSeedFromFlag(Int32 random_seed_flag) { const unsigned int raw_seed = (random_seed_flag == 0) ? static_cast(GetTimeInMillis()) : static_cast(random_seed_flag); // Normalizes the actual seed to range [1, kMaxRandomSeed] such that // it's easy to type. const int normalized_seed = static_cast((raw_seed - 1U) % static_cast(kMaxRandomSeed)) + 1; return normalized_seed; } // Returns the first valid random seed after 'seed'. The behavior is // undefined if 'seed' is invalid. The seed after kMaxRandomSeed is // considered to be 1. 
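//
// [Illustrative aside, not part of Google Test] The seed handling above
// (GetRandomSeedFromFlag(), and GetNextRandomSeed() defined next) is plain
// modular arithmetic: any raw seed is folded into [1, kMaxRandomSeed], and
// advancing past kMaxRandomSeed wraps back to 1. A small sketch; the raw
// seed value is arbitrary:
//
//   #include <iostream>
//
//   int main() {
//     const unsigned int kMaxRandomSeed = 99999;
//     const unsigned int raw_seed = 1234567890U;
//     const unsigned int normalized = (raw_seed - 1U) % kMaxRandomSeed + 1;
//     const unsigned int next =
//         (normalized >= kMaxRandomSeed) ? 1 : normalized + 1;
//     std::cout << normalized << "\n";  // 80235
//     std::cout << next << "\n";        // 80236
//     return 0;
//   }
//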
inline int GetNextRandomSeed(int seed) { GTEST_CHECK_(1 <= seed && seed <= kMaxRandomSeed) << "Invalid random seed " << seed << " - must be in [1, " << kMaxRandomSeed << "]."; const int next_seed = seed + 1; return (next_seed > kMaxRandomSeed) ? 1 : next_seed; } // This class saves the values of all Google Test flags in its c'tor, and // restores them in its d'tor. class GTestFlagSaver { public: // The c'tor. GTestFlagSaver() { also_run_disabled_tests_ = GTEST_FLAG(also_run_disabled_tests); break_on_failure_ = GTEST_FLAG(break_on_failure); catch_exceptions_ = GTEST_FLAG(catch_exceptions); color_ = GTEST_FLAG(color); death_test_style_ = GTEST_FLAG(death_test_style); death_test_use_fork_ = GTEST_FLAG(death_test_use_fork); filter_ = GTEST_FLAG(filter); internal_run_death_test_ = GTEST_FLAG(internal_run_death_test); list_tests_ = GTEST_FLAG(list_tests); output_ = GTEST_FLAG(output); print_time_ = GTEST_FLAG(print_time); random_seed_ = GTEST_FLAG(random_seed); repeat_ = GTEST_FLAG(repeat); shuffle_ = GTEST_FLAG(shuffle); stack_trace_depth_ = GTEST_FLAG(stack_trace_depth); stream_result_to_ = GTEST_FLAG(stream_result_to); throw_on_failure_ = GTEST_FLAG(throw_on_failure); } // The d'tor is not virtual. DO NOT INHERIT FROM THIS CLASS. ~GTestFlagSaver() { GTEST_FLAG(also_run_disabled_tests) = also_run_disabled_tests_; GTEST_FLAG(break_on_failure) = break_on_failure_; GTEST_FLAG(catch_exceptions) = catch_exceptions_; GTEST_FLAG(color) = color_; GTEST_FLAG(death_test_style) = death_test_style_; GTEST_FLAG(death_test_use_fork) = death_test_use_fork_; GTEST_FLAG(filter) = filter_; GTEST_FLAG(internal_run_death_test) = internal_run_death_test_; GTEST_FLAG(list_tests) = list_tests_; GTEST_FLAG(output) = output_; GTEST_FLAG(print_time) = print_time_; GTEST_FLAG(random_seed) = random_seed_; GTEST_FLAG(repeat) = repeat_; GTEST_FLAG(shuffle) = shuffle_; GTEST_FLAG(stack_trace_depth) = stack_trace_depth_; GTEST_FLAG(stream_result_to) = stream_result_to_; GTEST_FLAG(throw_on_failure) = throw_on_failure_; } private: // Fields for saving the original values of flags. bool also_run_disabled_tests_; bool break_on_failure_; bool catch_exceptions_; std::string color_; std::string death_test_style_; bool death_test_use_fork_; std::string filter_; std::string internal_run_death_test_; bool list_tests_; std::string output_; bool print_time_; internal::Int32 random_seed_; internal::Int32 repeat_; bool shuffle_; internal::Int32 stack_trace_depth_; std::string stream_result_to_; bool throw_on_failure_; } GTEST_ATTRIBUTE_UNUSED_; // Converts a Unicode code point to a narrow string in UTF-8 encoding. // code_point parameter is of type UInt32 because wchar_t may not be // wide enough to contain a code point. // If the code_point is not a valid Unicode code point // (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted // to "(Invalid Unicode 0xXXXXXXXX)". GTEST_API_ std::string CodePointToUtf8(UInt32 code_point); // Converts a wide string to a narrow string in UTF-8 encoding. // The wide string is assumed to have the following encoding: // UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS) // UTF-32 if sizeof(wchar_t) == 4 (on Linux) // Parameter str points to a null-terminated wide string. // Parameter num_chars may additionally limit the number // of wchar_t characters processed. -1 is used when the entire string // should be processed. // If the string contains code points that are not valid Unicode code points // (i.e. 
outside of Unicode range U+0 to U+10FFFF) they will be output // as '(Invalid Unicode 0xXXXXXXXX)'. If the string is in UTF16 encoding // and contains invalid UTF-16 surrogate pairs, values in those pairs // will be encoded as individual Unicode characters from Basic Normal Plane. GTEST_API_ std::string WideStringToUtf8(const wchar_t* str, int num_chars); // Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file // if the variable is present. If a file already exists at this location, this // function will write over it. If the variable is present, but the file cannot // be created, prints an error and exits. void WriteToShardStatusFileIfNeeded(); // Checks whether sharding is enabled by examining the relevant // environment variable values. If the variables are present, // but inconsistent (e.g., shard_index >= total_shards), prints // an error and exits. If in_subprocess_for_death_test, sharding is // disabled because it must only be applied to the original test // process. Otherwise, we could filter out death tests we intended to execute. GTEST_API_ bool ShouldShard(const char* total_shards_str, const char* shard_index_str, bool in_subprocess_for_death_test); // Parses the environment variable var as an Int32. If it is unset, // returns default_val. If it is not an Int32, prints an error and // and aborts. GTEST_API_ Int32 Int32FromEnvOrDie(const char* env_var, Int32 default_val); // Given the total number of shards, the shard index, and the test id, // returns true iff the test should be run on this shard. The test id is // some arbitrary but unique non-negative integer assigned to each test // method. Assumes that 0 <= shard_index < total_shards. GTEST_API_ bool ShouldRunTestOnShard( int total_shards, int shard_index, int test_id); // STL container utilities. // Returns the number of elements in the given container that satisfy // the given predicate. template inline int CountIf(const Container& c, Predicate predicate) { // Implemented as an explicit loop since std::count_if() in libCstd on // Solaris has a non-standard signature. int count = 0; for (typename Container::const_iterator it = c.begin(); it != c.end(); ++it) { if (predicate(*it)) ++count; } return count; } // Applies a function/functor to each element in the container. template void ForEach(const Container& c, Functor functor) { std::for_each(c.begin(), c.end(), functor); } // Returns the i-th element of the vector, or default_value if i is not // in range [0, v.size()). template inline E GetElementOr(const std::vector& v, int i, E default_value) { return (i < 0 || i >= static_cast(v.size())) ? default_value : v[i]; } // Performs an in-place shuffle of a range of the vector's elements. // 'begin' and 'end' are element indices as an STL-style range; // i.e. [begin, end) are shuffled, where 'end' == size() means to // shuffle to the end of the vector. 
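//
// [Illustrative aside, not part of Google Test] GTestFlagSaver above is the
// classic RAII save/restore pattern: copy the global state in the
// constructor, write it back in the destructor, so any changes made inside a
// scope are undone automatically. A minimal sketch with a single made-up
// flag (the Fisher-Yates ShuffleRange() helper described above follows it):
//
//   #include <iostream>
//   #include <string>
//
//   static std::string g_filter = "*";  // Stands in for a real flag.
//
//   class FlagSaver {
//    public:
//     FlagSaver() : saved_filter_(g_filter) {}
//     ~FlagSaver() { g_filter = saved_filter_; }
//    private:
//     std::string saved_filter_;
//   };
//
//   int main() {
//     {
//       FlagSaver saver;
//       g_filter = "MySuite.*";           // Temporary override.
//       std::cout << g_filter << "\n";    // MySuite.*
//     }                                   // 'saver' restores the old value.
//     std::cout << g_filter << "\n";      // *
//     return 0;
//   }
//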
template void ShuffleRange(internal::Random* random, int begin, int end, std::vector* v) { const int size = static_cast(v->size()); GTEST_CHECK_(0 <= begin && begin <= size) << "Invalid shuffle range start " << begin << ": must be in range [0, " << size << "]."; GTEST_CHECK_(begin <= end && end <= size) << "Invalid shuffle range finish " << end << ": must be in range [" << begin << ", " << size << "]."; // Fisher-Yates shuffle, from // http://en.wikipedia.org/wiki/Fisher-Yates_shuffle for (int range_width = end - begin; range_width >= 2; range_width--) { const int last_in_range = begin + range_width - 1; const int selected = begin + random->Generate(range_width); std::swap((*v)[selected], (*v)[last_in_range]); } } // Performs an in-place shuffle of the vector's elements. template inline void Shuffle(internal::Random* random, std::vector* v) { ShuffleRange(random, 0, static_cast(v->size()), v); } // A function for deleting an object. Handy for being used as a // functor. template static void Delete(T* x) { delete x; } // A predicate that checks the key of a TestProperty against a known key. // // TestPropertyKeyIs is copyable. class TestPropertyKeyIs { public: // Constructor. // // TestPropertyKeyIs has NO default constructor. explicit TestPropertyKeyIs(const std::string& key) : key_(key) {} // Returns true iff the test name of test property matches on key_. bool operator()(const TestProperty& test_property) const { return test_property.key() == key_; } private: std::string key_; }; // Class UnitTestOptions. // // This class contains functions for processing options the user // specifies when running the tests. It has only static members. // // In most cases, the user can specify an option using either an // environment variable or a command line flag. E.g. you can set the // test filter using either GTEST_FILTER or --gtest_filter. If both // the variable and the flag are present, the latter overrides the // former. class GTEST_API_ UnitTestOptions { public: // Functions for processing the gtest_output flag. // Returns the output format, or "" for normal printed output. static std::string GetOutputFormat(); // Returns the absolute path of the requested output file, or the // default (test_detail.xml in the original working directory) if // none was explicitly specified. static std::string GetAbsolutePathToOutputFile(); // Functions for processing the gtest_filter flag. // Returns true iff the wildcard pattern matches the string. The // first ':' or '\0' character in pattern marks the end of it. // // This recursive algorithm isn't very efficient, but is clear and // works well enough for matching test names, which are short. static bool PatternMatchesString(const char *pattern, const char *str); // Returns true iff the user-specified filter matches the test case // name and the test name. static bool FilterMatchesTest(const std::string &test_case_name, const std::string &test_name); #if GTEST_OS_WINDOWS // Function for supporting the gtest_catch_exception flag. // Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the // given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise. // This function is useful as an __except condition. static int GTestShouldProcessSEH(DWORD exception_code); #endif // GTEST_OS_WINDOWS // Returns true if "name" matches the ':' separated list of glob-style // filters in "filter". static bool MatchesFilter(const std::string& name, const char* filter); }; // Returns the current application's name, removing directory path if that // is present. 
Used by UnitTestOptions::GetOutputFile. GTEST_API_ FilePath GetCurrentExecutableName(); // The role interface for getting the OS stack trace as a string. class OsStackTraceGetterInterface { public: OsStackTraceGetterInterface() {} virtual ~OsStackTraceGetterInterface() {} // Returns the current OS stack trace as an std::string. Parameters: // // max_depth - the maximum number of stack frames to be included // in the trace. // skip_count - the number of top frames to be skipped; doesn't count // against max_depth. virtual string CurrentStackTrace(int max_depth, int skip_count) = 0; // UponLeavingGTest() should be called immediately before Google Test calls // user code. It saves some information about the current stack that // CurrentStackTrace() will use to find and hide Google Test stack frames. virtual void UponLeavingGTest() = 0; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetterInterface); }; // A working implementation of the OsStackTraceGetterInterface interface. class OsStackTraceGetter : public OsStackTraceGetterInterface { public: OsStackTraceGetter() : caller_frame_(NULL) {} virtual string CurrentStackTrace(int max_depth, int skip_count) GTEST_LOCK_EXCLUDED_(mutex_); virtual void UponLeavingGTest() GTEST_LOCK_EXCLUDED_(mutex_); // This string is inserted in place of stack frames that are part of // Google Test's implementation. static const char* const kElidedFramesMarker; private: Mutex mutex_; // protects all internal state // We save the stack frame below the frame that calls user code. // We do this because the address of the frame immediately below // the user code changes between the call to UponLeavingGTest() // and any calls to CurrentStackTrace() from within the user code. void* caller_frame_; GTEST_DISALLOW_COPY_AND_ASSIGN_(OsStackTraceGetter); }; // Information about a Google Test trace point. struct TraceInfo { const char* file; int line; std::string message; }; // This is the default global test part result reporter used in UnitTestImpl. // This class should only be used by UnitTestImpl. class DefaultGlobalTestPartResultReporter : public TestPartResultReporterInterface { public: explicit DefaultGlobalTestPartResultReporter(UnitTestImpl* unit_test); // Implements the TestPartResultReporterInterface. Reports the test part // result in the current test. virtual void ReportTestPartResult(const TestPartResult& result); private: UnitTestImpl* const unit_test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultGlobalTestPartResultReporter); }; // This is the default per thread test part result reporter used in // UnitTestImpl. This class should only be used by UnitTestImpl. class DefaultPerThreadTestPartResultReporter : public TestPartResultReporterInterface { public: explicit DefaultPerThreadTestPartResultReporter(UnitTestImpl* unit_test); // Implements the TestPartResultReporterInterface. The implementation just // delegates to the current global test part result reporter of *unit_test_. virtual void ReportTestPartResult(const TestPartResult& result); private: UnitTestImpl* const unit_test_; GTEST_DISALLOW_COPY_AND_ASSIGN_(DefaultPerThreadTestPartResultReporter); }; // The private implementation of the UnitTest class. We don't protect // the methods under a mutex, as this class is not accessible by a // user and the UnitTest class that delegates work to this class does // proper locking. class GTEST_API_ UnitTestImpl { public: explicit UnitTestImpl(UnitTest* parent); virtual ~UnitTestImpl(); // There are two different ways to register your own TestPartResultReporter. 
// You can register your own repoter to listen either only for test results // from the current thread or for results from all threads. // By default, each per-thread test result repoter just passes a new // TestPartResult to the global test result reporter, which registers the // test part result for the currently running test. // Returns the global test part result reporter. TestPartResultReporterInterface* GetGlobalTestPartResultReporter(); // Sets the global test part result reporter. void SetGlobalTestPartResultReporter( TestPartResultReporterInterface* reporter); // Returns the test part result reporter for the current thread. TestPartResultReporterInterface* GetTestPartResultReporterForCurrentThread(); // Sets the test part result reporter for the current thread. void SetTestPartResultReporterForCurrentThread( TestPartResultReporterInterface* reporter); // Gets the number of successful test cases. int successful_test_case_count() const; // Gets the number of failed test cases. int failed_test_case_count() const; // Gets the number of all test cases. int total_test_case_count() const; // Gets the number of all test cases that contain at least one test // that should run. int test_case_to_run_count() const; // Gets the number of successful tests. int successful_test_count() const; // Gets the number of failed tests. int failed_test_count() const; // Gets the number of disabled tests that will be reported in the XML report. int reportable_disabled_test_count() const; // Gets the number of disabled tests. int disabled_test_count() const; // Gets the number of tests to be printed in the XML report. int reportable_test_count() const; // Gets the number of all tests. int total_test_count() const; // Gets the number of tests that should run. int test_to_run_count() const; // Gets the time of the test program start, in ms from the start of the // UNIX epoch. TimeInMillis start_timestamp() const { return start_timestamp_; } // Gets the elapsed time, in milliseconds. TimeInMillis elapsed_time() const { return elapsed_time_; } // Returns true iff the unit test passed (i.e. all test cases passed). bool Passed() const { return !Failed(); } // Returns true iff the unit test failed (i.e. some test case failed // or something outside of all tests failed). bool Failed() const { return failed_test_case_count() > 0 || ad_hoc_test_result()->Failed(); } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. const TestCase* GetTestCase(int i) const { const int index = GetElementOr(test_case_indices_, i, -1); return index < 0 ? NULL : test_cases_[i]; } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. TestCase* GetMutableTestCase(int i) { const int index = GetElementOr(test_case_indices_, i, -1); return index < 0 ? NULL : test_cases_[index]; } // Provides access to the event listener list. TestEventListeners* listeners() { return &listeners_; } // Returns the TestResult for the test that's currently running, or // the TestResult for the ad hoc test if no test is running. TestResult* current_test_result(); // Returns the TestResult for the ad hoc test. const TestResult* ad_hoc_test_result() const { return &ad_hoc_test_result_; } // Sets the OS stack trace getter. 
// // Does nothing if the input and the current OS stack trace getter // are the same; otherwise, deletes the old getter and makes the // input the current getter. void set_os_stack_trace_getter(OsStackTraceGetterInterface* getter); // Returns the current OS stack trace getter if it is not NULL; // otherwise, creates an OsStackTraceGetter, makes it the current // getter, and returns it. OsStackTraceGetterInterface* os_stack_trace_getter(); // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // CurrentOsStackTraceExceptTop(1), Foo() will be included in the // trace but Bar() and CurrentOsStackTraceExceptTop() won't. std::string CurrentOsStackTraceExceptTop(int skip_count) GTEST_NO_INLINE_; // Finds and returns a TestCase with the given name. If one doesn't // exist, creates one and returns it. // // Arguments: // // test_case_name: name of the test case // type_param: the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase* GetTestCase(const char* test_case_name, const char* type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc); // Adds a TestInfo to the unit test. // // Arguments: // // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // test_info: the TestInfo object void AddTestInfo(Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc, TestInfo* test_info) { // In order to support thread-safe death tests, we need to // remember the original working directory when the test program // was first invoked. We cannot do this in RUN_ALL_TESTS(), as // the user may have changed the current directory before calling // RUN_ALL_TESTS(). Therefore we capture the current directory in // AddTestInfo(), which is called to register a TEST or TEST_F // before main() is reached. if (original_working_dir_.IsEmpty()) { original_working_dir_.Set(FilePath::GetCurrentDir()); GTEST_CHECK_(!original_working_dir_.IsEmpty()) << "Failed to get the current working directory."; } GetTestCase(test_info->test_case_name(), test_info->type_param(), set_up_tc, tear_down_tc)->AddTestInfo(test_info); } #if GTEST_HAS_PARAM_TEST // Returns ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. internal::ParameterizedTestCaseRegistry& parameterized_test_registry() { return parameterized_test_registry_; } #endif // GTEST_HAS_PARAM_TEST // Sets the TestCase object for the test that's currently running. void set_current_test_case(TestCase* a_current_test_case) { current_test_case_ = a_current_test_case; } // Sets the TestInfo object for the test that's currently running. If // current_test_info is NULL, the assertion results will be stored in // ad_hoc_test_result_. void set_current_test_info(TestInfo* a_current_test_info) { current_test_info_ = a_current_test_info; } // Registers all parameterized tests defined using TEST_P and // INSTANTIATE_TEST_CASE_P, creating regular tests for each test/parameter // combination. 
This method can be called more then once; it has guards // protecting from registering the tests more then once. If // value-parameterized tests are disabled, RegisterParameterizedTests is // present but does nothing. void RegisterParameterizedTests(); // Runs all tests in this UnitTest object, prints the result, and // returns true if all tests are successful. If any exception is // thrown during a test, this test is considered to be failed, but // the rest of the tests will still be run. bool RunAllTests(); // Clears the results of all tests, except the ad hoc tests. void ClearNonAdHocTestResult() { ForEach(test_cases_, TestCase::ClearTestCaseResult); } // Clears the results of ad-hoc test assertions. void ClearAdHocTestResult() { ad_hoc_test_result_.Clear(); } // Adds a TestProperty to the current TestResult object when invoked in a // context of a test or a test case, or to the global property set. If the // result already contains a property with the same key, the value will be // updated. void RecordProperty(const TestProperty& test_property); enum ReactionToSharding { HONOR_SHARDING_PROTOCOL, IGNORE_SHARDING_PROTOCOL }; // Matches the full name of each test against the user-specified // filter to decide whether the test should run, then records the // result in each TestCase and TestInfo object. // If shard_tests == HONOR_SHARDING_PROTOCOL, further filters tests // based on sharding variables in the environment. // Returns the number of tests that should run. int FilterTests(ReactionToSharding shard_tests); // Prints the names of the tests matching the user-specified filter flag. void ListTestsMatchingFilter(); const TestCase* current_test_case() const { return current_test_case_; } TestInfo* current_test_info() { return current_test_info_; } const TestInfo* current_test_info() const { return current_test_info_; } // Returns the vector of environments that need to be set-up/torn-down // before/after the tests are run. std::vector& environments() { return environments_; } // Getters for the per-thread Google Test trace stack. std::vector& gtest_trace_stack() { return *(gtest_trace_stack_.pointer()); } const std::vector& gtest_trace_stack() const { return gtest_trace_stack_.get(); } #if GTEST_HAS_DEATH_TEST void InitDeathTestSubprocessControlInfo() { internal_run_death_test_flag_.reset(ParseInternalRunDeathTestFlag()); } // Returns a pointer to the parsed --gtest_internal_run_death_test // flag, or NULL if that flag was not specified. // This information is useful only in a death test child process. // Must not be called before a call to InitGoogleTest. const InternalRunDeathTestFlag* internal_run_death_test_flag() const { return internal_run_death_test_flag_.get(); } // Returns a pointer to the current death test factory. internal::DeathTestFactory* death_test_factory() { return death_test_factory_.get(); } void SuppressTestEventsIfInSubprocess(); friend class ReplaceDeathTestFactory; #endif // GTEST_HAS_DEATH_TEST // Initializes the event listener performing XML output as specified by // UnitTestOptions. Must not be called before InitGoogleTest. void ConfigureXmlOutput(); #if GTEST_CAN_STREAM_RESULTS_ // Initializes the event listener for streaming test results to a socket. // Must not be called before InitGoogleTest. void ConfigureStreamingOutput(); #endif // Performs initialization dependent upon flag values obtained in // ParseGoogleTestFlagsOnly. Is called from InitGoogleTest after the call to // ParseGoogleTestFlagsOnly. 
In case a user neglects to call InitGoogleTest // this function is also called from RunAllTests. Since this function can be // called more than once, it has to be idempotent. void PostFlagParsingInit(); // Gets the random seed used at the start of the current test iteration. int random_seed() const { return random_seed_; } // Gets the random number generator. internal::Random* random() { return &random_; } // Shuffles all test cases, and the tests within each test case, // making sure that death tests are still run first. void ShuffleTests(); // Restores the test cases and tests to their order before the first shuffle. void UnshuffleTests(); // Returns the value of GTEST_FLAG(catch_exceptions) at the moment // UnitTest::Run() starts. bool catch_exceptions() const { return catch_exceptions_; } private: friend class ::testing::UnitTest; // Used by UnitTest::Run() to capture the state of // GTEST_FLAG(catch_exceptions) at the moment it starts. void set_catch_exceptions(bool value) { catch_exceptions_ = value; } // The UnitTest object that owns this implementation object. UnitTest* const parent_; // The working directory when the first TEST() or TEST_F() was // executed. internal::FilePath original_working_dir_; // The default test part result reporters. DefaultGlobalTestPartResultReporter default_global_test_part_result_reporter_; DefaultPerThreadTestPartResultReporter default_per_thread_test_part_result_reporter_; // Points to (but doesn't own) the global test part result reporter. TestPartResultReporterInterface* global_test_part_result_repoter_; // Protects read and write access to global_test_part_result_reporter_. internal::Mutex global_test_part_result_reporter_mutex_; // Points to (but doesn't own) the per-thread test part result reporter. internal::ThreadLocal per_thread_test_part_result_reporter_; // The vector of environments that need to be set-up/torn-down // before/after the tests are run. std::vector environments_; // The vector of TestCases in their original order. It owns the // elements in the vector. std::vector test_cases_; // Provides a level of indirection for the test case list to allow // easy shuffling and restoring the test case order. The i-th // element of this vector is the index of the i-th test case in the // shuffled order. std::vector test_case_indices_; #if GTEST_HAS_PARAM_TEST // ParameterizedTestRegistry object used to register value-parameterized // tests. internal::ParameterizedTestCaseRegistry parameterized_test_registry_; // Indicates whether RegisterParameterizedTests() has been called already. bool parameterized_tests_registered_; #endif // GTEST_HAS_PARAM_TEST // Index of the last death test case registered. Initially -1. int last_death_test_case_; // This points to the TestCase for the currently running test. It // changes as Google Test goes through one test case after another. // When no test is running, this is set to NULL and Google Test // stores assertion results in ad_hoc_test_result_. Initially NULL. TestCase* current_test_case_; // This points to the TestInfo for the currently running test. It // changes as Google Test goes through one test after another. When // no test is running, this is set to NULL and Google Test stores // assertion results in ad_hoc_test_result_. Initially NULL. TestInfo* current_test_info_; // Normally, a user only writes assertions inside a TEST or TEST_F, // or inside a function called by a TEST or TEST_F. 
Since Google // Test keeps track of which test is current running, it can // associate such an assertion with the test it belongs to. // // If an assertion is encountered when no TEST or TEST_F is running, // Google Test attributes the assertion result to an imaginary "ad hoc" // test, and records the result in ad_hoc_test_result_. TestResult ad_hoc_test_result_; // The list of event listeners that can be used to track events inside // Google Test. TestEventListeners listeners_; // The OS stack trace getter. Will be deleted when the UnitTest // object is destructed. By default, an OsStackTraceGetter is used, // but the user can set this field to use a custom getter if that is // desired. OsStackTraceGetterInterface* os_stack_trace_getter_; // True iff PostFlagParsingInit() has been called. bool post_flag_parse_init_performed_; // The random number seed used at the beginning of the test run. int random_seed_; // Our random number generator. internal::Random random_; // The time of the test program start, in ms from the start of the // UNIX epoch. TimeInMillis start_timestamp_; // How long the test took to run, in milliseconds. TimeInMillis elapsed_time_; #if GTEST_HAS_DEATH_TEST // The decomposed components of the gtest_internal_run_death_test flag, // parsed when RUN_ALL_TESTS is called. internal::scoped_ptr internal_run_death_test_flag_; internal::scoped_ptr death_test_factory_; #endif // GTEST_HAS_DEATH_TEST // A per-thread stack of traces created by the SCOPED_TRACE() macro. internal::ThreadLocal > gtest_trace_stack_; // The value of GTEST_FLAG(catch_exceptions) at the moment RunAllTests() // starts. bool catch_exceptions_; GTEST_DISALLOW_COPY_AND_ASSIGN_(UnitTestImpl); }; // class UnitTestImpl // Convenience function for accessing the global UnitTest // implementation object. inline UnitTestImpl* GetUnitTestImpl() { return UnitTest::GetInstance()->impl(); } #if GTEST_USES_SIMPLE_RE // Internal helper functions for implementing the simple regular // expression matcher. GTEST_API_ bool IsInSet(char ch, const char* str); GTEST_API_ bool IsAsciiDigit(char ch); GTEST_API_ bool IsAsciiPunct(char ch); GTEST_API_ bool IsRepeat(char ch); GTEST_API_ bool IsAsciiWhiteSpace(char ch); GTEST_API_ bool IsAsciiWordChar(char ch); GTEST_API_ bool IsValidEscape(char ch); GTEST_API_ bool AtomMatchesChar(bool escaped, char pattern, char ch); GTEST_API_ bool ValidateRegex(const char* regex); GTEST_API_ bool MatchRegexAtHead(const char* regex, const char* str); GTEST_API_ bool MatchRepetitionAndRegexAtHead( bool escaped, char ch, char repeat, const char* regex, const char* str); GTEST_API_ bool MatchRegexAnywhere(const char* regex, const char* str); #endif // GTEST_USES_SIMPLE_RE // Parses the command line for Google Test flags, without initializing // other parts of Google Test. GTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, char** argv); GTEST_API_ void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv); #if GTEST_HAS_DEATH_TEST // Returns the message describing the last system error, regardless of the // platform. GTEST_API_ std::string GetLastErrnoDescription(); # if GTEST_OS_WINDOWS // Provides leak-safe Windows kernel handle ownership. 
class AutoHandle { public: AutoHandle() : handle_(INVALID_HANDLE_VALUE) {} explicit AutoHandle(HANDLE handle) : handle_(handle) {} ~AutoHandle() { Reset(); } HANDLE Get() const { return handle_; } void Reset() { Reset(INVALID_HANDLE_VALUE); } void Reset(HANDLE handle) { if (handle != handle_) { if (handle_ != INVALID_HANDLE_VALUE) ::CloseHandle(handle_); handle_ = handle; } } private: HANDLE handle_; GTEST_DISALLOW_COPY_AND_ASSIGN_(AutoHandle); }; # endif // GTEST_OS_WINDOWS // Attempts to parse a string into a positive integer pointed to by the // number parameter. Returns true if that is possible. // GTEST_HAS_DEATH_TEST implies that we have ::std::string, so we can use // it here. template bool ParseNaturalNumber(const ::std::string& str, Integer* number) { // Fail fast if the given string does not begin with a digit; // this bypasses strtoXXX's "optional leading whitespace and plus // or minus sign" semantics, which are undesirable here. if (str.empty() || !IsDigit(str[0])) { return false; } errno = 0; char* end; // BiggestConvertible is the largest integer type that system-provided // string-to-number conversion routines can return. # if GTEST_OS_WINDOWS && !defined(__GNUC__) // MSVC and C++ Builder define __int64 instead of the standard long long. typedef unsigned __int64 BiggestConvertible; const BiggestConvertible parsed = _strtoui64(str.c_str(), &end, 10); # else typedef unsigned long long BiggestConvertible; // NOLINT const BiggestConvertible parsed = strtoull(str.c_str(), &end, 10); # endif // GTEST_OS_WINDOWS && !defined(__GNUC__) const bool parse_success = *end == '\0' && errno == 0; // TODO(vladl@google.com): Convert this to compile time assertion when it is // available. GTEST_CHECK_(sizeof(Integer) <= sizeof(parsed)); const Integer result = static_cast(parsed); if (parse_success && static_cast(result) == parsed) { *number = result; return true; } return false; } #endif // GTEST_HAS_DEATH_TEST // TestResult contains some private methods that should be hidden from // Google Test user but are required for testing. This class allow our tests // to access them. // // This class is supplied only for the purpose of testing Google Test's own // constructs. Do not use it in user tests, either directly or indirectly. class TestResultAccessor { public: static void RecordProperty(TestResult* test_result, const std::string& xml_element, const TestProperty& property) { test_result->RecordProperty(xml_element, property); } static void ClearTestPartResults(TestResult* test_result) { test_result->ClearTestPartResults(); } static const std::vector& test_part_results( const TestResult& test_result) { return test_result.test_part_results(); } }; #if GTEST_CAN_STREAM_RESULTS_ // Streams test results to the given port on the given host machine. class StreamingListener : public EmptyTestEventListener { public: // Abstract base class for writing strings to a socket. class AbstractSocketWriter { public: virtual ~AbstractSocketWriter() {} // Sends a string to the socket. virtual void Send(const string& message) = 0; // Closes the socket. virtual void CloseConnection() {} // Sends a string and a newline to the socket. void SendLn(const string& message) { Send(message + "\n"); } }; // Concrete class for actually writing strings to a socket. 
class SocketWriter : public AbstractSocketWriter { public: SocketWriter(const string& host, const string& port) : sockfd_(-1), host_name_(host), port_num_(port) { MakeConnection(); } virtual ~SocketWriter() { if (sockfd_ != -1) CloseConnection(); } // Sends a string to the socket. virtual void Send(const string& message) { GTEST_CHECK_(sockfd_ != -1) << "Send() can be called only when there is a connection."; const int len = static_cast(message.length()); if (write(sockfd_, message.c_str(), len) != len) { GTEST_LOG_(WARNING) << "stream_result_to: failed to stream to " << host_name_ << ":" << port_num_; } } private: // Creates a client socket and connects to the server. void MakeConnection(); // Closes the socket. void CloseConnection() { GTEST_CHECK_(sockfd_ != -1) << "CloseConnection() can be called only when there is a connection."; close(sockfd_); sockfd_ = -1; } int sockfd_; // socket file descriptor const string host_name_; const string port_num_; GTEST_DISALLOW_COPY_AND_ASSIGN_(SocketWriter); }; // class SocketWriter // Escapes '=', '&', '%', and '\n' characters in str as "%xx". static string UrlEncode(const char* str); StreamingListener(const string& host, const string& port) : socket_writer_(new SocketWriter(host, port)) { Start(); } explicit StreamingListener(AbstractSocketWriter* socket_writer) : socket_writer_(socket_writer) { Start(); } void OnTestProgramStart(const UnitTest& /* unit_test */) { SendLn("event=TestProgramStart"); } void OnTestProgramEnd(const UnitTest& unit_test) { // Note that Google Test current only report elapsed time for each // test iteration, not for the entire test program. SendLn("event=TestProgramEnd&passed=" + FormatBool(unit_test.Passed())); // Notify the streaming server to stop. socket_writer_->CloseConnection(); } void OnTestIterationStart(const UnitTest& /* unit_test */, int iteration) { SendLn("event=TestIterationStart&iteration=" + StreamableToString(iteration)); } void OnTestIterationEnd(const UnitTest& unit_test, int /* iteration */) { SendLn("event=TestIterationEnd&passed=" + FormatBool(unit_test.Passed()) + "&elapsed_time=" + StreamableToString(unit_test.elapsed_time()) + "ms"); } void OnTestCaseStart(const TestCase& test_case) { SendLn(std::string("event=TestCaseStart&name=") + test_case.name()); } void OnTestCaseEnd(const TestCase& test_case) { SendLn("event=TestCaseEnd&passed=" + FormatBool(test_case.Passed()) + "&elapsed_time=" + StreamableToString(test_case.elapsed_time()) + "ms"); } void OnTestStart(const TestInfo& test_info) { SendLn(std::string("event=TestStart&name=") + test_info.name()); } void OnTestEnd(const TestInfo& test_info) { SendLn("event=TestEnd&passed=" + FormatBool((test_info.result())->Passed()) + "&elapsed_time=" + StreamableToString((test_info.result())->elapsed_time()) + "ms"); } void OnTestPartResult(const TestPartResult& test_part_result) { const char* file_name = test_part_result.file_name(); if (file_name == NULL) file_name = ""; SendLn("event=TestPartResult&file=" + UrlEncode(file_name) + "&line=" + StreamableToString(test_part_result.line_number()) + "&message=" + UrlEncode(test_part_result.message())); } private: // Sends the given message and a newline to the socket. void SendLn(const string& message) { socket_writer_->SendLn(message); } // Called at the start of streaming to notify the receiver what // protocol we are using. void Start() { SendLn("gtest_streaming_protocol_version=1.0"); } string FormatBool(bool value) { return value ? 
"1" : "0"; } const scoped_ptr socket_writer_; GTEST_DISALLOW_COPY_AND_ASSIGN_(StreamingListener); }; // class StreamingListener #endif // GTEST_CAN_STREAM_RESULTS_ } // namespace internal } // namespace testing #endif // GTEST_SRC_GTEST_INTERNAL_INL_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-port.cc000066400000000000000000000655121346026536000215120ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/internal/gtest-port.h" #include #include #include #include #if GTEST_OS_WINDOWS_MOBILE # include // For TerminateProcess() #elif GTEST_OS_WINDOWS # include # include #else # include #endif // GTEST_OS_WINDOWS_MOBILE #if GTEST_OS_MAC # include # include # include #endif // GTEST_OS_MAC #if GTEST_OS_QNX # include # include #endif // GTEST_OS_QNX #include "gtest/gtest-spi.h" #include "gtest/gtest-message.h" #include "gtest/internal/gtest-internal.h" #include "gtest/internal/gtest-string.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { namespace internal { #if defined(_MSC_VER) || defined(__BORLANDC__) // MSVC and C++Builder do not provide a definition of STDERR_FILENO. const int kStdOutFileno = 1; const int kStdErrFileno = 2; #else const int kStdOutFileno = STDOUT_FILENO; const int kStdErrFileno = STDERR_FILENO; #endif // _MSC_VER #if GTEST_OS_MAC // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. 
size_t GetThreadCount() { const task_t task = mach_task_self(); mach_msg_type_number_t thread_count; thread_act_array_t thread_list; const kern_return_t status = task_threads(task, &thread_list, &thread_count); if (status == KERN_SUCCESS) { // task_threads allocates resources in thread_list and we need to free them // to avoid leaks. vm_deallocate(task, reinterpret_cast(thread_list), sizeof(thread_t) * thread_count); return static_cast(thread_count); } else { return 0; } } #elif GTEST_OS_QNX // Returns the number of threads running in the process, or 0 to indicate that // we cannot detect it. size_t GetThreadCount() { const int fd = open("/proc/self/as", O_RDONLY); if (fd < 0) { return 0; } procfs_info process_info; const int status = devctl(fd, DCMD_PROC_INFO, &process_info, sizeof(process_info), NULL); close(fd); if (status == EOK) { return static_cast(process_info.num_threads); } else { return 0; } } #else size_t GetThreadCount() { // There's no portable way to detect the number of threads, so we just // return 0 to indicate that we cannot detect it. return 0; } #endif // GTEST_OS_MAC #if GTEST_USES_POSIX_RE // Implements RE. Currently only needed for death tests. RE::~RE() { if (is_valid_) { // regfree'ing an invalid regex might crash because the content // of the regex is undefined. Since the regex's are essentially // the same, one cannot be valid (or invalid) without the other // being so too. regfree(&partial_regex_); regfree(&full_regex_); } free(const_cast(pattern_)); } // Returns true iff regular expression re matches the entire str. bool RE::FullMatch(const char* str, const RE& re) { if (!re.is_valid_) return false; regmatch_t match; return regexec(&re.full_regex_, str, 1, &match, 0) == 0; } // Returns true iff regular expression re matches a substring of str // (including str itself). bool RE::PartialMatch(const char* str, const RE& re) { if (!re.is_valid_) return false; regmatch_t match; return regexec(&re.partial_regex_, str, 1, &match, 0) == 0; } // Initializes an RE from its string representation. void RE::Init(const char* regex) { pattern_ = posix::StrDup(regex); // Reserves enough bytes to hold the regular expression used for a // full match. const size_t full_regex_len = strlen(regex) + 10; char* const full_pattern = new char[full_regex_len]; snprintf(full_pattern, full_regex_len, "^(%s)$", regex); is_valid_ = regcomp(&full_regex_, full_pattern, REG_EXTENDED) == 0; // We want to call regcomp(&partial_regex_, ...) even if the // previous expression returns false. Otherwise partial_regex_ may // not be properly initialized can may cause trouble when it's // freed. // // Some implementation of POSIX regex (e.g. on at least some // versions of Cygwin) doesn't accept the empty string as a valid // regex. We change it to an equivalent form "()" to be safe. if (is_valid_) { const char* const partial_regex = (*regex == '\0') ? "()" : regex; is_valid_ = regcomp(&partial_regex_, partial_regex, REG_EXTENDED) == 0; } EXPECT_TRUE(is_valid_) << "Regular expression \"" << regex << "\" is not a valid POSIX Extended regular expression."; delete[] full_pattern; } #elif GTEST_USES_SIMPLE_RE // Returns true iff ch appears anywhere in str (excluding the // terminating '\0' character). bool IsInSet(char ch, const char* str) { return ch != '\0' && strchr(str, ch) != NULL; } // Returns true iff ch belongs to the given classification. Unlike // similar functions in , these aren't affected by the // current locale. 
bool IsAsciiDigit(char ch) { return '0' <= ch && ch <= '9'; } bool IsAsciiPunct(char ch) { return IsInSet(ch, "^-!\"#$%&'()*+,./:;<=>?@[\\]_`{|}~"); } bool IsRepeat(char ch) { return IsInSet(ch, "?*+"); } bool IsAsciiWhiteSpace(char ch) { return IsInSet(ch, " \f\n\r\t\v"); } bool IsAsciiWordChar(char ch) { return ('a' <= ch && ch <= 'z') || ('A' <= ch && ch <= 'Z') || ('0' <= ch && ch <= '9') || ch == '_'; } // Returns true iff "\\c" is a supported escape sequence. bool IsValidEscape(char c) { return (IsAsciiPunct(c) || IsInSet(c, "dDfnrsStvwW")); } // Returns true iff the given atom (specified by escaped and pattern) // matches ch. The result is undefined if the atom is invalid. bool AtomMatchesChar(bool escaped, char pattern_char, char ch) { if (escaped) { // "\\p" where p is pattern_char. switch (pattern_char) { case 'd': return IsAsciiDigit(ch); case 'D': return !IsAsciiDigit(ch); case 'f': return ch == '\f'; case 'n': return ch == '\n'; case 'r': return ch == '\r'; case 's': return IsAsciiWhiteSpace(ch); case 'S': return !IsAsciiWhiteSpace(ch); case 't': return ch == '\t'; case 'v': return ch == '\v'; case 'w': return IsAsciiWordChar(ch); case 'W': return !IsAsciiWordChar(ch); } return IsAsciiPunct(pattern_char) && pattern_char == ch; } return (pattern_char == '.' && ch != '\n') || pattern_char == ch; } // Helper function used by ValidateRegex() to format error messages. std::string FormatRegexSyntaxError(const char* regex, int index) { return (Message() << "Syntax error at index " << index << " in simple regular expression \"" << regex << "\": ").GetString(); } // Generates non-fatal failures and returns false if regex is invalid; // otherwise returns true. bool ValidateRegex(const char* regex) { if (regex == NULL) { // TODO(wan@google.com): fix the source file location in the // assertion failures to match where the regex is used in user // code. ADD_FAILURE() << "NULL is not a valid simple regular expression."; return false; } bool is_valid = true; // True iff ?, *, or + can follow the previous atom. bool prev_repeatable = false; for (int i = 0; regex[i]; i++) { if (regex[i] == '\\') { // An escape sequence i++; if (regex[i] == '\0') { ADD_FAILURE() << FormatRegexSyntaxError(regex, i - 1) << "'\\' cannot appear at the end."; return false; } if (!IsValidEscape(regex[i])) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i - 1) << "invalid escape sequence \"\\" << regex[i] << "\"."; is_valid = false; } prev_repeatable = true; } else { // Not an escape sequence. const char ch = regex[i]; if (ch == '^' && i > 0) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'^' can only appear at the beginning."; is_valid = false; } else if (ch == '$' && regex[i + 1] != '\0') { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'$' can only appear at the end."; is_valid = false; } else if (IsInSet(ch, "()[]{}|")) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'" << ch << "' is unsupported."; is_valid = false; } else if (IsRepeat(ch) && !prev_repeatable) { ADD_FAILURE() << FormatRegexSyntaxError(regex, i) << "'" << ch << "' can only follow a repeatable token."; is_valid = false; } prev_repeatable = !IsInSet(ch, "^$?*+"); } } return is_valid; } // Matches a repeated regex atom followed by a valid simple regular // expression. The regex atom is defined as c if escaped is false, // or \c otherwise. repeat is the repetition meta character (?, *, // or +). 
The behavior is undefined if str contains too many // characters to be indexable by size_t, in which case the test will // probably time out anyway. We are fine with this limitation as // std::string has it too. bool MatchRepetitionAndRegexAtHead( bool escaped, char c, char repeat, const char* regex, const char* str) { const size_t min_count = (repeat == '+') ? 1 : 0; const size_t max_count = (repeat == '?') ? 1 : static_cast(-1) - 1; // We cannot call numeric_limits::max() as it conflicts with the // max() macro on Windows. for (size_t i = 0; i <= max_count; ++i) { // We know that the atom matches each of the first i characters in str. if (i >= min_count && MatchRegexAtHead(regex, str + i)) { // We have enough matches at the head, and the tail matches too. // Since we only care about *whether* the pattern matches str // (as opposed to *how* it matches), there is no need to find a // greedy match. return true; } if (str[i] == '\0' || !AtomMatchesChar(escaped, c, str[i])) return false; } return false; } // Returns true iff regex matches a prefix of str. regex must be a // valid simple regular expression and not start with "^", or the // result is undefined. bool MatchRegexAtHead(const char* regex, const char* str) { if (*regex == '\0') // An empty regex matches a prefix of anything. return true; // "$" only matches the end of a string. Note that regex being // valid guarantees that there's nothing after "$" in it. if (*regex == '$') return *str == '\0'; // Is the first thing in regex an escape sequence? const bool escaped = *regex == '\\'; if (escaped) ++regex; if (IsRepeat(regex[1])) { // MatchRepetitionAndRegexAtHead() calls MatchRegexAtHead(), so // here's an indirect recursion. It terminates as the regex gets // shorter in each recursion. return MatchRepetitionAndRegexAtHead( escaped, regex[0], regex[1], regex + 2, str); } else { // regex isn't empty, isn't "$", and doesn't start with a // repetition. We match the first atom of regex with the first // character of str and recurse. return (*str != '\0') && AtomMatchesChar(escaped, *regex, *str) && MatchRegexAtHead(regex + 1, str + 1); } } // Returns true iff regex matches any substring of str. regex must be // a valid simple regular expression, or the result is undefined. // // The algorithm is recursive, but the recursion depth doesn't exceed // the regex length, so we won't need to worry about running out of // stack space normally. In rare cases the time complexity can be // exponential with respect to the regex length + the string length, // but usually it's must faster (often close to linear). bool MatchRegexAnywhere(const char* regex, const char* str) { if (regex == NULL || str == NULL) return false; if (*regex == '^') return MatchRegexAtHead(regex + 1, str); // A successful match can be anywhere in str. do { if (MatchRegexAtHead(regex, str)) return true; } while (*str++ != '\0'); return false; } // Implements the RE class. RE::~RE() { free(const_cast(pattern_)); free(const_cast(full_pattern_)); } // Returns true iff regular expression re matches the entire str. bool RE::FullMatch(const char* str, const RE& re) { return re.is_valid_ && MatchRegexAnywhere(re.full_pattern_, str); } // Returns true iff regular expression re matches a substring of str // (including str itself). bool RE::PartialMatch(const char* str, const RE& re) { return re.is_valid_ && MatchRegexAnywhere(re.pattern_, str); } // Initializes an RE from its string representation. 
void RE::Init(const char* regex) { pattern_ = full_pattern_ = NULL; if (regex != NULL) { pattern_ = posix::StrDup(regex); } is_valid_ = ValidateRegex(regex); if (!is_valid_) { // No need to calculate the full pattern when the regex is invalid. return; } const size_t len = strlen(regex); // Reserves enough bytes to hold the regular expression used for a // full match: we need space to prepend a '^', append a '$', and // terminate the string with '\0'. char* buffer = static_cast(malloc(len + 3)); full_pattern_ = buffer; if (*regex != '^') *buffer++ = '^'; // Makes sure full_pattern_ starts with '^'. // We don't use snprintf or strncpy, as they trigger a warning when // compiled with VC++ 8.0. memcpy(buffer, regex, len); buffer += len; if (len == 0 || regex[len - 1] != '$') *buffer++ = '$'; // Makes sure full_pattern_ ends with '$'. *buffer = '\0'; } #endif // GTEST_USES_POSIX_RE const char kUnknownFile[] = "unknown file"; // Formats a source file path and a line number as they would appear // in an error message from the compiler used to compile this code. GTEST_API_ ::std::string FormatFileLocation(const char* file, int line) { const std::string file_name(file == NULL ? kUnknownFile : file); if (line < 0) { return file_name + ":"; } #ifdef _MSC_VER return file_name + "(" + StreamableToString(line) + "):"; #else return file_name + ":" + StreamableToString(line) + ":"; #endif // _MSC_VER } // Formats a file location for compiler-independent XML output. // Although this function is not platform dependent, we put it next to // FormatFileLocation in order to contrast the two functions. // Note that FormatCompilerIndependentFileLocation() does NOT append colon // to the file location it produces, unlike FormatFileLocation(). GTEST_API_ ::std::string FormatCompilerIndependentFileLocation( const char* file, int line) { const std::string file_name(file == NULL ? kUnknownFile : file); if (line < 0) return file_name; else return file_name + ":" + StreamableToString(line); } GTestLog::GTestLog(GTestLogSeverity severity, const char* file, int line) : severity_(severity) { const char* const marker = severity == GTEST_INFO ? "[ INFO ]" : severity == GTEST_WARNING ? "[WARNING]" : severity == GTEST_ERROR ? "[ ERROR ]" : "[ FATAL ]"; GetStream() << ::std::endl << marker << " " << FormatFileLocation(file, line).c_str() << ": "; } // Flushes the buffers and, if severity is GTEST_FATAL, aborts the program. GTestLog::~GTestLog() { GetStream() << ::std::endl; if (severity_ == GTEST_FATAL) { fflush(stderr); posix::Abort(); } } // Disable Microsoft deprecation warnings for POSIX functions called from // this class (creat, dup, dup2, and close) #ifdef _MSC_VER # pragma warning(push) # pragma warning(disable: 4996) #endif // _MSC_VER #if GTEST_HAS_STREAM_REDIRECTION // Object that captures an output stream (stdout/stderr). class CapturedStream { public: // The ctor redirects the stream to a temporary file. explicit CapturedStream(int fd) : fd_(fd), uncaptured_fd_(dup(fd)) { # if GTEST_OS_WINDOWS char temp_dir_path[MAX_PATH + 1] = { '\0' }; // NOLINT char temp_file_path[MAX_PATH + 1] = { '\0' }; // NOLINT ::GetTempPathA(sizeof(temp_dir_path), temp_dir_path); const UINT success = ::GetTempFileNameA(temp_dir_path, "gtest_redir", 0, // Generate unique file name. 
temp_file_path); GTEST_CHECK_(success != 0) << "Unable to create a temporary file in " << temp_dir_path; const int captured_fd = creat(temp_file_path, _S_IREAD | _S_IWRITE); GTEST_CHECK_(captured_fd != -1) << "Unable to open temporary file " << temp_file_path; filename_ = temp_file_path; # else // There's no guarantee that a test has write access to the current // directory, so we create the temporary file in the /tmp directory // instead. We use /tmp on most systems, and /sdcard on Android. // That's because Android doesn't have /tmp. # if GTEST_OS_LINUX_ANDROID // Note: Android applications are expected to call the framework's // Context.getExternalStorageDirectory() method through JNI to get // the location of the world-writable SD Card directory. However, // this requires a Context handle, which cannot be retrieved // globally from native code. Doing so also precludes running the // code as part of a regular standalone executable, which doesn't // run in a Dalvik process (e.g. when running it through 'adb shell'). // // The location /sdcard is directly accessible from native code // and is the only location (unofficially) supported by the Android // team. It's generally a symlink to the real SD Card mount point // which can be /mnt/sdcard, /mnt/sdcard0, /system/media/sdcard, or // other OEM-customized locations. Never rely on these, and always // use /sdcard. char name_template[] = "/sdcard/gtest_captured_stream.XXXXXX"; # else char name_template[] = "/tmp/captured_stream.XXXXXX"; # endif // GTEST_OS_LINUX_ANDROID const int captured_fd = mkstemp(name_template); filename_ = name_template; # endif // GTEST_OS_WINDOWS fflush(NULL); dup2(captured_fd, fd_); close(captured_fd); } ~CapturedStream() { remove(filename_.c_str()); } std::string GetCapturedString() { if (uncaptured_fd_ != -1) { // Restores the original stream. fflush(NULL); dup2(uncaptured_fd_, fd_); close(uncaptured_fd_); uncaptured_fd_ = -1; } FILE* const file = posix::FOpen(filename_.c_str(), "r"); const std::string content = ReadEntireFile(file); posix::FClose(file); return content; } private: // Reads the entire content of a file as an std::string. static std::string ReadEntireFile(FILE* file); // Returns the size (in bytes) of a file. static size_t GetFileSize(FILE* file); const int fd_; // A stream to capture. int uncaptured_fd_; // Name of the temporary file holding the stderr output. ::std::string filename_; GTEST_DISALLOW_COPY_AND_ASSIGN_(CapturedStream); }; // Returns the size (in bytes) of a file. size_t CapturedStream::GetFileSize(FILE* file) { fseek(file, 0, SEEK_END); return static_cast(ftell(file)); } // Reads the entire content of a file as a string. std::string CapturedStream::ReadEntireFile(FILE* file) { const size_t file_size = GetFileSize(file); char* const buffer = new char[file_size]; size_t bytes_last_read = 0; // # of bytes read in the last fread() size_t bytes_read = 0; // # of bytes read so far fseek(file, 0, SEEK_SET); // Keeps reading the file until we cannot read further or the // pre-determined file size is reached. do { bytes_last_read = fread(buffer+bytes_read, 1, file_size-bytes_read, file); bytes_read += bytes_last_read; } while (bytes_last_read > 0 && bytes_read < file_size); const std::string content(buffer, bytes_read); delete[] buffer; return content; } # ifdef _MSC_VER # pragma warning(pop) # endif // _MSC_VER static CapturedStream* g_captured_stderr = NULL; static CapturedStream* g_captured_stdout = NULL; // Starts capturing an output stream (stdout/stderr). 
void CaptureStream(int fd, const char* stream_name, CapturedStream** stream) { if (*stream != NULL) { GTEST_LOG_(FATAL) << "Only one " << stream_name << " capturer can exist at a time."; } *stream = new CapturedStream(fd); } // Stops capturing the output stream and returns the captured string. std::string GetCapturedStream(CapturedStream** captured_stream) { const std::string content = (*captured_stream)->GetCapturedString(); delete *captured_stream; *captured_stream = NULL; return content; } // Starts capturing stdout. void CaptureStdout() { CaptureStream(kStdOutFileno, "stdout", &g_captured_stdout); } // Starts capturing stderr. void CaptureStderr() { CaptureStream(kStdErrFileno, "stderr", &g_captured_stderr); } // Stops capturing stdout and returns the captured string. std::string GetCapturedStdout() { return GetCapturedStream(&g_captured_stdout); } // Stops capturing stderr and returns the captured string. std::string GetCapturedStderr() { return GetCapturedStream(&g_captured_stderr); } #endif // GTEST_HAS_STREAM_REDIRECTION #if GTEST_HAS_DEATH_TEST // A copy of all command line arguments. Set by InitGoogleTest(). ::std::vector g_argvs; static const ::std::vector* g_injected_test_argvs = NULL; // Owned. void SetInjectableArgvs(const ::std::vector* argvs) { if (g_injected_test_argvs != argvs) delete g_injected_test_argvs; g_injected_test_argvs = argvs; } const ::std::vector& GetInjectableArgvs() { if (g_injected_test_argvs != NULL) { return *g_injected_test_argvs; } return g_argvs; } #endif // GTEST_HAS_DEATH_TEST #if GTEST_OS_WINDOWS_MOBILE namespace posix { void Abort() { DebugBreak(); TerminateProcess(GetCurrentProcess(), 1); } } // namespace posix #endif // GTEST_OS_WINDOWS_MOBILE // Returns the name of the environment variable corresponding to the // given flag. For example, FlagToEnvVar("foo") will return // "GTEST_FOO" in the open-source version. static std::string FlagToEnvVar(const char* flag) { const std::string full_flag = (Message() << GTEST_FLAG_PREFIX_ << flag).GetString(); Message env_var; for (size_t i = 0; i != full_flag.length(); i++) { env_var << ToUpper(full_flag.c_str()[i]); } return env_var.GetString(); } // Parses 'str' for a 32-bit signed integer. If successful, writes // the result to *value and returns true; otherwise leaves *value // unchanged and returns false. bool ParseInt32(const Message& src_text, const char* str, Int32* value) { // Parses the environment variable as a decimal integer. char* end = NULL; const long long_value = strtol(str, &end, 10); // NOLINT // Has strtol() consumed all characters in the string? if (*end != '\0') { // No - an invalid character was encountered. Message msg; msg << "WARNING: " << src_text << " is expected to be a 32-bit integer, but actually" << " has value \"" << str << "\".\n"; printf("%s", msg.GetString().c_str()); fflush(stdout); return false; } // Is the parsed value in the range of an Int32? const Int32 result = static_cast(long_value); if (long_value == LONG_MAX || long_value == LONG_MIN || // The parsed value overflows as a long. (strtol() returns // LONG_MAX or LONG_MIN when the input overflows.) result != long_value // The parsed value overflows as an Int32. 
) { Message msg; msg << "WARNING: " << src_text << " is expected to be a 32-bit integer, but actually" << " has value " << str << ", which overflows.\n"; printf("%s", msg.GetString().c_str()); fflush(stdout); return false; } *value = result; return true; } // Reads and returns the Boolean environment variable corresponding to // the given flag; if it's not set, returns default_value. // // The value is considered true iff it's not "0". bool BoolFromGTestEnv(const char* flag, bool default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const string_value = posix::GetEnv(env_var.c_str()); return string_value == NULL ? default_value : strcmp(string_value, "0") != 0; } // Reads and returns a 32-bit integer stored in the environment // variable corresponding to the given flag; if it isn't set or // doesn't represent a valid 32-bit integer, returns default_value. Int32 Int32FromGTestEnv(const char* flag, Int32 default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const string_value = posix::GetEnv(env_var.c_str()); if (string_value == NULL) { // The environment variable is not set. return default_value; } Int32 result = default_value; if (!ParseInt32(Message() << "Environment variable " << env_var, string_value, &result)) { printf("The default value %s is used.\n", (Message() << default_value).GetString().c_str()); fflush(stdout); return default_value; } return result; } // Reads and returns the string environment variable corresponding to // the given flag; if it's not set, returns default_value. const char* StringFromGTestEnv(const char* flag, const char* default_value) { const std::string env_var = FlagToEnvVar(flag); const char* const value = posix::GetEnv(env_var.c_str()); return value == NULL ? default_value : value; } } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-printers.cc000066400000000000000000000277631346026536000224020ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Author: wan@google.com (Zhanyong Wan) // Google Test - The Google C++ Testing Framework // // This file implements a universal value printer that can print a // value of any type T: // // void ::testing::internal::UniversalPrinter::Print(value, ostream_ptr); // // It uses the << operator when possible, and prints the bytes in the // object otherwise. A user can override its behavior for a class // type Foo by defining either operator<<(::std::ostream&, const Foo&) // or void PrintTo(const Foo&, ::std::ostream*) in the namespace that // defines Foo. #include "gtest/gtest-printers.h" #include #include #include // NOLINT #include #include "gtest/internal/gtest-port.h" namespace testing { namespace { using ::std::ostream; // Prints a segment of bytes in the given object. void PrintByteSegmentInObjectTo(const unsigned char* obj_bytes, size_t start, size_t count, ostream* os) { char text[5] = ""; for (size_t i = 0; i != count; i++) { const size_t j = start + i; if (i != 0) { // Organizes the bytes into groups of 2 for easy parsing by // human. if ((j % 2) == 0) *os << ' '; else *os << '-'; } GTEST_SNPRINTF_(text, sizeof(text), "%02X", obj_bytes[j]); *os << text; } } // Prints the bytes in the given value to the given ostream. void PrintBytesInObjectToImpl(const unsigned char* obj_bytes, size_t count, ostream* os) { // Tells the user how big the object is. *os << count << "-byte object <"; const size_t kThreshold = 132; const size_t kChunkSize = 64; // If the object size is bigger than kThreshold, we'll have to omit // some details by printing only the first and the last kChunkSize // bytes. // TODO(wan): let the user control the threshold using a flag. if (count < kThreshold) { PrintByteSegmentInObjectTo(obj_bytes, 0, count, os); } else { PrintByteSegmentInObjectTo(obj_bytes, 0, kChunkSize, os); *os << " ... "; // Rounds up to 2-byte boundary. const size_t resume_pos = (count - kChunkSize + 1)/2*2; PrintByteSegmentInObjectTo(obj_bytes, resume_pos, count - resume_pos, os); } *os << ">"; } } // namespace namespace internal2 { // Delegates to PrintBytesInObjectToImpl() to print the bytes in the // given object. The delegation simplifies the implementation, which // uses the << operator and thus is easier done outside of the // ::testing::internal namespace, which contains a << operator that // sometimes conflicts with the one in STL. void PrintBytesInObjectTo(const unsigned char* obj_bytes, size_t count, ostream* os) { PrintBytesInObjectToImpl(obj_bytes, count, os); } } // namespace internal2 namespace internal { // Depending on the value of a char (or wchar_t), we print it in one // of three formats: // - as is if it's a printable ASCII (e.g. 'a', '2', ' '), // - as a hexidecimal escape sequence (e.g. '\x7F'), or // - as a special escape sequence (e.g. '\r', '\n'). enum CharFormat { kAsIs, kHexEscape, kSpecialEscape }; // Returns true if c is a printable ASCII character. We test the // value of c directly instead of calling isprint(), which is buggy on // Windows Mobile. inline bool IsPrintableAscii(wchar_t c) { return 0x20 <= c && c <= 0x7E; } // Prints a wide or narrow char c as a character literal without the // quotes, escaping it when necessary; returns how c was formatted. // The template argument UnsignedChar is the unsigned version of Char, // which is the type of c. 
template static CharFormat PrintAsCharLiteralTo(Char c, ostream* os) { switch (static_cast(c)) { case L'\0': *os << "\\0"; break; case L'\'': *os << "\\'"; break; case L'\\': *os << "\\\\"; break; case L'\a': *os << "\\a"; break; case L'\b': *os << "\\b"; break; case L'\f': *os << "\\f"; break; case L'\n': *os << "\\n"; break; case L'\r': *os << "\\r"; break; case L'\t': *os << "\\t"; break; case L'\v': *os << "\\v"; break; default: if (IsPrintableAscii(c)) { *os << static_cast(c); return kAsIs; } else { *os << "\\x" + String::FormatHexInt(static_cast(c)); return kHexEscape; } } return kSpecialEscape; } // Prints a wchar_t c as if it's part of a string literal, escaping it when // necessary; returns how c was formatted. static CharFormat PrintAsStringLiteralTo(wchar_t c, ostream* os) { switch (c) { case L'\'': *os << "'"; return kAsIs; case L'"': *os << "\\\""; return kSpecialEscape; default: return PrintAsCharLiteralTo(c, os); } } // Prints a char c as if it's part of a string literal, escaping it when // necessary; returns how c was formatted. static CharFormat PrintAsStringLiteralTo(char c, ostream* os) { return PrintAsStringLiteralTo( static_cast(static_cast(c)), os); } // Prints a wide or narrow character c and its code. '\0' is printed // as "'\\0'", other unprintable characters are also properly escaped // using the standard C++ escape sequence. The template argument // UnsignedChar is the unsigned version of Char, which is the type of c. template void PrintCharAndCodeTo(Char c, ostream* os) { // First, print c as a literal in the most readable form we can find. *os << ((sizeof(c) > 1) ? "L'" : "'"); const CharFormat format = PrintAsCharLiteralTo(c, os); *os << "'"; // To aid user debugging, we also print c's code in decimal, unless // it's 0 (in which case c was printed as '\\0', making the code // obvious). if (c == 0) return; *os << " (" << static_cast(c); // For more convenience, we print c's code again in hexidecimal, // unless c was already printed in the form '\x##' or the code is in // [1, 9]. if (format == kHexEscape || (1 <= c && c <= 9)) { // Do nothing. } else { *os << ", 0x" << String::FormatHexInt(static_cast(c)); } *os << ")"; } void PrintTo(unsigned char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); } void PrintTo(signed char c, ::std::ostream* os) { PrintCharAndCodeTo(c, os); } // Prints a wchar_t as a symbol if it is printable or as its internal // code otherwise and also as its code. L'\0' is printed as "L'\\0'". void PrintTo(wchar_t wc, ostream* os) { PrintCharAndCodeTo(wc, os); } // Prints the given array of characters to the ostream. CharType must be either // char or wchar_t. // The array starts at begin, the length is len, it may include '\0' characters // and may not be NUL-terminated. template static void PrintCharsAsStringTo( const CharType* begin, size_t len, ostream* os) { const char* const kQuoteBegin = sizeof(CharType) == 1 ? "\"" : "L\""; *os << kQuoteBegin; bool is_previous_hex = false; for (size_t index = 0; index < len; ++index) { const CharType cur = begin[index]; if (is_previous_hex && IsXDigit(cur)) { // Previous character is of '\x..' form and this character can be // interpreted as another hexadecimal digit in its number. Break string to // disambiguate. *os << "\" " << kQuoteBegin; } is_previous_hex = PrintAsStringLiteralTo(cur, os) == kHexEscape; } *os << "\""; } // Prints a (const) char/wchar_t array of 'len' elements, starting at address // 'begin'. CharType must be either char or wchar_t. 
template static void UniversalPrintCharArray( const CharType* begin, size_t len, ostream* os) { // The code // const char kFoo[] = "foo"; // generates an array of 4, not 3, elements, with the last one being '\0'. // // Therefore when printing a char array, we don't print the last element if // it's '\0', such that the output matches the string literal as it's // written in the source code. if (len > 0 && begin[len - 1] == '\0') { PrintCharsAsStringTo(begin, len - 1, os); return; } // If, however, the last element in the array is not '\0', e.g. // const char kFoo[] = { 'f', 'o', 'o' }; // we must print the entire array. We also print a message to indicate // that the array is not NUL-terminated. PrintCharsAsStringTo(begin, len, os); *os << " (no terminating NUL)"; } // Prints a (const) char array of 'len' elements, starting at address 'begin'. void UniversalPrintArray(const char* begin, size_t len, ostream* os) { UniversalPrintCharArray(begin, len, os); } // Prints a (const) wchar_t array of 'len' elements, starting at address // 'begin'. void UniversalPrintArray(const wchar_t* begin, size_t len, ostream* os) { UniversalPrintCharArray(begin, len, os); } // Prints the given C string to the ostream. void PrintTo(const char* s, ostream* os) { if (s == NULL) { *os << "NULL"; } else { *os << ImplicitCast_(s) << " pointing to "; PrintCharsAsStringTo(s, strlen(s), os); } } // MSVC compiler can be configured to define whar_t as a typedef // of unsigned short. Defining an overload for const wchar_t* in that case // would cause pointers to unsigned shorts be printed as wide strings, // possibly accessing more memory than intended and causing invalid // memory accesses. MSVC defines _NATIVE_WCHAR_T_DEFINED symbol when // wchar_t is implemented as a native type. #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED) // Prints the given wide C string to the ostream. void PrintTo(const wchar_t* s, ostream* os) { if (s == NULL) { *os << "NULL"; } else { *os << ImplicitCast_(s) << " pointing to "; PrintCharsAsStringTo(s, wcslen(s), os); } } #endif // wchar_t is native // Prints a ::string object. #if GTEST_HAS_GLOBAL_STRING void PrintStringTo(const ::string& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_GLOBAL_STRING void PrintStringTo(const ::std::string& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } // Prints a ::wstring object. #if GTEST_HAS_GLOBAL_WSTRING void PrintWideStringTo(const ::wstring& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_GLOBAL_WSTRING #if GTEST_HAS_STD_WSTRING void PrintWideStringTo(const ::std::wstring& s, ostream* os) { PrintCharsAsStringTo(s.data(), s.size(), os); } #endif // GTEST_HAS_STD_WSTRING } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-test-part.cc000066400000000000000000000100771346026536000224450ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // // The Google C++ Testing Framework (Google Test) #include "gtest/gtest-test-part.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { using internal::GetUnitTestImpl; // Gets the summary of the failure message by omitting the stack trace // in it. std::string TestPartResult::ExtractSummary(const char* message) { const char* const stack_trace = strstr(message, internal::kStackTraceMarker); return stack_trace == NULL ? message : std::string(message, stack_trace); } // Prints a TestPartResult object. std::ostream& operator<<(std::ostream& os, const TestPartResult& result) { return os << result.file_name() << ":" << result.line_number() << ": " << (result.type() == TestPartResult::kSuccess ? "Success" : result.type() == TestPartResult::kFatalFailure ? "Fatal failure" : "Non-fatal failure") << ":\n" << result.message() << std::endl; } // Appends a TestPartResult to the array. void TestPartResultArray::Append(const TestPartResult& result) { array_.push_back(result); } // Returns the TestPartResult at the given index (0-based). const TestPartResult& TestPartResultArray::GetTestPartResult(int index) const { if (index < 0 || index >= size()) { printf("\nInvalid index (%d) into TestPartResultArray.\n", index); internal::posix::Abort(); } return array_[index]; } // Returns the number of TestPartResult objects in the array. 
int TestPartResultArray::size() const { return static_cast(array_.size()); } namespace internal { HasNewFatalFailureHelper::HasNewFatalFailureHelper() : has_new_fatal_failure_(false), original_reporter_(GetUnitTestImpl()-> GetTestPartResultReporterForCurrentThread()) { GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread(this); } HasNewFatalFailureHelper::~HasNewFatalFailureHelper() { GetUnitTestImpl()->SetTestPartResultReporterForCurrentThread( original_reporter_); } void HasNewFatalFailureHelper::ReportTestPartResult( const TestPartResult& result) { if (result.fatally_failed()) has_new_fatal_failure_ = true; original_reporter_->ReportTestPartResult(result); } } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest-typed-test.cc000066400000000000000000000072621346026536000226260ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest-typed-test.h" #include "gtest/gtest.h" namespace testing { namespace internal { #if GTEST_HAS_TYPED_TEST_P // Skips to the first non-space char in str. Returns an empty string if str // contains only whitespace characters. static const char* SkipSpaces(const char* str) { while (IsSpace(*str)) str++; return str; } // Verifies that registered_tests match the test names in // defined_test_names_; returns registered_tests if successful, or // aborts the program otherwise. const char* TypedTestCasePState::VerifyRegisteredTestNames( const char* file, int line, const char* registered_tests) { typedef ::std::set::const_iterator DefinedTestIter; registered_ = true; // Skip initial whitespace in registered_tests since some // preprocessors prefix stringizied literals with whitespace. 
registered_tests = SkipSpaces(registered_tests); Message errors; ::std::set tests; for (const char* names = registered_tests; names != NULL; names = SkipComma(names)) { const std::string name = GetPrefixUntilComma(names); if (tests.count(name) != 0) { errors << "Test " << name << " is listed more than once.\n"; continue; } bool found = false; for (DefinedTestIter it = defined_test_names_.begin(); it != defined_test_names_.end(); ++it) { if (name == *it) { found = true; break; } } if (found) { tests.insert(name); } else { errors << "No test named " << name << " can be found in this test case.\n"; } } for (DefinedTestIter it = defined_test_names_.begin(); it != defined_test_names_.end(); ++it) { if (tests.count(*it) == 0) { errors << "You forgot to list test " << *it << ".\n"; } } const std::string& errors_str = errors.GetString(); if (errors_str != "") { fprintf(stderr, "%s %s", FormatFileLocation(file, line).c_str(), errors_str.c_str()); fflush(stderr); posix::Abort(); } return registered_tests; } #endif // GTEST_HAS_TYPED_TEST_P } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest.cc000066400000000000000000005475161346026536000205410ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // The Google C++ Testing Framework (Google Test) #include "gtest/gtest.h" #include "gtest/gtest-spi.h" #include #include #include #include #include #include #include #include #include #include #include #include // NOLINT #include #include #if GTEST_OS_LINUX // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT # include // NOLINT # include // NOLINT // Declares vsnprintf(). This header is not available on Windows. 
# include // NOLINT # include // NOLINT # include // NOLINT # include // NOLINT # include #elif GTEST_OS_SYMBIAN # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT #elif GTEST_OS_ZOS # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT // On z/OS we additionally need strings.h for strcasecmp. # include // NOLINT #elif GTEST_OS_WINDOWS_MOBILE // We are on Windows CE. # include // NOLINT #elif GTEST_OS_WINDOWS // We are on Windows proper. # include // NOLINT # include // NOLINT # include // NOLINT # include // NOLINT # if GTEST_OS_WINDOWS_MINGW // MinGW has gettimeofday() but not _ftime64(). // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). // TODO(kenton@google.com): There are other ways to get the time on // Windows, like GetTickCount() or GetSystemTimeAsFileTime(). MinGW // supports these. consider using them instead. # define GTEST_HAS_GETTIMEOFDAY_ 1 # include // NOLINT # endif // GTEST_OS_WINDOWS_MINGW // cpplint thinks that the header is already included, so we want to // silence it. # include // NOLINT #else // Assume other platforms have gettimeofday(). // TODO(kenton@google.com): Use autoconf to detect availability of // gettimeofday(). # define GTEST_HAS_GETTIMEOFDAY_ 1 // cpplint thinks that the header is already included, so we want to // silence it. # include // NOLINT # include // NOLINT #endif // GTEST_OS_LINUX #if GTEST_HAS_EXCEPTIONS # include #endif #if GTEST_CAN_STREAM_RESULTS_ # include // NOLINT # include // NOLINT #endif // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ #if GTEST_OS_WINDOWS # define vsnprintf _vsnprintf #endif // GTEST_OS_WINDOWS namespace testing { using internal::CountIf; using internal::ForEach; using internal::GetElementOr; using internal::Shuffle; // Constants. // A test whose test case name or test name matches this filter is // disabled and not run. static const char kDisableTestFilter[] = "DISABLED_*:*/DISABLED_*"; // A test case whose name matches this filter is considered a death // test case and will be run before test cases whose name doesn't // match this filter. static const char kDeathTestCaseFilter[] = "*DeathTest:*DeathTest/*"; // A test filter that matches everything. static const char kUniversalFilter[] = "*"; // The default output file for XML output. static const char kDefaultOutputFile[] = "test_detail.xml"; // The environment variable name for the test shard index. static const char kTestShardIndex[] = "GTEST_SHARD_INDEX"; // The environment variable name for the total number of test shards. static const char kTestTotalShards[] = "GTEST_TOTAL_SHARDS"; // The environment variable name for the test shard status file. static const char kTestShardStatusFile[] = "GTEST_SHARD_STATUS_FILE"; namespace internal { // The text used in failure messages to indicate the start of the // stack trace. const char kStackTraceMarker[] = "\nStack trace:\n"; // g_help_flag is true iff the --help flag or an equivalent form is // specified on the command line. 
bool g_help_flag = false; } // namespace internal static const char* GetDefaultFilter() { return kUniversalFilter; } GTEST_DEFINE_bool_( also_run_disabled_tests, internal::BoolFromGTestEnv("also_run_disabled_tests", false), "Run disabled tests too, in addition to the tests normally being run."); GTEST_DEFINE_bool_( break_on_failure, internal::BoolFromGTestEnv("break_on_failure", false), "True iff a failed assertion should be a debugger break-point."); GTEST_DEFINE_bool_( catch_exceptions, internal::BoolFromGTestEnv("catch_exceptions", true), "True iff " GTEST_NAME_ " should catch exceptions and treat them as test failures."); GTEST_DEFINE_string_( color, internal::StringFromGTestEnv("color", "auto"), "Whether to use colors in the output. Valid values: yes, no, " "and auto. 'auto' means to use colors if the output is " "being sent to a terminal and the TERM environment variable " "is set to a terminal type that supports colors."); GTEST_DEFINE_string_( filter, internal::StringFromGTestEnv("filter", GetDefaultFilter()), "A colon-separated list of glob (not regex) patterns " "for filtering the tests to run, optionally followed by a " "'-' and a : separated list of negative patterns (tests to " "exclude). A test is run if it matches one of the positive " "patterns and does not match any of the negative patterns."); GTEST_DEFINE_bool_(list_tests, false, "List all tests without running them."); GTEST_DEFINE_string_( output, internal::StringFromGTestEnv("output", ""), "A format (currently must be \"xml\"), optionally followed " "by a colon and an output file name or directory. A directory " "is indicated by a trailing pathname separator. " "Examples: \"xml:filename.xml\", \"xml::directoryname/\". " "If a directory is specified, output files will be created " "within that directory, with file-names based on the test " "executable's name and, if necessary, made unique by adding " "digits."); GTEST_DEFINE_bool_( print_time, internal::BoolFromGTestEnv("print_time", true), "True iff " GTEST_NAME_ " should display elapsed time in text output."); GTEST_DEFINE_int32_( random_seed, internal::Int32FromGTestEnv("random_seed", 0), "Random number seed to use when shuffling test orders. Must be in range " "[1, 99999], or 0 to use a seed based on the current time."); GTEST_DEFINE_int32_( repeat, internal::Int32FromGTestEnv("repeat", 1), "How many times to repeat each test. Specify a negative number " "for repeating forever. Useful for shaking out flaky tests."); GTEST_DEFINE_bool_( show_internal_stack_frames, false, "True iff " GTEST_NAME_ " should include internal stack frames when " "printing test failure stack traces."); GTEST_DEFINE_bool_( shuffle, internal::BoolFromGTestEnv("shuffle", false), "True iff " GTEST_NAME_ " should randomize tests' order on every run."); GTEST_DEFINE_int32_( stack_trace_depth, internal::Int32FromGTestEnv("stack_trace_depth", kMaxStackTraceDepth), "The maximum number of stack frames to print when an " "assertion fails. The valid range is 0 through 100, inclusive."); GTEST_DEFINE_string_( stream_result_to, internal::StringFromGTestEnv("stream_result_to", ""), "This flag specifies the host name and the port number on which to stream " "test results. Example: \"localhost:555\". 
The flag is effective only on " "Linux."); GTEST_DEFINE_bool_( throw_on_failure, internal::BoolFromGTestEnv("throw_on_failure", false), "When this flag is specified, a failed assertion will throw an exception " "if exceptions are enabled or exit the program with a non-zero code " "otherwise."); namespace internal { // Generates a random number from [0, range), using a Linear // Congruential Generator (LCG). Crashes if 'range' is 0 or greater // than kMaxRange. UInt32 Random::Generate(UInt32 range) { // These constants are the same as are used in glibc's rand(3). state_ = (1103515245U*state_ + 12345U) % kMaxRange; GTEST_CHECK_(range > 0) << "Cannot generate a number in the range [0, 0)."; GTEST_CHECK_(range <= kMaxRange) << "Generation of a number in [0, " << range << ") was requested, " << "but this can only generate numbers in [0, " << kMaxRange << ")."; // Converting via modulus introduces a bit of downward bias, but // it's simple, and a linear congruential generator isn't too good // to begin with. return state_ % range; } // GTestIsInitialized() returns true iff the user has initialized // Google Test. Useful for catching the user mistake of not initializing // Google Test before calling RUN_ALL_TESTS(). // // A user must call testing::InitGoogleTest() to initialize Google // Test. g_init_gtest_count is set to the number of times // InitGoogleTest() has been called. We don't protect this variable // under a mutex as it is only accessed in the main thread. GTEST_API_ int g_init_gtest_count = 0; static bool GTestIsInitialized() { return g_init_gtest_count != 0; } // Iterates over a vector of TestCases, keeping a running sum of the // results of calling a given int-returning method on each. // Returns the sum. static int SumOverTestCaseList(const std::vector& case_list, int (TestCase::*method)() const) { int sum = 0; for (size_t i = 0; i < case_list.size(); i++) { sum += (case_list[i]->*method)(); } return sum; } // Returns true iff the test case passed. static bool TestCasePassed(const TestCase* test_case) { return test_case->should_run() && test_case->Passed(); } // Returns true iff the test case failed. static bool TestCaseFailed(const TestCase* test_case) { return test_case->should_run() && test_case->Failed(); } // Returns true iff test_case contains at least one test that should // run. static bool ShouldRunTestCase(const TestCase* test_case) { return test_case->should_run(); } // AssertHelper constructor. AssertHelper::AssertHelper(TestPartResult::Type type, const char* file, int line, const char* message) : data_(new AssertHelperData(type, file, line, message)) { } AssertHelper::~AssertHelper() { delete data_; } // Message assignment, for assertion streaming support. void AssertHelper::operator=(const Message& message) const { UnitTest::GetInstance()-> AddTestPartResult(data_->type, data_->file, data_->line, AppendUserMessage(data_->message, message), UnitTest::GetInstance()->impl() ->CurrentOsStackTraceExceptTop(1) // Skips the stack frame for this function itself. ); // NOLINT } // Mutex for linked pointers. GTEST_API_ GTEST_DEFINE_STATIC_MUTEX_(g_linked_ptr_mutex); // Application pathname gotten in InitGoogleTest. std::string g_executable_path; // Returns the current application's name, removing directory path if that // is present. 
FilePath GetCurrentExecutableName() { FilePath result; #if GTEST_OS_WINDOWS result.Set(FilePath(g_executable_path).RemoveExtension("exe")); #else result.Set(FilePath(g_executable_path)); #endif // GTEST_OS_WINDOWS return result.RemoveDirectoryName(); } // Functions for processing the gtest_output flag. // Returns the output format, or "" for normal printed output. std::string UnitTestOptions::GetOutputFormat() { const char* const gtest_output_flag = GTEST_FLAG(output).c_str(); if (gtest_output_flag == NULL) return std::string(""); const char* const colon = strchr(gtest_output_flag, ':'); return (colon == NULL) ? std::string(gtest_output_flag) : std::string(gtest_output_flag, colon - gtest_output_flag); } // Returns the name of the requested output file, or the default if none // was explicitly specified. std::string UnitTestOptions::GetAbsolutePathToOutputFile() { const char* const gtest_output_flag = GTEST_FLAG(output).c_str(); if (gtest_output_flag == NULL) return ""; const char* const colon = strchr(gtest_output_flag, ':'); if (colon == NULL) return internal::FilePath::ConcatPaths( internal::FilePath( UnitTest::GetInstance()->original_working_dir()), internal::FilePath(kDefaultOutputFile)).string(); internal::FilePath output_name(colon + 1); if (!output_name.IsAbsolutePath()) // TODO(wan@google.com): on Windows \some\path is not an absolute // path (as its meaning depends on the current drive), yet the // following logic for turning it into an absolute path is wrong. // Fix it. output_name = internal::FilePath::ConcatPaths( internal::FilePath(UnitTest::GetInstance()->original_working_dir()), internal::FilePath(colon + 1)); if (!output_name.IsDirectory()) return output_name.string(); internal::FilePath result(internal::FilePath::GenerateUniqueFileName( output_name, internal::GetCurrentExecutableName(), GetOutputFormat().c_str())); return result.string(); } // Returns true iff the wildcard pattern matches the string. The // first ':' or '\0' character in pattern marks the end of it. // // This recursive algorithm isn't very efficient, but is clear and // works well enough for matching test names, which are short. bool UnitTestOptions::PatternMatchesString(const char *pattern, const char *str) { switch (*pattern) { case '\0': case ':': // Either ':' or '\0' marks the end of the pattern. return *str == '\0'; case '?': // Matches any single character. return *str != '\0' && PatternMatchesString(pattern + 1, str + 1); case '*': // Matches any string (possibly empty) of characters. return (*str != '\0' && PatternMatchesString(pattern, str + 1)) || PatternMatchesString(pattern + 1, str); default: // Non-special character. Matches itself. return *pattern == *str && PatternMatchesString(pattern + 1, str + 1); } } bool UnitTestOptions::MatchesFilter( const std::string& name, const char* filter) { const char *cur_pattern = filter; for (;;) { if (PatternMatchesString(cur_pattern, name.c_str())) { return true; } // Finds the next pattern in the filter. cur_pattern = strchr(cur_pattern, ':'); // Returns if no more pattern can be found. if (cur_pattern == NULL) { return false; } // Skips the pattern separater (the ':' character). cur_pattern++; } } // Returns true iff the user-specified filter matches the test case // name and the test name. bool UnitTestOptions::FilterMatchesTest(const std::string &test_case_name, const std::string &test_name) { const std::string& full_name = test_case_name + "." 
+ test_name.c_str(); // Split --gtest_filter at '-', if there is one, to separate into // positive filter and negative filter portions const char* const p = GTEST_FLAG(filter).c_str(); const char* const dash = strchr(p, '-'); std::string positive; std::string negative; if (dash == NULL) { positive = GTEST_FLAG(filter).c_str(); // Whole string is a positive filter negative = ""; } else { positive = std::string(p, dash); // Everything up to the dash negative = std::string(dash + 1); // Everything after the dash if (positive.empty()) { // Treat '-test1' as the same as '*-test1' positive = kUniversalFilter; } } // A filter is a colon-separated list of patterns. It matches a // test if any pattern in it matches the test. return (MatchesFilter(full_name, positive.c_str()) && !MatchesFilter(full_name, negative.c_str())); } #if GTEST_HAS_SEH // Returns EXCEPTION_EXECUTE_HANDLER if Google Test should handle the // given SEH exception, or EXCEPTION_CONTINUE_SEARCH otherwise. // This function is useful as an __except condition. int UnitTestOptions::GTestShouldProcessSEH(DWORD exception_code) { // Google Test should handle a SEH exception if: // 1. the user wants it to, AND // 2. this is not a breakpoint exception, AND // 3. this is not a C++ exception (VC++ implements them via SEH, // apparently). // // SEH exception code for C++ exceptions. // (see http://support.microsoft.com/kb/185294 for more information). const DWORD kCxxExceptionCode = 0xe06d7363; bool should_handle = true; if (!GTEST_FLAG(catch_exceptions)) should_handle = false; else if (exception_code == EXCEPTION_BREAKPOINT) should_handle = false; else if (exception_code == kCxxExceptionCode) should_handle = false; return should_handle ? EXCEPTION_EXECUTE_HANDLER : EXCEPTION_CONTINUE_SEARCH; } #endif // GTEST_HAS_SEH } // namespace internal // The c'tor sets this object as the test part result reporter used by // Google Test. The 'result' parameter specifies where to report the // results. Intercepts only failures from the current thread. ScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter( TestPartResultArray* result) : intercept_mode_(INTERCEPT_ONLY_CURRENT_THREAD), result_(result) { Init(); } // The c'tor sets this object as the test part result reporter used by // Google Test. The 'result' parameter specifies where to report the // results. ScopedFakeTestPartResultReporter::ScopedFakeTestPartResultReporter( InterceptMode intercept_mode, TestPartResultArray* result) : intercept_mode_(intercept_mode), result_(result) { Init(); } void ScopedFakeTestPartResultReporter::Init() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); if (intercept_mode_ == INTERCEPT_ALL_THREADS) { old_reporter_ = impl->GetGlobalTestPartResultReporter(); impl->SetGlobalTestPartResultReporter(this); } else { old_reporter_ = impl->GetTestPartResultReporterForCurrentThread(); impl->SetTestPartResultReporterForCurrentThread(this); } } // The d'tor restores the test part result reporter used by Google Test // before. ScopedFakeTestPartResultReporter::~ScopedFakeTestPartResultReporter() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); if (intercept_mode_ == INTERCEPT_ALL_THREADS) { impl->SetGlobalTestPartResultReporter(old_reporter_); } else { impl->SetTestPartResultReporterForCurrentThread(old_reporter_); } } // Increments the test part result count and remembers the result. // This method is from the TestPartResultReporterInterface interface. 
void ScopedFakeTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { result_->Append(result); } namespace internal { // Returns the type ID of ::testing::Test. We should always call this // instead of GetTypeId< ::testing::Test>() to get the type ID of // testing::Test. This is to work around a suspected linker bug when // using Google Test as a framework on Mac OS X. The bug causes // GetTypeId< ::testing::Test>() to return different values depending // on whether the call is from the Google Test framework itself or // from user test code. GetTestTypeId() is guaranteed to always // return the same value, as it always calls GetTypeId<>() from the // gtest.cc, which is within the Google Test framework. TypeId GetTestTypeId() { return GetTypeId(); } // The value of GetTestTypeId() as seen from within the Google Test // library. This is solely for testing GetTestTypeId(). extern const TypeId kTestTypeIdInGoogleTest = GetTestTypeId(); // This predicate-formatter checks that 'results' contains a test part // failure of the given type and that the failure message contains the // given substring. AssertionResult HasOneFailure(const char* /* results_expr */, const char* /* type_expr */, const char* /* substr_expr */, const TestPartResultArray& results, TestPartResult::Type type, const string& substr) { const std::string expected(type == TestPartResult::kFatalFailure ? "1 fatal failure" : "1 non-fatal failure"); Message msg; if (results.size() != 1) { msg << "Expected: " << expected << "\n" << " Actual: " << results.size() << " failures"; for (int i = 0; i < results.size(); i++) { msg << "\n" << results.GetTestPartResult(i); } return AssertionFailure() << msg; } const TestPartResult& r = results.GetTestPartResult(0); if (r.type() != type) { return AssertionFailure() << "Expected: " << expected << "\n" << " Actual:\n" << r; } if (strstr(r.message(), substr.c_str()) == NULL) { return AssertionFailure() << "Expected: " << expected << " containing \"" << substr << "\"\n" << " Actual:\n" << r; } return AssertionSuccess(); } // The constructor of SingleFailureChecker remembers where to look up // test part results, what type of failure we expect, and what // substring the failure message should contain. SingleFailureChecker:: SingleFailureChecker( const TestPartResultArray* results, TestPartResult::Type type, const string& substr) : results_(results), type_(type), substr_(substr) {} // The destructor of SingleFailureChecker verifies that the given // TestPartResultArray contains exactly one failure that has the given // type and contains the given substring. If that's not the case, a // non-fatal failure will be generated. SingleFailureChecker::~SingleFailureChecker() { EXPECT_PRED_FORMAT3(HasOneFailure, *results_, type_, substr_); } DefaultGlobalTestPartResultReporter::DefaultGlobalTestPartResultReporter( UnitTestImpl* unit_test) : unit_test_(unit_test) {} void DefaultGlobalTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { unit_test_->current_test_result()->AddTestPartResult(result); unit_test_->listeners()->repeater()->OnTestPartResult(result); } DefaultPerThreadTestPartResultReporter::DefaultPerThreadTestPartResultReporter( UnitTestImpl* unit_test) : unit_test_(unit_test) {} void DefaultPerThreadTestPartResultReporter::ReportTestPartResult( const TestPartResult& result) { unit_test_->GetGlobalTestPartResultReporter()->ReportTestPartResult(result); } // Returns the global test part result reporter. 
TestPartResultReporterInterface* UnitTestImpl::GetGlobalTestPartResultReporter() { internal::MutexLock lock(&global_test_part_result_reporter_mutex_); return global_test_part_result_repoter_; } // Sets the global test part result reporter. void UnitTestImpl::SetGlobalTestPartResultReporter( TestPartResultReporterInterface* reporter) { internal::MutexLock lock(&global_test_part_result_reporter_mutex_); global_test_part_result_repoter_ = reporter; } // Returns the test part result reporter for the current thread. TestPartResultReporterInterface* UnitTestImpl::GetTestPartResultReporterForCurrentThread() { return per_thread_test_part_result_reporter_.get(); } // Sets the test part result reporter for the current thread. void UnitTestImpl::SetTestPartResultReporterForCurrentThread( TestPartResultReporterInterface* reporter) { per_thread_test_part_result_reporter_.set(reporter); } // Gets the number of successful test cases. int UnitTestImpl::successful_test_case_count() const { return CountIf(test_cases_, TestCasePassed); } // Gets the number of failed test cases. int UnitTestImpl::failed_test_case_count() const { return CountIf(test_cases_, TestCaseFailed); } // Gets the number of all test cases. int UnitTestImpl::total_test_case_count() const { return static_cast(test_cases_.size()); } // Gets the number of all test cases that contain at least one test // that should run. int UnitTestImpl::test_case_to_run_count() const { return CountIf(test_cases_, ShouldRunTestCase); } // Gets the number of successful tests. int UnitTestImpl::successful_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::successful_test_count); } // Gets the number of failed tests. int UnitTestImpl::failed_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::failed_test_count); } // Gets the number of disabled tests that will be reported in the XML report. int UnitTestImpl::reportable_disabled_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::reportable_disabled_test_count); } // Gets the number of disabled tests. int UnitTestImpl::disabled_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::disabled_test_count); } // Gets the number of tests to be printed in the XML report. int UnitTestImpl::reportable_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::reportable_test_count); } // Gets the number of all tests. int UnitTestImpl::total_test_count() const { return SumOverTestCaseList(test_cases_, &TestCase::total_test_count); } // Gets the number of tests that should run. int UnitTestImpl::test_to_run_count() const { return SumOverTestCaseList(test_cases_, &TestCase::test_to_run_count); } // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // CurrentOsStackTraceExceptTop(1), Foo() will be included in the // trace but Bar() and CurrentOsStackTraceExceptTop() won't. std::string UnitTestImpl::CurrentOsStackTraceExceptTop(int skip_count) { (void)skip_count; return ""; } // Returns the current time in milliseconds. TimeInMillis GetTimeInMillis() { #if GTEST_OS_WINDOWS_MOBILE || defined(__BORLANDC__) // Difference between 1970-01-01 and 1601-01-01 in milliseconds. 
// http://analogous.blogspot.com/2005/04/epoch.html const TimeInMillis kJavaEpochToWinFileTimeDelta = static_cast(116444736UL) * 100000UL; const DWORD kTenthMicrosInMilliSecond = 10000; SYSTEMTIME now_systime; FILETIME now_filetime; ULARGE_INTEGER now_int64; // TODO(kenton@google.com): Shouldn't this just use // GetSystemTimeAsFileTime()? GetSystemTime(&now_systime); if (SystemTimeToFileTime(&now_systime, &now_filetime)) { now_int64.LowPart = now_filetime.dwLowDateTime; now_int64.HighPart = now_filetime.dwHighDateTime; now_int64.QuadPart = (now_int64.QuadPart / kTenthMicrosInMilliSecond) - kJavaEpochToWinFileTimeDelta; return now_int64.QuadPart; } return 0; #elif GTEST_OS_WINDOWS && !GTEST_HAS_GETTIMEOFDAY_ __timeb64 now; # ifdef _MSC_VER // MSVC 8 deprecates _ftime64(), so we want to suppress warning 4996 // (deprecated function) there. // TODO(kenton@google.com): Use GetTickCount()? Or use // SystemTimeToFileTime() # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4996) // Temporarily disables warning 4996. _ftime64(&now); # pragma warning(pop) // Restores the warning state. # else _ftime64(&now); # endif // _MSC_VER return static_cast(now.time) * 1000 + now.millitm; #elif GTEST_HAS_GETTIMEOFDAY_ struct timeval now; gettimeofday(&now, NULL); return static_cast(now.tv_sec) * 1000 + now.tv_usec / 1000; #else # error "Don't know how to get the current time on your system." #endif } // Utilities // class String. #if GTEST_OS_WINDOWS_MOBILE // Creates a UTF-16 wide string from the given ANSI string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the wide string, or NULL if the // input is NULL. LPCWSTR String::AnsiToUtf16(const char* ansi) { if (!ansi) return NULL; const int length = strlen(ansi); const int unicode_length = MultiByteToWideChar(CP_ACP, 0, ansi, length, NULL, 0); WCHAR* unicode = new WCHAR[unicode_length + 1]; MultiByteToWideChar(CP_ACP, 0, ansi, length, unicode, unicode_length); unicode[unicode_length] = 0; return unicode; } // Creates an ANSI string from the given wide string, allocating // memory using new. The caller is responsible for deleting the return // value using delete[]. Returns the ANSI string, or NULL if the // input is NULL. const char* String::Utf16ToAnsi(LPCWSTR utf16_str) { if (!utf16_str) return NULL; const int ansi_length = WideCharToMultiByte(CP_ACP, 0, utf16_str, -1, NULL, 0, NULL, NULL); char* ansi = new char[ansi_length + 1]; WideCharToMultiByte(CP_ACP, 0, utf16_str, -1, ansi, ansi_length, NULL, NULL); ansi[ansi_length] = 0; return ansi; } #endif // GTEST_OS_WINDOWS_MOBILE // Compares two C strings. Returns true iff they have the same content. // // Unlike strcmp(), this function can handle NULL argument(s). A NULL // C string is considered different to any non-NULL C string, // including the empty string. bool String::CStringEquals(const char * lhs, const char * rhs) { if ( lhs == NULL ) return rhs == NULL; if ( rhs == NULL ) return false; return strcmp(lhs, rhs) == 0; } #if GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING // Converts an array of wide chars to a narrow string using the UTF-8 // encoding, and streams the result to the given Message object. 
static void StreamWideCharsToMessage(const wchar_t* wstr, size_t length, Message* msg) { for (size_t i = 0; i != length; ) { // NOLINT if (wstr[i] != L'\0') { *msg << WideStringToUtf8(wstr + i, static_cast(length - i)); while (i != length && wstr[i] != L'\0') i++; } else { *msg << '\0'; i++; } } } #endif // GTEST_HAS_STD_WSTRING || GTEST_HAS_GLOBAL_WSTRING } // namespace internal // Constructs an empty Message. // We allocate the stringstream separately because otherwise each use of // ASSERT/EXPECT in a procedure adds over 200 bytes to the procedure's // stack frame leading to huge stack frames in some cases; gcc does not reuse // the stack space. Message::Message() : ss_(new ::std::stringstream) { // By default, we want there to be enough precision when printing // a double to a Message. *ss_ << std::setprecision(std::numeric_limits::digits10 + 2); } // These two overloads allow streaming a wide C string to a Message // using the UTF-8 encoding. Message& Message::operator <<(const wchar_t* wide_c_str) { return *this << internal::String::ShowWideCString(wide_c_str); } Message& Message::operator <<(wchar_t* wide_c_str) { return *this << internal::String::ShowWideCString(wide_c_str); } #if GTEST_HAS_STD_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& Message::operator <<(const ::std::wstring& wstr) { internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this); return *this; } #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_GLOBAL_WSTRING // Converts the given wide string to a narrow string using the UTF-8 // encoding, and streams the result to this Message object. Message& Message::operator <<(const ::wstring& wstr) { internal::StreamWideCharsToMessage(wstr.c_str(), wstr.length(), this); return *this; } #endif // GTEST_HAS_GLOBAL_WSTRING // Gets the text streamed to this object so far as an std::string. // Each '\0' character in the buffer is replaced with "\\0". std::string Message::GetString() const { return internal::StringStreamToString(ss_.get()); } // AssertionResult constructors. // Used in EXPECT_TRUE/FALSE(assertion_result). AssertionResult::AssertionResult(const AssertionResult& other) : success_(other.success_), message_(other.message_.get() != NULL ? new ::std::string(*other.message_) : static_cast< ::std::string*>(NULL)) { } // Returns the assertion's negation. Used with EXPECT/ASSERT_FALSE. AssertionResult AssertionResult::operator!() const { AssertionResult negation(!success_); if (message_.get() != NULL) negation << *message_; return negation; } // Makes a successful assertion result. AssertionResult AssertionSuccess() { return AssertionResult(true); } // Makes a failed assertion result. AssertionResult AssertionFailure() { return AssertionResult(false); } // Makes a failed assertion result with the given failure message. // Deprecated; use AssertionFailure() << message. AssertionResult AssertionFailure(const Message& message) { return AssertionFailure() << message; } namespace internal { // Constructs and returns the message for an equality assertion // (e.g. ASSERT_EQ, EXPECT_STREQ, etc) failure. // // The first four parameters are the expressions used in the assertion // and their values, as strings. For example, for ASSERT_EQ(foo, bar) // where foo is 5 and bar is 6, we have: // // expected_expression: "foo" // actual_expression: "bar" // expected_value: "5" // actual_value: "6" // // The ignoring_case parameter is true iff the assertion is a // *_STRCASEEQ*. 
When it's true, the string " (ignoring case)" will // be inserted into the message. AssertionResult EqFailure(const char* expected_expression, const char* actual_expression, const std::string& expected_value, const std::string& actual_value, bool ignoring_case) { Message msg; msg << "Value of: " << actual_expression; if (actual_value != actual_expression) { msg << "\n Actual: " << actual_value; } msg << "\nExpected: " << expected_expression; if (ignoring_case) { msg << " (ignoring case)"; } if (expected_value != expected_expression) { msg << "\nWhich is: " << expected_value; } return AssertionFailure() << msg; } // Constructs a failure message for Boolean assertions such as EXPECT_TRUE. std::string GetBoolAssertionFailureMessage( const AssertionResult& assertion_result, const char* expression_text, const char* actual_predicate_value, const char* expected_predicate_value) { const char* actual_message = assertion_result.message(); Message msg; msg << "Value of: " << expression_text << "\n Actual: " << actual_predicate_value; if (actual_message[0] != '\0') msg << " (" << actual_message << ")"; msg << "\nExpected: " << expected_predicate_value; return msg.GetString(); } // Helper function for implementing ASSERT_NEAR. AssertionResult DoubleNearPredFormat(const char* expr1, const char* expr2, const char* abs_error_expr, double val1, double val2, double abs_error) { const double diff = fabs(val1 - val2); if (diff <= abs_error) return AssertionSuccess(); // TODO(wan): do not print the value of an expression if it's // already a literal. return AssertionFailure() << "The difference between " << expr1 << " and " << expr2 << " is " << diff << ", which exceeds " << abs_error_expr << ", where\n" << expr1 << " evaluates to " << val1 << ",\n" << expr2 << " evaluates to " << val2 << ", and\n" << abs_error_expr << " evaluates to " << abs_error << "."; } // Helper template for implementing FloatLE() and DoubleLE(). template AssertionResult FloatingPointLE(const char* expr1, const char* expr2, RawType val1, RawType val2) { // Returns success if val1 is less than val2, if (val1 < val2) { return AssertionSuccess(); } // or if val1 is almost equal to val2. const FloatingPoint lhs(val1), rhs(val2); if (lhs.AlmostEquals(rhs)) { return AssertionSuccess(); } // Note that the above two checks will both fail if either val1 or // val2 is NaN, as the IEEE floating-point standard requires that // any predicate involving a NaN must return false. ::std::stringstream val1_ss; val1_ss << std::setprecision(std::numeric_limits::digits10 + 2) << val1; ::std::stringstream val2_ss; val2_ss << std::setprecision(std::numeric_limits::digits10 + 2) << val2; return AssertionFailure() << "Expected: (" << expr1 << ") <= (" << expr2 << ")\n" << " Actual: " << StringStreamToString(&val1_ss) << " vs " << StringStreamToString(&val2_ss); } } // namespace internal // Asserts that val1 is less than, or almost equal to, val2. Fails // otherwise. In particular, it fails if either val1 or val2 is NaN. AssertionResult FloatLE(const char* expr1, const char* expr2, float val1, float val2) { return internal::FloatingPointLE(expr1, expr2, val1, val2); } // Asserts that val1 is less than, or almost equal to, val2. Fails // otherwise. In particular, it fails if either val1 or val2 is NaN. AssertionResult DoubleLE(const char* expr1, const char* expr2, double val1, double val2) { return internal::FloatingPointLE(expr1, expr2, val1, val2); } namespace internal { // The helper function for {ASSERT|EXPECT}_EQ with int or enum // arguments. 
AssertionResult CmpHelperEQ(const char* expected_expression, const char* actual_expression, BiggestInt expected, BiggestInt actual) { if (expected == actual) { return AssertionSuccess(); } return EqFailure(expected_expression, actual_expression, FormatForComparisonFailureMessage(expected, actual), FormatForComparisonFailureMessage(actual, expected), false); } // A macro for implementing the helper functions needed to implement // ASSERT_?? and EXPECT_?? with integer or enum arguments. It is here // just to avoid copy-and-paste of similar code. #define GTEST_IMPL_CMP_HELPER_(op_name, op)\ AssertionResult CmpHelper##op_name(const char* expr1, const char* expr2, \ BiggestInt val1, BiggestInt val2) {\ if (val1 op val2) {\ return AssertionSuccess();\ } else {\ return AssertionFailure() \ << "Expected: (" << expr1 << ") " #op " (" << expr2\ << "), actual: " << FormatForComparisonFailureMessage(val1, val2)\ << " vs " << FormatForComparisonFailureMessage(val2, val1);\ }\ } // Implements the helper function for {ASSERT|EXPECT}_NE with int or // enum arguments. GTEST_IMPL_CMP_HELPER_(NE, !=) // Implements the helper function for {ASSERT|EXPECT}_LE with int or // enum arguments. GTEST_IMPL_CMP_HELPER_(LE, <=) // Implements the helper function for {ASSERT|EXPECT}_LT with int or // enum arguments. GTEST_IMPL_CMP_HELPER_(LT, < ) // Implements the helper function for {ASSERT|EXPECT}_GE with int or // enum arguments. GTEST_IMPL_CMP_HELPER_(GE, >=) // Implements the helper function for {ASSERT|EXPECT}_GT with int or // enum arguments. GTEST_IMPL_CMP_HELPER_(GT, > ) #undef GTEST_IMPL_CMP_HELPER_ // The helper function for {ASSERT|EXPECT}_STREQ. AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual) { if (String::CStringEquals(expected, actual)) { return AssertionSuccess(); } return EqFailure(expected_expression, actual_expression, PrintToString(expected), PrintToString(actual), false); } // The helper function for {ASSERT|EXPECT}_STRCASEEQ. AssertionResult CmpHelperSTRCASEEQ(const char* expected_expression, const char* actual_expression, const char* expected, const char* actual) { if (String::CaseInsensitiveCStringEquals(expected, actual)) { return AssertionSuccess(); } return EqFailure(expected_expression, actual_expression, PrintToString(expected), PrintToString(actual), true); } // The helper function for {ASSERT|EXPECT}_STRNE. AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2) { if (!String::CStringEquals(s1, s2)) { return AssertionSuccess(); } else { return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << "), actual: \"" << s1 << "\" vs \"" << s2 << "\""; } } // The helper function for {ASSERT|EXPECT}_STRCASENE. AssertionResult CmpHelperSTRCASENE(const char* s1_expression, const char* s2_expression, const char* s1, const char* s2) { if (!String::CaseInsensitiveCStringEquals(s1, s2)) { return AssertionSuccess(); } else { return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << ") (ignoring case), actual: \"" << s1 << "\" vs \"" << s2 << "\""; } } } // namespace internal namespace { // Helper functions for implementing IsSubString() and IsNotSubstring(). // This group of overloaded functions return true iff needle is a // substring of haystack. NULL is considered a substring of itself // only. 
bool IsSubstringPred(const char* needle, const char* haystack) { if (needle == NULL || haystack == NULL) return needle == haystack; return strstr(haystack, needle) != NULL; } bool IsSubstringPred(const wchar_t* needle, const wchar_t* haystack) { if (needle == NULL || haystack == NULL) return needle == haystack; return wcsstr(haystack, needle) != NULL; } // StringType here can be either ::std::string or ::std::wstring. template bool IsSubstringPred(const StringType& needle, const StringType& haystack) { return haystack.find(needle) != StringType::npos; } // This function implements either IsSubstring() or IsNotSubstring(), // depending on the value of the expected_to_be_substring parameter. // StringType here can be const char*, const wchar_t*, ::std::string, // or ::std::wstring. template AssertionResult IsSubstringImpl( bool expected_to_be_substring, const char* needle_expr, const char* haystack_expr, const StringType& needle, const StringType& haystack) { if (IsSubstringPred(needle, haystack) == expected_to_be_substring) return AssertionSuccess(); const bool is_wide_string = sizeof(needle[0]) > 1; const char* const begin_string_quote = is_wide_string ? "L\"" : "\""; return AssertionFailure() << "Value of: " << needle_expr << "\n" << " Actual: " << begin_string_quote << needle << "\"\n" << "Expected: " << (expected_to_be_substring ? "" : "not ") << "a substring of " << haystack_expr << "\n" << "Which is: " << begin_string_quote << haystack << "\""; } } // namespace // IsSubstring() and IsNotSubstring() check whether needle is a // substring of haystack (NULL is considered a substring of itself // only), and return an appropriate error message when they fail. AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const char* needle, const char* haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const wchar_t* needle, const wchar_t* haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::string& needle, const ::std::string& haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } #if GTEST_HAS_STD_WSTRING AssertionResult IsSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack) { return IsSubstringImpl(true, needle_expr, haystack_expr, needle, haystack); } AssertionResult IsNotSubstring( const char* needle_expr, const char* haystack_expr, const ::std::wstring& needle, const ::std::wstring& haystack) { return IsSubstringImpl(false, needle_expr, haystack_expr, needle, haystack); } #endif // GTEST_HAS_STD_WSTRING namespace internal { #if GTEST_OS_WINDOWS namespace { // Helper function for 
IsHRESULT{SuccessFailure} predicates AssertionResult HRESULTFailureHelper(const char* expr, const char* expected, long hr) { // NOLINT # if GTEST_OS_WINDOWS_MOBILE // Windows CE doesn't support FormatMessage. const char error_text[] = ""; # else // Looks up the human-readable system message for the HRESULT code // and since we're not passing any params to FormatMessage, we don't // want inserts expanded. const DWORD kFlags = FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS; const DWORD kBufSize = 4096; // Gets the system's human readable message string for this HRESULT. char error_text[kBufSize] = { '\0' }; DWORD message_length = ::FormatMessageA(kFlags, 0, // no source, we're asking system hr, // the error 0, // no line width restrictions error_text, // output buffer kBufSize, // buf size NULL); // no arguments for inserts // Trims tailing white space (FormatMessage leaves a trailing CR-LF) for (; message_length && IsSpace(error_text[message_length - 1]); --message_length) { error_text[message_length - 1] = '\0'; } # endif // GTEST_OS_WINDOWS_MOBILE const std::string error_hex("0x" + String::FormatHexInt(hr)); return ::testing::AssertionFailure() << "Expected: " << expr << " " << expected << ".\n" << " Actual: " << error_hex << " " << error_text << "\n"; } } // namespace AssertionResult IsHRESULTSuccess(const char* expr, long hr) { // NOLINT if (SUCCEEDED(hr)) { return AssertionSuccess(); } return HRESULTFailureHelper(expr, "succeeds", hr); } AssertionResult IsHRESULTFailure(const char* expr, long hr) { // NOLINT if (FAILED(hr)) { return AssertionSuccess(); } return HRESULTFailureHelper(expr, "fails", hr); } #endif // GTEST_OS_WINDOWS // Utility functions for encoding Unicode text (wide strings) in // UTF-8. // A Unicode code-point can have upto 21 bits, and is encoded in UTF-8 // like this: // // Code-point length Encoding // 0 - 7 bits 0xxxxxxx // 8 - 11 bits 110xxxxx 10xxxxxx // 12 - 16 bits 1110xxxx 10xxxxxx 10xxxxxx // 17 - 21 bits 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx // The maximum code-point a one-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint1 = (static_cast(1) << 7) - 1; // The maximum code-point a two-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint2 = (static_cast(1) << (5 + 6)) - 1; // The maximum code-point a three-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint3 = (static_cast(1) << (4 + 2*6)) - 1; // The maximum code-point a four-byte UTF-8 sequence can represent. const UInt32 kMaxCodePoint4 = (static_cast(1) << (3 + 3*6)) - 1; // Chops off the n lowest bits from a bit pattern. Returns the n // lowest bits. As a side effect, the original bit pattern will be // shifted to the right by n bits. inline UInt32 ChopLowBits(UInt32* bits, int n) { const UInt32 low_bits = *bits & ((static_cast(1) << n) - 1); *bits >>= n; return low_bits; } // Converts a Unicode code point to a narrow string in UTF-8 encoding. // code_point parameter is of type UInt32 because wchar_t may not be // wide enough to contain a code point. // If the code_point is not a valid Unicode code point // (i.e. outside of Unicode range U+0 to U+10FFFF) it will be converted // to "(Invalid Unicode 0xXXXXXXXX)". std::string CodePointToUtf8(UInt32 code_point) { if (code_point > kMaxCodePoint4) { return "(Invalid Unicode 0x" + String::FormatHexInt(code_point) + ")"; } char str[5]; // Big enough for the largest valid code point. 
if (code_point <= kMaxCodePoint1) { str[1] = '\0'; str[0] = static_cast(code_point); // 0xxxxxxx } else if (code_point <= kMaxCodePoint2) { str[2] = '\0'; str[1] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[0] = static_cast(0xC0 | code_point); // 110xxxxx } else if (code_point <= kMaxCodePoint3) { str[3] = '\0'; str[2] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[1] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[0] = static_cast(0xE0 | code_point); // 1110xxxx } else { // code_point <= kMaxCodePoint4 str[4] = '\0'; str[3] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[2] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[1] = static_cast(0x80 | ChopLowBits(&code_point, 6)); // 10xxxxxx str[0] = static_cast(0xF0 | code_point); // 11110xxx } return str; } // The following two functions only make sense if the the system // uses UTF-16 for wide string encoding. All supported systems // with 16 bit wchar_t (Windows, Cygwin, Symbian OS) do use UTF-16. // Determines if the arguments constitute UTF-16 surrogate pair // and thus should be combined into a single Unicode code point // using CreateCodePointFromUtf16SurrogatePair. inline bool IsUtf16SurrogatePair(wchar_t first, wchar_t second) { return sizeof(wchar_t) == 2 && (first & 0xFC00) == 0xD800 && (second & 0xFC00) == 0xDC00; } // Creates a Unicode code point from UTF16 surrogate pair. inline UInt32 CreateCodePointFromUtf16SurrogatePair(wchar_t first, wchar_t second) { const UInt32 mask = (1 << 10) - 1; return (sizeof(wchar_t) == 2) ? (((first & mask) << 10) | (second & mask)) + 0x10000 : // This function should not be called when the condition is // false, but we provide a sensible default in case it is. static_cast(first); } // Converts a wide string to a narrow string in UTF-8 encoding. // The wide string is assumed to have the following encoding: // UTF-16 if sizeof(wchar_t) == 2 (on Windows, Cygwin, Symbian OS) // UTF-32 if sizeof(wchar_t) == 4 (on Linux) // Parameter str points to a null-terminated wide string. // Parameter num_chars may additionally limit the number // of wchar_t characters processed. -1 is used when the entire string // should be processed. // If the string contains code points that are not valid Unicode code points // (i.e. outside of Unicode range U+0 to U+10FFFF) they will be output // as '(Invalid Unicode 0xXXXXXXXX)'. If the string is in UTF16 encoding // and contains invalid UTF-16 surrogate pairs, values in those pairs // will be encoded as individual Unicode characters from Basic Normal Plane. std::string WideStringToUtf8(const wchar_t* str, int num_chars) { if (num_chars == -1) num_chars = static_cast(wcslen(str)); ::std::stringstream stream; for (int i = 0; i < num_chars; ++i) { UInt32 unicode_code_point; if (str[i] == L'\0') { break; } else if (i + 1 < num_chars && IsUtf16SurrogatePair(str[i], str[i + 1])) { unicode_code_point = CreateCodePointFromUtf16SurrogatePair(str[i], str[i + 1]); i++; } else { unicode_code_point = static_cast(str[i]); } stream << CodePointToUtf8(unicode_code_point); } return StringStreamToString(&stream); } // Converts a wide C string to an std::string using the UTF-8 encoding. // NULL will be converted to "(null)". std::string String::ShowWideCString(const wchar_t * wide_c_str) { if (wide_c_str == NULL) return "(null)"; return internal::WideStringToUtf8(wide_c_str, -1); } // Compares two wide C strings. Returns true iff they have the same // content. 
// // Unlike wcscmp(), this function can handle NULL argument(s). A NULL // C string is considered different to any non-NULL C string, // including the empty string. bool String::WideCStringEquals(const wchar_t * lhs, const wchar_t * rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; return wcscmp(lhs, rhs) == 0; } // Helper function for *_STREQ on wide strings. AssertionResult CmpHelperSTREQ(const char* expected_expression, const char* actual_expression, const wchar_t* expected, const wchar_t* actual) { if (String::WideCStringEquals(expected, actual)) { return AssertionSuccess(); } return EqFailure(expected_expression, actual_expression, PrintToString(expected), PrintToString(actual), false); } // Helper function for *_STRNE on wide strings. AssertionResult CmpHelperSTRNE(const char* s1_expression, const char* s2_expression, const wchar_t* s1, const wchar_t* s2) { if (!String::WideCStringEquals(s1, s2)) { return AssertionSuccess(); } return AssertionFailure() << "Expected: (" << s1_expression << ") != (" << s2_expression << "), actual: " << PrintToString(s1) << " vs " << PrintToString(s2); } // Compares two C strings, ignoring case. Returns true iff they have // the same content. // // Unlike strcasecmp(), this function can handle NULL argument(s). A // NULL C string is considered different to any non-NULL C string, // including the empty string. bool String::CaseInsensitiveCStringEquals(const char * lhs, const char * rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; return posix::StrCaseCmp(lhs, rhs) == 0; } // Compares two wide C strings, ignoring case. Returns true iff they // have the same content. // // Unlike wcscasecmp(), this function can handle NULL argument(s). // A NULL C string is considered different to any non-NULL wide C string, // including the empty string. // NB: The implementations on different platforms slightly differ. // On windows, this method uses _wcsicmp which compares according to LC_CTYPE // environment variable. On GNU platform this method uses wcscasecmp // which compares according to LC_CTYPE category of the current locale. // On MacOS X, it uses towlower, which also uses LC_CTYPE category of the // current locale. bool String::CaseInsensitiveWideCStringEquals(const wchar_t* lhs, const wchar_t* rhs) { if (lhs == NULL) return rhs == NULL; if (rhs == NULL) return false; #if GTEST_OS_WINDOWS return _wcsicmp(lhs, rhs) == 0; #elif GTEST_OS_LINUX && !GTEST_OS_LINUX_ANDROID return wcscasecmp(lhs, rhs) == 0; #else // Android, Mac OS X and Cygwin don't define wcscasecmp. // Other unknown OSes may not define it either. wint_t left, right; do { left = towlower(*lhs++); right = towlower(*rhs++); } while (left && left == right); return left == right; #endif // OS selector } // Returns true iff str ends with the given suffix, ignoring case. // Any string is considered to end with an empty suffix. bool String::EndsWithCaseInsensitive( const std::string& str, const std::string& suffix) { const size_t str_len = str.length(); const size_t suffix_len = suffix.length(); return (str_len >= suffix_len) && CaseInsensitiveCStringEquals(str.c_str() + str_len - suffix_len, suffix.c_str()); } // Formats an int value as "%02d". std::string String::FormatIntWidth2(int value) { std::stringstream ss; ss << std::setfill('0') << std::setw(2) << value; return ss.str(); } // Formats an int value as "%X". 
std::string String::FormatHexInt(int value) { std::stringstream ss; ss << std::hex << std::uppercase << value; return ss.str(); } // Formats a byte as "%02X". std::string String::FormatByte(unsigned char value) { std::stringstream ss; ss << std::setfill('0') << std::setw(2) << std::hex << std::uppercase << static_cast(value); return ss.str(); } // Converts the buffer in a stringstream to an std::string, converting NUL // bytes to "\\0" along the way. std::string StringStreamToString(::std::stringstream* ss) { const ::std::string& str = ss->str(); const char* const start = str.c_str(); const char* const end = start + str.length(); std::string result; result.reserve(2 * (end - start)); for (const char* ch = start; ch != end; ++ch) { if (*ch == '\0') { result += "\\0"; // Replaces NUL with "\\0"; } else { result += *ch; } } return result; } // Appends the user-supplied message to the Google-Test-generated message. std::string AppendUserMessage(const std::string& gtest_msg, const Message& user_msg) { // Appends the user message if it's non-empty. const std::string user_msg_string = user_msg.GetString(); if (user_msg_string.empty()) { return gtest_msg; } return gtest_msg + "\n" + user_msg_string; } } // namespace internal // class TestResult // Creates an empty TestResult. TestResult::TestResult() : death_test_count_(0), elapsed_time_(0) { } // D'tor. TestResult::~TestResult() { } // Returns the i-th test part result among all the results. i can // range from 0 to total_part_count() - 1. If i is not in that range, // aborts the program. const TestPartResult& TestResult::GetTestPartResult(int i) const { if (i < 0 || i >= total_part_count()) internal::posix::Abort(); return test_part_results_.at(i); } // Returns the i-th test property. i can range from 0 to // test_property_count() - 1. If i is not in that range, aborts the // program. const TestProperty& TestResult::GetTestProperty(int i) const { if (i < 0 || i >= test_property_count()) internal::posix::Abort(); return test_properties_.at(i); } // Clears the test part results. void TestResult::ClearTestPartResults() { test_part_results_.clear(); } // Adds a test part result to the list. void TestResult::AddTestPartResult(const TestPartResult& test_part_result) { test_part_results_.push_back(test_part_result); } // Adds a test property to the list. If a property with the same key as the // supplied property is already represented, the value of this test_property // replaces the old value for that key. void TestResult::RecordProperty(const std::string& xml_element, const TestProperty& test_property) { if (!ValidateTestProperty(xml_element, test_property)) { return; } internal::MutexLock lock(&test_properites_mutex_); const std::vector::iterator property_with_matching_key = std::find_if(test_properties_.begin(), test_properties_.end(), internal::TestPropertyKeyIs(test_property.key())); if (property_with_matching_key == test_properties_.end()) { test_properties_.push_back(test_property); return; } property_with_matching_key->SetValue(test_property.value()); } // The list of reserved attributes used in the element of XML // output. static const char* const kReservedTestSuitesAttributes[] = { "disabled", "errors", "failures", "name", "random_seed", "tests", "time", "timestamp" }; // The list of reserved attributes used in the element of XML // output. static const char* const kReservedTestSuiteAttributes[] = { "disabled", "errors", "failures", "name", "tests", "time" }; // The list of reserved attributes used in the element of XML output. 
// The list of reserved attributes used in the <testcase> element of XML output.
static const char* const kReservedTestCaseAttributes[] = {
  "classname", "name", "status", "time", "type_param", "value_param"
};

template <int kSize>
std::vector<std::string> ArrayAsVector(const char* const (&array)[kSize]) {
  return std::vector<std::string>(array, array + kSize);
}

static std::vector<std::string> GetReservedAttributesForElement(
    const std::string& xml_element) {
  if (xml_element == "testsuites") {
    return ArrayAsVector(kReservedTestSuitesAttributes);
  } else if (xml_element == "testsuite") {
    return ArrayAsVector(kReservedTestSuiteAttributes);
  } else if (xml_element == "testcase") {
    return ArrayAsVector(kReservedTestCaseAttributes);
  } else {
    GTEST_CHECK_(false) << "Unrecognized xml_element provided: " << xml_element;
  }
  // This code is unreachable but some compilers may not realize that.
  return std::vector<std::string>();
}

static std::string FormatWordList(const std::vector<std::string>& words) {
  Message word_list;
  for (size_t i = 0; i < words.size(); ++i) {
    if (i > 0 && words.size() > 2) {
      word_list << ", ";
    }
    if (i == words.size() - 1) {
      word_list << "and ";
    }
    word_list << "'" << words[i] << "'";
  }
  return word_list.GetString();
}

bool ValidateTestPropertyName(const std::string& property_name,
                              const std::vector<std::string>& reserved_names) {
  if (std::find(reserved_names.begin(), reserved_names.end(), property_name) !=
          reserved_names.end()) {
    ADD_FAILURE() << "Reserved key used in RecordProperty(): " << property_name
                  << " (" << FormatWordList(reserved_names)
                  << " are reserved by " << GTEST_NAME_ << ")";
    return false;
  }
  return true;
}

// Adds a failure if the key is a reserved attribute of the element named
// xml_element. Returns true if the property is valid.
bool TestResult::ValidateTestProperty(const std::string& xml_element,
                                      const TestProperty& test_property) {
  return ValidateTestPropertyName(test_property.key(),
                                  GetReservedAttributesForElement(xml_element));
}

// Clears the object.
void TestResult::Clear() {
  test_part_results_.clear();
  test_properties_.clear();
  death_test_count_ = 0;
  elapsed_time_ = 0;
}

// Returns true iff the test failed.
bool TestResult::Failed() const {
  for (int i = 0; i < total_part_count(); ++i) {
    if (GetTestPartResult(i).failed())
      return true;
  }
  return false;
}

// Returns true iff the test part fatally failed.
static bool TestPartFatallyFailed(const TestPartResult& result) {
  return result.fatally_failed();
}

// Returns true iff the test fatally failed.
bool TestResult::HasFatalFailure() const {
  return CountIf(test_part_results_, TestPartFatallyFailed) > 0;
}

// Returns true iff the test part non-fatally failed.
static bool TestPartNonfatallyFailed(const TestPartResult& result) {
  return result.nonfatally_failed();
}

// Returns true iff the test has a non-fatal failure.
bool TestResult::HasNonfatalFailure() const {
  return CountIf(test_part_results_, TestPartNonfatallyFailed) > 0;
}

// Gets the number of all test parts. This is the sum of the number
// of successful test parts and the number of failed test parts.
int TestResult::total_part_count() const {
  return static_cast<int>(test_part_results_.size());
}

// Returns the number of the test properties.
int TestResult::test_property_count() const {
  return static_cast<int>(test_properties_.size());
}

// class Test

// Creates a Test object.
// The c'tor saves the values of all Google Test flags.
Test::Test() : gtest_flag_saver_(new internal::GTestFlagSaver) {}

// The d'tor restores the values of all Google Test flags.
Test::~Test() { delete gtest_flag_saver_; }

// Sets up the test fixture.
//
// A sub-class may override this.
void Test::SetUp() {}

// Tears down the test fixture.
// // A sub-class may override this. void Test::TearDown() { } // Allows user supplied key value pairs to be recorded for later output. void Test::RecordProperty(const std::string& key, const std::string& value) { UnitTest::GetInstance()->RecordProperty(key, value); } // Allows user supplied key value pairs to be recorded for later output. void Test::RecordProperty(const std::string& key, int value) { Message value_message; value_message << value; RecordProperty(key, value_message.GetString().c_str()); } namespace internal { void ReportFailureInUnknownLocation(TestPartResult::Type result_type, const std::string& message) { // This function is a friend of UnitTest and as such has access to // AddTestPartResult. UnitTest::GetInstance()->AddTestPartResult( result_type, NULL, // No info about the source file where the exception occurred. -1, // We have no info on which line caused the exception. message, ""); // No stack trace, either. } } // namespace internal // Google Test requires all tests in the same test case to use the same test // fixture class. This function checks if the current test has the // same fixture class as the first test in the current test case. If // yes, it returns true; otherwise it generates a Google Test failure and // returns false. bool Test::HasSameFixtureClass() { internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); const TestCase* const test_case = impl->current_test_case(); // Info about the first test in the current test case. const TestInfo* const first_test_info = test_case->test_info_list()[0]; const internal::TypeId first_fixture_id = first_test_info->fixture_class_id_; const char* const first_test_name = first_test_info->name(); // Info about the current test. const TestInfo* const this_test_info = impl->current_test_info(); const internal::TypeId this_fixture_id = this_test_info->fixture_class_id_; const char* const this_test_name = this_test_info->name(); if (this_fixture_id != first_fixture_id) { // Is the first test defined using TEST? const bool first_is_TEST = first_fixture_id == internal::GetTestTypeId(); // Is this test defined using TEST? const bool this_is_TEST = this_fixture_id == internal::GetTestTypeId(); if (first_is_TEST || this_is_TEST) { // The user mixed TEST and TEST_F in this test case - we'll tell // him/her how to fix it. // Gets the name of the TEST and the name of the TEST_F. Note // that first_is_TEST and this_is_TEST cannot both be true, as // the fixture IDs are different for the two tests. const char* const TEST_name = first_is_TEST ? first_test_name : this_test_name; const char* const TEST_F_name = first_is_TEST ? this_test_name : first_test_name; ADD_FAILURE() << "All tests in the same test case must use the same test fixture\n" << "class, so mixing TEST_F and TEST in the same test case is\n" << "illegal. In test case " << this_test_info->test_case_name() << ",\n" << "test " << TEST_F_name << " is defined using TEST_F but\n" << "test " << TEST_name << " is defined using TEST. You probably\n" << "want to change the TEST to TEST_F or move it to another test\n" << "case."; } else { // The user defined two fixture classes with the same name in // two namespaces - we'll tell him/her how to fix it. ADD_FAILURE() << "All tests in the same test case must use the same test fixture\n" << "class. However, in test case " << this_test_info->test_case_name() << ",\n" << "you defined test " << first_test_name << " and test " << this_test_name << "\n" << "using two different test fixture classes. 
This can happen if\n"
          << "the two classes are from different namespaces or translation\n"
          << "units and have the same name. You should probably rename one\n"
          << "of the classes to put the tests into different test cases.";
    }
    return false;
  }
  return true;
}

#if GTEST_HAS_SEH

// Adds an "exception thrown" fatal failure to the current test. This
// function returns its result via an output parameter pointer because VC++
// prohibits creation of objects with destructors on stack in functions
// using __try (see error C2712).
static std::string* FormatSehExceptionMessage(DWORD exception_code,
                                              const char* location) {
  Message message;
  message << "SEH exception with code 0x" << std::setbase(16) << exception_code
          << std::setbase(10) << " thrown in " << location << ".";
  return new std::string(message.GetString());
}

#endif  // GTEST_HAS_SEH

namespace internal {

#if GTEST_HAS_EXCEPTIONS

// Adds an "exception thrown" fatal failure to the current test.
static std::string FormatCxxExceptionMessage(const char* description,
                                             const char* location) {
  Message message;
  if (description != NULL) {
    message << "C++ exception with description \"" << description << "\"";
  } else {
    message << "Unknown C++ exception";
  }
  message << " thrown in " << location << ".";
  return message.GetString();
}

static std::string PrintTestPartResultToString(
    const TestPartResult& test_part_result);

GoogleTestFailureException::GoogleTestFailureException(
    const TestPartResult& failure)
    : ::std::runtime_error(PrintTestPartResultToString(failure).c_str()) {}

#endif  // GTEST_HAS_EXCEPTIONS

// We put these helper functions in the internal namespace as IBM's xlC
// compiler rejects the code if they were declared static.

// Runs the given method and handles SEH exceptions it throws, when
// SEH is supported; returns the 0-value for type Result in case of an
// SEH exception. (Microsoft compilers cannot handle SEH and C++
// exceptions in the same function. Therefore, we provide a separate
// wrapper function for handling SEH exceptions.)
template <class T, typename Result>
Result HandleSehExceptionsInMethodIfSupported(
    T* object, Result (T::*method)(), const char* location) {
#if GTEST_HAS_SEH
  __try {
    return (object->*method)();
  } __except (internal::UnitTestOptions::GTestShouldProcessSEH(  // NOLINT
      GetExceptionCode())) {
    // We create the exception message on the heap because VC++ prohibits
    // creation of objects with destructors on stack in functions using __try
    // (see error C2712).
    std::string* exception_message = FormatSehExceptionMessage(
        GetExceptionCode(), location);
    internal::ReportFailureInUnknownLocation(TestPartResult::kFatalFailure,
                                             *exception_message);
    delete exception_message;
    return static_cast<Result>(0);
  }
#else
  (void)location;
  return (object->*method)();
#endif  // GTEST_HAS_SEH
}

// Runs the given method and catches and reports C++ and/or SEH-style
// exceptions, if they are supported; returns the 0-value for type
// Result in case of an SEH exception.
template <class T, typename Result>
Result HandleExceptionsInMethodIfSupported(
    T* object, Result (T::*method)(), const char* location) {
  // NOTE: The user code can affect the way in which Google Test handles
  // exceptions by setting GTEST_FLAG(catch_exceptions), but only before
  // RUN_ALL_TESTS() starts. It is technically possible to check the flag
  // after the exception is caught and either report or re-throw the
  // exception based on the flag's value:
  //
  // try {
  //   // Perform the test method.
  // } catch (...) {
  //   if (GTEST_FLAG(catch_exceptions))
  //     // Report the exception as failure.
// else // throw; // Re-throws the original exception. // } // // However, the purpose of this flag is to allow the program to drop into // the debugger when the exception is thrown. On most platforms, once the // control enters the catch block, the exception origin information is // lost and the debugger will stop the program at the point of the // re-throw in this function -- instead of at the point of the original // throw statement in the code under test. For this reason, we perform // the check early, sacrificing the ability to affect Google Test's // exception handling in the method where the exception is thrown. if (internal::GetUnitTestImpl()->catch_exceptions()) { #if GTEST_HAS_EXCEPTIONS try { return HandleSehExceptionsInMethodIfSupported(object, method, location); } catch (const internal::GoogleTestFailureException&) { // NOLINT // This exception type can only be thrown by a failed Google // Test assertion with the intention of letting another testing // framework catch it. Therefore we just re-throw it. throw; } catch (const std::exception& e) { // NOLINT internal::ReportFailureInUnknownLocation( TestPartResult::kFatalFailure, FormatCxxExceptionMessage(e.what(), location)); } catch (...) { // NOLINT internal::ReportFailureInUnknownLocation( TestPartResult::kFatalFailure, FormatCxxExceptionMessage(NULL, location)); } return static_cast(0); #else return HandleSehExceptionsInMethodIfSupported(object, method, location); #endif // GTEST_HAS_EXCEPTIONS } else { return (object->*method)(); } } } // namespace internal // Runs the test and updates the test result. void Test::Run() { if (!HasSameFixtureClass()) return; internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported(this, &Test::SetUp, "SetUp()"); // We will run the test only if SetUp() was successful. if (!HasFatalFailure()) { impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &Test::TestBody, "the test body"); } // However, we want to clean up as much as possible. Hence we will // always call TearDown(), even if SetUp() or the test body has // failed. impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &Test::TearDown, "TearDown()"); } // Returns true iff the current test has a fatal failure. bool Test::HasFatalFailure() { return internal::GetUnitTestImpl()->current_test_result()->HasFatalFailure(); } // Returns true iff the current test has a non-fatal failure. bool Test::HasNonfatalFailure() { return internal::GetUnitTestImpl()->current_test_result()-> HasNonfatalFailure(); } // class TestInfo // Constructs a TestInfo object. It assumes ownership of the test factory // object. TestInfo::TestInfo(const std::string& a_test_case_name, const std::string& a_name, const char* a_type_param, const char* a_value_param, internal::TypeId fixture_class_id, internal::TestFactoryBase* factory) : test_case_name_(a_test_case_name), name_(a_name), type_param_(a_type_param ? new std::string(a_type_param) : NULL), value_param_(a_value_param ? new std::string(a_value_param) : NULL), fixture_class_id_(fixture_class_id), should_run_(false), is_disabled_(false), matches_filter_(false), factory_(factory), result_() {} // Destructs a TestInfo object. TestInfo::~TestInfo() { delete factory_; } namespace internal { // Creates a new TestInfo object and registers it with Google Test; // returns the created object. 
// // Arguments: // // test_case_name: name of the test case // name: name of the test // type_param: the name of the test's type parameter, or NULL if // this is not a typed or a type-parameterized test. // value_param: text representation of the test's value parameter, // or NULL if this is not a value-parameterized test. // fixture_class_id: ID of the test fixture class // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case // factory: pointer to the factory that creates a test object. // The newly created TestInfo instance will assume // ownership of the factory object. TestInfo* MakeAndRegisterTestInfo( const char* test_case_name, const char* name, const char* type_param, const char* value_param, TypeId fixture_class_id, SetUpTestCaseFunc set_up_tc, TearDownTestCaseFunc tear_down_tc, TestFactoryBase* factory) { TestInfo* const test_info = new TestInfo(test_case_name, name, type_param, value_param, fixture_class_id, factory); GetUnitTestImpl()->AddTestInfo(set_up_tc, tear_down_tc, test_info); return test_info; } #if GTEST_HAS_PARAM_TEST void ReportInvalidTestCaseType(const char* test_case_name, const char* file, int line) { Message errors; errors << "Attempted redefinition of test case " << test_case_name << ".\n" << "All tests in the same test case must use the same test fixture\n" << "class. However, in test case " << test_case_name << ", you tried\n" << "to define a test using a fixture class different from the one\n" << "used earlier. This can happen if the two fixture classes are\n" << "from different namespaces and have the same name. You should\n" << "probably rename one of the classes to put the tests into different\n" << "test cases."; fprintf(stderr, "%s %s", FormatFileLocation(file, line).c_str(), errors.GetString().c_str()); } #endif // GTEST_HAS_PARAM_TEST } // namespace internal namespace { // A predicate that checks the test name of a TestInfo against a known // value. // // This is used for implementation of the TestCase class only. We put // it in the anonymous namespace to prevent polluting the outer // namespace. // // TestNameIs is copyable. class TestNameIs { public: // Constructor. // // TestNameIs has NO default constructor. explicit TestNameIs(const char* name) : name_(name) {} // Returns true iff the test name of test_info matches name_. bool operator()(const TestInfo * test_info) const { return test_info && test_info->name() == name_; } private: std::string name_; }; } // namespace namespace internal { // This method expands all parameterized tests registered with macros TEST_P // and INSTANTIATE_TEST_CASE_P into regular tests and registers those. // This will be done just once during the program runtime. void UnitTestImpl::RegisterParameterizedTests() { #if GTEST_HAS_PARAM_TEST if (!parameterized_tests_registered_) { parameterized_test_registry_.RegisterTests(); parameterized_tests_registered_ = true; } #endif } } // namespace internal // Creates the test object, runs it, records its result, and then // deletes it. void TestInfo::Run() { if (!should_run_) return; // Tells UnitTest where to store test result. internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->set_current_test_info(this); TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater(); // Notifies the unit test event listeners that a test is about to start. 
repeater->OnTestStart(*this); const TimeInMillis start = internal::GetTimeInMillis(); impl->os_stack_trace_getter()->UponLeavingGTest(); // Creates the test object. Test* const test = internal::HandleExceptionsInMethodIfSupported( factory_, &internal::TestFactoryBase::CreateTest, "the test fixture's constructor"); // Runs the test only if the test object was created and its // constructor didn't generate a fatal failure. if ((test != NULL) && !Test::HasFatalFailure()) { // This doesn't throw as all user code that can throw are wrapped into // exception handling code. test->Run(); } // Deletes the test object. impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( test, &Test::DeleteSelf_, "the test fixture's destructor"); result_.set_elapsed_time(internal::GetTimeInMillis() - start); // Notifies the unit test event listener that a test has just finished. repeater->OnTestEnd(*this); // Tells UnitTest to stop associating assertion results to this // test. impl->set_current_test_info(NULL); } // class TestCase // Gets the number of successful tests in this test case. int TestCase::successful_test_count() const { return CountIf(test_info_list_, TestPassed); } // Gets the number of failed tests in this test case. int TestCase::failed_test_count() const { return CountIf(test_info_list_, TestFailed); } // Gets the number of disabled tests that will be reported in the XML report. int TestCase::reportable_disabled_test_count() const { return CountIf(test_info_list_, TestReportableDisabled); } // Gets the number of disabled tests in this test case. int TestCase::disabled_test_count() const { return CountIf(test_info_list_, TestDisabled); } // Gets the number of tests to be printed in the XML report. int TestCase::reportable_test_count() const { return CountIf(test_info_list_, TestReportable); } // Get the number of tests in this test case that should run. int TestCase::test_to_run_count() const { return CountIf(test_info_list_, ShouldRunTest); } // Gets the number of all tests. int TestCase::total_test_count() const { return static_cast(test_info_list_.size()); } // Creates a TestCase with the given name. // // Arguments: // // name: name of the test case // a_type_param: the name of the test case's type parameter, or NULL if // this is not a typed or a type-parameterized test case. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase::TestCase(const char* a_name, const char* a_type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc) : name_(a_name), type_param_(a_type_param ? new std::string(a_type_param) : NULL), set_up_tc_(set_up_tc), tear_down_tc_(tear_down_tc), should_run_(false), elapsed_time_(0) { } // Destructor of TestCase. TestCase::~TestCase() { // Deletes every Test in the collection. ForEach(test_info_list_, internal::Delete); } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. const TestInfo* TestCase::GetTestInfo(int i) const { const int index = GetElementOr(test_indices_, i, -1); return index < 0 ? NULL : test_info_list_[index]; } // Returns the i-th test among all the tests. i can range from 0 to // total_test_count() - 1. If i is not in that range, returns NULL. TestInfo* TestCase::GetMutableTestInfo(int i) { const int index = GetElementOr(test_indices_, i, -1); return index < 0 ? 
NULL : test_info_list_[index]; } // Adds a test to this test case. Will delete the test upon // destruction of the TestCase object. void TestCase::AddTestInfo(TestInfo * test_info) { test_info_list_.push_back(test_info); test_indices_.push_back(static_cast(test_indices_.size())); } // Runs every test in this TestCase. void TestCase::Run() { if (!should_run_) return; internal::UnitTestImpl* const impl = internal::GetUnitTestImpl(); impl->set_current_test_case(this); TestEventListener* repeater = UnitTest::GetInstance()->listeners().repeater(); repeater->OnTestCaseStart(*this); impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &TestCase::RunSetUpTestCase, "SetUpTestCase()"); const internal::TimeInMillis start = internal::GetTimeInMillis(); for (int i = 0; i < total_test_count(); i++) { GetMutableTestInfo(i)->Run(); } elapsed_time_ = internal::GetTimeInMillis() - start; impl->os_stack_trace_getter()->UponLeavingGTest(); internal::HandleExceptionsInMethodIfSupported( this, &TestCase::RunTearDownTestCase, "TearDownTestCase()"); repeater->OnTestCaseEnd(*this); impl->set_current_test_case(NULL); } // Clears the results of all tests in this test case. void TestCase::ClearResult() { ad_hoc_test_result_.Clear(); ForEach(test_info_list_, TestInfo::ClearTestResult); } // Shuffles the tests in this test case. void TestCase::ShuffleTests(internal::Random* random) { Shuffle(random, &test_indices_); } // Restores the test order to before the first shuffle. void TestCase::UnshuffleTests() { for (size_t i = 0; i < test_indices_.size(); i++) { test_indices_[i] = static_cast(i); } } // Formats a countable noun. Depending on its quantity, either the // singular form or the plural form is used. e.g. // // FormatCountableNoun(1, "formula", "formuli") returns "1 formula". // FormatCountableNoun(5, "book", "books") returns "5 books". static std::string FormatCountableNoun(int count, const char * singular_form, const char * plural_form) { return internal::StreamableToString(count) + " " + (count == 1 ? singular_form : plural_form); } // Formats the count of tests. static std::string FormatTestCount(int test_count) { return FormatCountableNoun(test_count, "test", "tests"); } // Formats the count of test cases. static std::string FormatTestCaseCount(int test_case_count) { return FormatCountableNoun(test_case_count, "test case", "test cases"); } // Converts a TestPartResult::Type enum to human-friendly string // representation. Both kNonFatalFailure and kFatalFailure are translated // to "Failure", as the user usually doesn't care about the difference // between the two when viewing the test result. static const char * TestPartResultTypeToString(TestPartResult::Type type) { switch (type) { case TestPartResult::kSuccess: return "Success"; case TestPartResult::kNonFatalFailure: case TestPartResult::kFatalFailure: #ifdef _MSC_VER return "error: "; #else return "Failure\n"; #endif default: return "Unknown result type"; } } namespace internal { // Prints a TestPartResult to an std::string. static std::string PrintTestPartResultToString( const TestPartResult& test_part_result) { return (Message() << internal::FormatFileLocation(test_part_result.file_name(), test_part_result.line_number()) << " " << TestPartResultTypeToString(test_part_result.type()) << test_part_result.message()).GetString(); } // Prints a TestPartResult. 
static void PrintTestPartResult(const TestPartResult& test_part_result) { const std::string& result = PrintTestPartResultToString(test_part_result); printf("%s\n", result.c_str()); fflush(stdout); // If the test program runs in Visual Studio or a debugger, the // following statements add the test part result message to the Output // window such that the user can double-click on it to jump to the // corresponding source code location; otherwise they do nothing. #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // We don't call OutputDebugString*() on Windows Mobile, as printing // to stdout is done by OutputDebugString() there already - we don't // want the same message printed twice. ::OutputDebugStringA(result.c_str()); ::OutputDebugStringA("\n"); #endif } // class PrettyUnitTestResultPrinter enum GTestColor { COLOR_DEFAULT, COLOR_RED, COLOR_GREEN, COLOR_YELLOW }; #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // Returns the character attribute for the given color. WORD GetColorAttribute(GTestColor color) { switch (color) { case COLOR_RED: return FOREGROUND_RED; case COLOR_GREEN: return FOREGROUND_GREEN; case COLOR_YELLOW: return FOREGROUND_RED | FOREGROUND_GREEN; default: return 0; } } #else // Returns the ANSI color code for the given color. COLOR_DEFAULT is // an invalid input. const char* GetAnsiColorCode(GTestColor color) { switch (color) { case COLOR_RED: return "1"; case COLOR_GREEN: return "2"; case COLOR_YELLOW: return "3"; default: return NULL; }; } #endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE // Returns true iff Google Test should use colors in the output. bool ShouldUseColor(bool stdout_is_tty) { const char* const gtest_color = GTEST_FLAG(color).c_str(); if (String::CaseInsensitiveCStringEquals(gtest_color, "auto")) { #if GTEST_OS_WINDOWS // On Windows the TERM variable is usually not set, but the // console there does support colors. return stdout_is_tty; #else // On non-Windows platforms, we rely on the TERM variable. const char* const term = posix::GetEnv("TERM"); const bool term_supports_color = String::CStringEquals(term, "xterm") || String::CStringEquals(term, "xterm-color") || String::CStringEquals(term, "xterm-256color") || String::CStringEquals(term, "screen") || String::CStringEquals(term, "screen-256color") || String::CStringEquals(term, "linux") || String::CStringEquals(term, "cygwin"); return stdout_is_tty && term_supports_color; #endif // GTEST_OS_WINDOWS } return String::CaseInsensitiveCStringEquals(gtest_color, "yes") || String::CaseInsensitiveCStringEquals(gtest_color, "true") || String::CaseInsensitiveCStringEquals(gtest_color, "t") || String::CStringEquals(gtest_color, "1"); // We take "yes", "true", "t", and "1" as meaning "yes". If the // value is neither one of these nor "auto", we treat it as "no" to // be conservative. } // Helpers for printing colored strings to stdout. Note that on Windows, we // cannot simply emit special characters and have the terminal change colors. // This routine must actually emit the characters rather than return a string // that would be colored when printed, as can be done on Linux. void ColoredPrintf(GTestColor color, const char* fmt, ...) 
{ va_list args; va_start(args, fmt); #if GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS || GTEST_OS_IOS const bool use_color = false; #else static const bool in_color_mode = ShouldUseColor(posix::IsATTY(posix::FileNo(stdout)) != 0); const bool use_color = in_color_mode && (color != COLOR_DEFAULT); #endif // GTEST_OS_WINDOWS_MOBILE || GTEST_OS_SYMBIAN || GTEST_OS_ZOS // The '!= 0' comparison is necessary to satisfy MSVC 7.1. if (!use_color) { vprintf(fmt, args); va_end(args); return; } #if GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE const HANDLE stdout_handle = GetStdHandle(STD_OUTPUT_HANDLE); // Gets the current text color. CONSOLE_SCREEN_BUFFER_INFO buffer_info; GetConsoleScreenBufferInfo(stdout_handle, &buffer_info); const WORD old_color_attrs = buffer_info.wAttributes; // We need to flush the stream buffers into the console before each // SetConsoleTextAttribute call lest it affect the text that is already // printed but has not yet reached the console. fflush(stdout); SetConsoleTextAttribute(stdout_handle, GetColorAttribute(color) | FOREGROUND_INTENSITY); vprintf(fmt, args); fflush(stdout); // Restores the text color. SetConsoleTextAttribute(stdout_handle, old_color_attrs); #else printf("\033[0;3%sm", GetAnsiColorCode(color)); vprintf(fmt, args); printf("\033[m"); // Resets the terminal to default. #endif // GTEST_OS_WINDOWS && !GTEST_OS_WINDOWS_MOBILE va_end(args); } // Text printed in Google Test's text output and --gunit_list_tests // output to label the type parameter and value parameter for a test. static const char kTypeParamLabel[] = "TypeParam"; static const char kValueParamLabel[] = "GetParam()"; void PrintFullTestCommentIfPresent(const TestInfo& test_info) { const char* const type_param = test_info.type_param(); const char* const value_param = test_info.value_param(); if (type_param != NULL || value_param != NULL) { printf(", where "); if (type_param != NULL) { printf("%s = %s", kTypeParamLabel, type_param); if (value_param != NULL) printf(" and "); } if (value_param != NULL) { printf("%s = %s", kValueParamLabel, value_param); } } } // This class implements the TestEventListener interface. // // Class PrettyUnitTestResultPrinter is copyable. class PrettyUnitTestResultPrinter : public TestEventListener { public: PrettyUnitTestResultPrinter() {} static void PrintTestName(const char * test_case, const char * test) { printf("%s.%s", test_case, test); } // The following methods override what's in the TestEventListener class. virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration); virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test); virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestCaseStart(const TestCase& test_case); virtual void OnTestStart(const TestInfo& test_info); virtual void OnTestPartResult(const TestPartResult& result); virtual void OnTestEnd(const TestInfo& test_info); virtual void OnTestCaseEnd(const TestCase& test_case); virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test); virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) {} virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) {} private: static void PrintFailedTests(const UnitTest& unit_test); }; // Fired before each iteration of tests starts. 
void PrettyUnitTestResultPrinter::OnTestIterationStart( const UnitTest& unit_test, int iteration) { if (GTEST_FLAG(repeat) != 1) printf("\nRepeating all tests (iteration %d) . . .\n\n", iteration + 1); const char* const filter = GTEST_FLAG(filter).c_str(); // Prints the filter if it's not *. This reminds the user that some // tests may be skipped. if (!String::CStringEquals(filter, kUniversalFilter)) { ColoredPrintf(COLOR_YELLOW, "Note: %s filter = %s\n", GTEST_NAME_, filter); } if (internal::ShouldShard(kTestTotalShards, kTestShardIndex, false)) { const Int32 shard_index = Int32FromEnvOrDie(kTestShardIndex, -1); ColoredPrintf(COLOR_YELLOW, "Note: This is test shard %d of %s.\n", static_cast(shard_index) + 1, internal::posix::GetEnv(kTestTotalShards)); } if (GTEST_FLAG(shuffle)) { ColoredPrintf(COLOR_YELLOW, "Note: Randomizing tests' orders with a seed of %d .\n", unit_test.random_seed()); } ColoredPrintf(COLOR_GREEN, "[==========] "); printf("Running %s from %s.\n", FormatTestCount(unit_test.test_to_run_count()).c_str(), FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str()); fflush(stdout); } void PrettyUnitTestResultPrinter::OnEnvironmentsSetUpStart( const UnitTest& /*unit_test*/) { ColoredPrintf(COLOR_GREEN, "[----------] "); printf("Global test environment set-up.\n"); fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestCaseStart(const TestCase& test_case) { const std::string counts = FormatCountableNoun(test_case.test_to_run_count(), "test", "tests"); ColoredPrintf(COLOR_GREEN, "[----------] "); printf("%s from %s", counts.c_str(), test_case.name()); if (test_case.type_param() == NULL) { printf("\n"); } else { printf(", where %s = %s\n", kTypeParamLabel, test_case.type_param()); } fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestStart(const TestInfo& test_info) { ColoredPrintf(COLOR_GREEN, "[ RUN ] "); PrintTestName(test_info.test_case_name(), test_info.name()); printf("\n"); fflush(stdout); } // Called after an assertion failure. void PrettyUnitTestResultPrinter::OnTestPartResult( const TestPartResult& result) { // If the test part succeeded, we don't need to do anything. if (result.type() == TestPartResult::kSuccess) return; // Print failure message from the assertion (e.g. expected this and got that). PrintTestPartResult(result); fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestEnd(const TestInfo& test_info) { if (test_info.result()->Passed()) { ColoredPrintf(COLOR_GREEN, "[ OK ] "); } else { ColoredPrintf(COLOR_RED, "[ FAILED ] "); } PrintTestName(test_info.test_case_name(), test_info.name()); if (test_info.result()->Failed()) PrintFullTestCommentIfPresent(test_info); if (GTEST_FLAG(print_time)) { printf(" (%s ms)\n", internal::StreamableToString( test_info.result()->elapsed_time()).c_str()); } else { printf("\n"); } fflush(stdout); } void PrettyUnitTestResultPrinter::OnTestCaseEnd(const TestCase& test_case) { if (!GTEST_FLAG(print_time)) return; const std::string counts = FormatCountableNoun(test_case.test_to_run_count(), "test", "tests"); ColoredPrintf(COLOR_GREEN, "[----------] "); printf("%s from %s (%s ms total)\n\n", counts.c_str(), test_case.name(), internal::StreamableToString(test_case.elapsed_time()).c_str()); fflush(stdout); } void PrettyUnitTestResultPrinter::OnEnvironmentsTearDownStart( const UnitTest& /*unit_test*/) { ColoredPrintf(COLOR_GREEN, "[----------] "); printf("Global test environment tear-down\n"); fflush(stdout); } // Internal helper for printing the list of failed tests. 
void PrettyUnitTestResultPrinter::PrintFailedTests(const UnitTest& unit_test) { const int failed_test_count = unit_test.failed_test_count(); if (failed_test_count == 0) { return; } for (int i = 0; i < unit_test.total_test_case_count(); ++i) { const TestCase& test_case = *unit_test.GetTestCase(i); if (!test_case.should_run() || (test_case.failed_test_count() == 0)) { continue; } for (int j = 0; j < test_case.total_test_count(); ++j) { const TestInfo& test_info = *test_case.GetTestInfo(j); if (!test_info.should_run() || test_info.result()->Passed()) { continue; } ColoredPrintf(COLOR_RED, "[ FAILED ] "); printf("%s.%s", test_case.name(), test_info.name()); PrintFullTestCommentIfPresent(test_info); printf("\n"); } } } void PrettyUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test, int /*iteration*/) { ColoredPrintf(COLOR_GREEN, "[==========] "); printf("%s from %s ran.", FormatTestCount(unit_test.test_to_run_count()).c_str(), FormatTestCaseCount(unit_test.test_case_to_run_count()).c_str()); if (GTEST_FLAG(print_time)) { printf(" (%s ms total)", internal::StreamableToString(unit_test.elapsed_time()).c_str()); } printf("\n"); ColoredPrintf(COLOR_GREEN, "[ PASSED ] "); printf("%s.\n", FormatTestCount(unit_test.successful_test_count()).c_str()); int num_failures = unit_test.failed_test_count(); if (!unit_test.Passed()) { const int failed_test_count = unit_test.failed_test_count(); ColoredPrintf(COLOR_RED, "[ FAILED ] "); printf("%s, listed below:\n", FormatTestCount(failed_test_count).c_str()); PrintFailedTests(unit_test); printf("\n%2d FAILED %s\n", num_failures, num_failures == 1 ? "TEST" : "TESTS"); } int num_disabled = unit_test.reportable_disabled_test_count(); if (num_disabled && !GTEST_FLAG(also_run_disabled_tests)) { if (!num_failures) { printf("\n"); // Add a spacer if no FAILURE banner is displayed. } ColoredPrintf(COLOR_YELLOW, " YOU HAVE %d DISABLED %s\n\n", num_disabled, num_disabled == 1 ? "TEST" : "TESTS"); } // Ensure that Google Test output is printed before, e.g., heapchecker output. fflush(stdout); } // End PrettyUnitTestResultPrinter // class TestEventRepeater // // This class forwards events to other event listeners. class TestEventRepeater : public TestEventListener { public: TestEventRepeater() : forwarding_enabled_(true) {} virtual ~TestEventRepeater(); void Append(TestEventListener *listener); TestEventListener* Release(TestEventListener* listener); // Controls whether events will be forwarded to listeners_. Set to false // in death test child processes. bool forwarding_enabled() const { return forwarding_enabled_; } void set_forwarding_enabled(bool enable) { forwarding_enabled_ = enable; } virtual void OnTestProgramStart(const UnitTest& unit_test); virtual void OnTestIterationStart(const UnitTest& unit_test, int iteration); virtual void OnEnvironmentsSetUpStart(const UnitTest& unit_test); virtual void OnEnvironmentsSetUpEnd(const UnitTest& unit_test); virtual void OnTestCaseStart(const TestCase& test_case); virtual void OnTestStart(const TestInfo& test_info); virtual void OnTestPartResult(const TestPartResult& result); virtual void OnTestEnd(const TestInfo& test_info); virtual void OnTestCaseEnd(const TestCase& test_case); virtual void OnEnvironmentsTearDownStart(const UnitTest& unit_test); virtual void OnEnvironmentsTearDownEnd(const UnitTest& unit_test); virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration); virtual void OnTestProgramEnd(const UnitTest& unit_test); private: // Controls whether events will be forwarded to listeners_. 
// Set to false in death test child processes.
  bool forwarding_enabled_;
  // The list of listeners that receive events.
  std::vector<TestEventListener*> listeners_;

  GTEST_DISALLOW_COPY_AND_ASSIGN_(TestEventRepeater);
};

TestEventRepeater::~TestEventRepeater() {
  ForEach(listeners_, Delete<TestEventListener>);
}

void TestEventRepeater::Append(TestEventListener *listener) {
  listeners_.push_back(listener);
}

// TODO(vladl@google.com): Factor the search functionality into Vector::Find.
TestEventListener* TestEventRepeater::Release(TestEventListener *listener) {
  for (size_t i = 0; i < listeners_.size(); ++i) {
    if (listeners_[i] == listener) {
      listeners_.erase(listeners_.begin() + i);
      return listener;
    }
  }
  return NULL;
}

// Since most methods are very similar, use macros to reduce boilerplate.
// This defines a member that forwards the call to all listeners.
#define GTEST_REPEATER_METHOD_(Name, Type) \
void TestEventRepeater::Name(const Type& parameter) { \
  if (forwarding_enabled_) { \
    for (size_t i = 0; i < listeners_.size(); i++) { \
      listeners_[i]->Name(parameter); \
    } \
  } \
}
// This defines a member that forwards the call to all listeners in reverse
// order.
#define GTEST_REVERSE_REPEATER_METHOD_(Name, Type) \
void TestEventRepeater::Name(const Type& parameter) { \
  if (forwarding_enabled_) { \
    for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) { \
      listeners_[i]->Name(parameter); \
    } \
  } \
}

GTEST_REPEATER_METHOD_(OnTestProgramStart, UnitTest)
GTEST_REPEATER_METHOD_(OnEnvironmentsSetUpStart, UnitTest)
GTEST_REPEATER_METHOD_(OnTestCaseStart, TestCase)
GTEST_REPEATER_METHOD_(OnTestStart, TestInfo)
GTEST_REPEATER_METHOD_(OnTestPartResult, TestPartResult)
GTEST_REPEATER_METHOD_(OnEnvironmentsTearDownStart, UnitTest)
GTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsSetUpEnd, UnitTest)
GTEST_REVERSE_REPEATER_METHOD_(OnEnvironmentsTearDownEnd, UnitTest)
GTEST_REVERSE_REPEATER_METHOD_(OnTestEnd, TestInfo)
GTEST_REVERSE_REPEATER_METHOD_(OnTestCaseEnd, TestCase)
GTEST_REVERSE_REPEATER_METHOD_(OnTestProgramEnd, UnitTest)

#undef GTEST_REPEATER_METHOD_
#undef GTEST_REVERSE_REPEATER_METHOD_

void TestEventRepeater::OnTestIterationStart(const UnitTest& unit_test,
                                             int iteration) {
  if (forwarding_enabled_) {
    for (size_t i = 0; i < listeners_.size(); i++) {
      listeners_[i]->OnTestIterationStart(unit_test, iteration);
    }
  }
}

void TestEventRepeater::OnTestIterationEnd(const UnitTest& unit_test,
                                           int iteration) {
  if (forwarding_enabled_) {
    for (int i = static_cast<int>(listeners_.size()) - 1; i >= 0; i--) {
      listeners_[i]->OnTestIterationEnd(unit_test, iteration);
    }
  }
}

// End TestEventRepeater

// This class generates an XML output file.
class XmlUnitTestResultPrinter : public EmptyTestEventListener {
 public:
  explicit XmlUnitTestResultPrinter(const char* output_file);

  virtual void OnTestIterationEnd(const UnitTest& unit_test, int iteration);

 private:
  // Is c a whitespace character that is normalized to a space character
  // when it appears in an XML attribute value?
  static bool IsNormalizableWhitespace(char c) {
    return c == 0x9 || c == 0xA || c == 0xD;
  }

  // May c appear in a well-formed XML document?
  static bool IsValidXmlCharacter(char c) {
    return IsNormalizableWhitespace(c) || c >= 0x20;
  }

  // Returns an XML-escaped copy of the input string str. If
  // is_attribute is true, the text is meant to appear as an attribute
  // value, and normalizable whitespace is preserved by replacing it
  // with character references.
  static std::string EscapeXml(const std::string& str, bool is_attribute);

  // Returns the given string with all characters invalid in XML removed.
static std::string RemoveInvalidXmlCharacters(const std::string& str); // Convenience wrapper around EscapeXml when str is an attribute value. static std::string EscapeXmlAttribute(const std::string& str) { return EscapeXml(str, true); } // Convenience wrapper around EscapeXml when str is not an attribute value. static std::string EscapeXmlText(const char* str) { return EscapeXml(str, false); } // Verifies that the given attribute belongs to the given element and // streams the attribute as XML. static void OutputXmlAttribute(std::ostream* stream, const std::string& element_name, const std::string& name, const std::string& value); // Streams an XML CDATA section, escaping invalid CDATA sequences as needed. static void OutputXmlCDataSection(::std::ostream* stream, const char* data); // Streams an XML representation of a TestInfo object. static void OutputXmlTestInfo(::std::ostream* stream, const char* test_case_name, const TestInfo& test_info); // Prints an XML representation of a TestCase object static void PrintXmlTestCase(::std::ostream* stream, const TestCase& test_case); // Prints an XML summary of unit_test to output stream out. static void PrintXmlUnitTest(::std::ostream* stream, const UnitTest& unit_test); // Produces a string representing the test properties in a result as space // delimited XML attributes based on the property key="value" pairs. // When the std::string is not empty, it includes a space at the beginning, // to delimit this attribute from prior attributes. static std::string TestPropertiesAsXmlAttributes(const TestResult& result); // The output file. const std::string output_file_; GTEST_DISALLOW_COPY_AND_ASSIGN_(XmlUnitTestResultPrinter); }; // Creates a new XmlUnitTestResultPrinter. XmlUnitTestResultPrinter::XmlUnitTestResultPrinter(const char* output_file) : output_file_(output_file) { if (output_file_.c_str() == NULL || output_file_.empty()) { fprintf(stderr, "XML output file may not be null\n"); fflush(stderr); exit(EXIT_FAILURE); } } // Called after the unit test ends. void XmlUnitTestResultPrinter::OnTestIterationEnd(const UnitTest& unit_test, int /*iteration*/) { FILE* xmlout = NULL; FilePath output_file(output_file_); FilePath output_dir(output_file.RemoveFileName()); if (output_dir.CreateDirectoriesRecursively()) { xmlout = posix::FOpen(output_file_.c_str(), "w"); } if (xmlout == NULL) { // TODO(wan): report the reason of the failure. // // We don't do it for now as: // // 1. There is no urgent need for it. // 2. It's a bit involved to make the errno variable thread-safe on // all three operating systems (Linux, Windows, and Mac OS). // 3. To interpret the meaning of errno in a thread-safe way, // we need the strerror_r() function, which is not available on // Windows. fprintf(stderr, "Unable to open file \"%s\"\n", output_file_.c_str()); fflush(stderr); exit(EXIT_FAILURE); } std::stringstream stream; PrintXmlUnitTest(&stream, unit_test); fprintf(xmlout, "%s", StringStreamToString(&stream).c_str()); fclose(xmlout); } // Returns an XML-escaped copy of the input string str. If is_attribute // is true, the text is meant to appear as an attribute value, and // normalizable whitespace is preserved by replacing it with character // references. // // Invalid XML characters in str, if any, are stripped from the output. // It is expected that most, if not all, of the text processed by this // module will consist of ordinary English text. 
// If this module is ever modified to produce version 1.1 XML output,
// most invalid characters can be retained using character references.
// TODO(wan): It might be nice to have a minimally invasive, human-readable
// escaping scheme for invalid characters, rather than dropping them.
std::string XmlUnitTestResultPrinter::EscapeXml(
    const std::string& str, bool is_attribute) {
  Message m;

  for (size_t i = 0; i < str.size(); ++i) {
    const char ch = str[i];
    switch (ch) {
      case '<':
        m << "&lt;";
        break;
      case '>':
        m << "&gt;";
        break;
      case '&':
        m << "&amp;";
        break;
      case '\'':
        if (is_attribute)
          m << "&apos;";
        else
          m << '\'';
        break;
      case '"':
        if (is_attribute)
          m << "&quot;";
        else
          m << '"';
        break;
      default:
        if (IsValidXmlCharacter(ch)) {
          if (is_attribute && IsNormalizableWhitespace(ch))
            m << "&#x" << String::FormatByte(static_cast<unsigned char>(ch))
              << ";";
          else
            m << ch;
        }
        break;
    }
  }

  return m.GetString();
}

// Returns the given string with all characters invalid in XML removed.
// Currently invalid characters are dropped from the string. An
// alternative is to replace them with certain characters such as . or ?.
std::string XmlUnitTestResultPrinter::RemoveInvalidXmlCharacters(
    const std::string& str) {
  std::string output;
  output.reserve(str.size());
  for (std::string::const_iterator it = str.begin(); it != str.end(); ++it)
    if (IsValidXmlCharacter(*it))
      output.push_back(*it);
  return output;
}

// The following routines generate an XML representation of a UnitTest
// object.
//
// This is how Google Test concepts map to the DTD:
//
// <testsuites name="AllTests">        <-- corresponds to a UnitTest object
//   <testsuite name="testcase-name">  <-- corresponds to a TestCase object
//     <testcase name="test-name">     <-- corresponds to a TestInfo object
//       <failure message="...">...</failure>
//       <failure message="...">...</failure>
//       <failure message="...">...</failure>
//                                     <-- individual assertion failures
//     </testcase>
//   </testsuite>
// </testsuites>

// Formats the given time in milliseconds as seconds.
std::string FormatTimeInMillisAsSeconds(TimeInMillis ms) {
  ::std::stringstream ss;
  ss << ms/1000.0;
  return ss.str();
}

// Converts the given epoch time in milliseconds to a date string in the ISO
// 8601 format, without the timezone information.
std::string FormatEpochTimeInMillisAsIso8601(TimeInMillis ms) {
  // Using non-reentrant version as localtime_r is not portable.
  time_t seconds = static_cast<time_t>(ms / 1000);
#ifdef _MSC_VER
# pragma warning(push)          // Saves the current warning state.
# pragma warning(disable:4996)  // Temporarily disables warning 4996
                                // (function or variable may be unsafe).
  const struct tm* const time_struct = localtime(&seconds);  // NOLINT
# pragma warning(pop)           // Restores the warning state again.
#else
  const struct tm* const time_struct = localtime(&seconds);  // NOLINT
#endif
  if (time_struct == NULL)
    return "";  // Invalid ms value

  // YYYY-MM-DDThh:mm:ss
  return StreamableToString(time_struct->tm_year + 1900) + "-" +
      String::FormatIntWidth2(time_struct->tm_mon + 1) + "-" +
      String::FormatIntWidth2(time_struct->tm_mday) + "T" +
      String::FormatIntWidth2(time_struct->tm_hour) + ":" +
      String::FormatIntWidth2(time_struct->tm_min) + ":" +
      String::FormatIntWidth2(time_struct->tm_sec);
}
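// Worked examples (illustrative values only): EscapeXml("a<b & \"c\"", true)
// yields "a&lt;b &amp; &quot;c&quot;"; FormatTimeInMillisAsSeconds(2500)
// yields "2.5"; FormatEpochTimeInMillisAsIso8601(0) yields
// "1970-01-01T00:00:00" when the local time zone is UTC.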
// Streams an XML CDATA section, escaping invalid CDATA sequences as needed.
void XmlUnitTestResultPrinter::OutputXmlCDataSection(::std::ostream* stream,
                                                     const char* data) {
  const char* segment = data;
  *stream << "<![CDATA[";
  for (;;) {
    const char* const next_segment = strstr(segment, "]]>");
    if (next_segment != NULL) {
      stream->write(
          segment, static_cast<std::streamsize>(next_segment - segment));
      *stream << "]]>]]&gt;<![CDATA[";
      segment = next_segment + strlen("]]>");
    } else {
      *stream << segment;
      break;
    }
  }
  *stream << "]]>";
}

void XmlUnitTestResultPrinter::OutputXmlAttribute(
    std::ostream* stream,
    const std::string& element_name,
    const std::string& name,
    const std::string& value) {
  const std::vector<std::string>& allowed_names =
      GetReservedAttributesForElement(element_name);

  GTEST_CHECK_(std::find(allowed_names.begin(), allowed_names.end(), name) !=
                   allowed_names.end())
      << "Attribute " << name << " is not allowed for element <"
      << element_name << ">.";

  *stream << " " << name << "=\"" << EscapeXmlAttribute(value) << "\"";
}

// Prints an XML representation of a TestInfo object.
// TODO(wan): There is also value in printing properties with the plain printer.
void XmlUnitTestResultPrinter::OutputXmlTestInfo(::std::ostream* stream,
                                                 const char* test_case_name,
                                                 const TestInfo& test_info) {
  const TestResult& result = *test_info.result();
  const std::string kTestcase = "testcase";

  *stream << "    <testcase";
  OutputXmlAttribute(stream, kTestcase, "name", test_info.name());

  if (test_info.value_param() != NULL) {
    OutputXmlAttribute(stream, kTestcase, "value_param",
                       test_info.value_param());
  }
  if (test_info.type_param() != NULL) {
    OutputXmlAttribute(stream, kTestcase, "type_param", test_info.type_param());
  }

  OutputXmlAttribute(stream, kTestcase, "status",
                     test_info.should_run() ? "run" : "notrun");
  OutputXmlAttribute(stream, kTestcase, "time",
                     FormatTimeInMillisAsSeconds(result.elapsed_time()));
  OutputXmlAttribute(stream, kTestcase, "classname", test_case_name);
  *stream << TestPropertiesAsXmlAttributes(result);

  int failures = 0;
  for (int i = 0; i < result.total_part_count(); ++i) {
    const TestPartResult& part = result.GetTestPartResult(i);
    if (part.failed()) {
      if (++failures == 1) {
        *stream << ">\n";
      }
      const string location = internal::FormatCompilerIndependentFileLocation(
          part.file_name(), part.line_number());
      const string summary = location + "\n" + part.summary();
      *stream << "      <failure message=\""
              << EscapeXmlAttribute(summary.c_str())
              << "\" type=\"\">";
      const string detail = location + "\n" + part.message();
      OutputXmlCDataSection(stream, RemoveInvalidXmlCharacters(detail).c_str());
      *stream << "</failure>\n";
    }
  }

  if (failures == 0)
    *stream << " />\n";
  else
    *stream << "    </testcase>\n";
}

// Prints an XML representation of a TestCase object
void XmlUnitTestResultPrinter::PrintXmlTestCase(std::ostream* stream,
                                                const TestCase& test_case) {
  const std::string kTestsuite = "testsuite";
  *stream << "  <" << kTestsuite;
  OutputXmlAttribute(stream, kTestsuite, "name", test_case.name());
  OutputXmlAttribute(stream, kTestsuite, "tests",
                     StreamableToString(test_case.reportable_test_count()));
  OutputXmlAttribute(stream, kTestsuite, "failures",
                     StreamableToString(test_case.failed_test_count()));
  OutputXmlAttribute(
      stream, kTestsuite, "disabled",
      StreamableToString(test_case.reportable_disabled_test_count()));
  OutputXmlAttribute(stream, kTestsuite, "errors", "0");
  OutputXmlAttribute(stream, kTestsuite, "time",
                     FormatTimeInMillisAsSeconds(test_case.elapsed_time()));
  *stream << TestPropertiesAsXmlAttributes(test_case.ad_hoc_test_result())
          << ">\n";

  for (int i = 0; i < test_case.total_test_count(); ++i) {
    if (test_case.GetTestInfo(i)->is_reportable())
      OutputXmlTestInfo(stream, test_case.name(), *test_case.GetTestInfo(i));
  }
  *stream << "  </" << kTestsuite << ">\n";
}
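// Shape of the XML emitted for one suite by the routines above (illustrative,
// the names and values are made up):
//
//   <testsuite name="FooTest" tests="1" failures="1" disabled="0" errors="0" time="0.012">
//     <testcase name="DoesBar" status="run" time="0.01" classname="FooTest">
//       <failure message="..." type=""><![CDATA[...]]></failure>
//     </testcase>
//   </testsuite>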
// Prints an XML summary of unit_test to output stream out.
void XmlUnitTestResultPrinter::PrintXmlUnitTest(std::ostream* stream,
                                                const UnitTest& unit_test) {
  const std::string kTestsuites = "testsuites";

  *stream << "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
  *stream << "<" << kTestsuites;

  OutputXmlAttribute(stream, kTestsuites, "tests",
                     StreamableToString(unit_test.reportable_test_count()));
  OutputXmlAttribute(stream, kTestsuites, "failures",
                     StreamableToString(unit_test.failed_test_count()));
  OutputXmlAttribute(
      stream, kTestsuites, "disabled",
      StreamableToString(unit_test.reportable_disabled_test_count()));
  OutputXmlAttribute(stream, kTestsuites, "errors", "0");
  OutputXmlAttribute(
      stream, kTestsuites, "timestamp",
      FormatEpochTimeInMillisAsIso8601(unit_test.start_timestamp()));
  OutputXmlAttribute(stream, kTestsuites, "time",
                     FormatTimeInMillisAsSeconds(unit_test.elapsed_time()));

  if (GTEST_FLAG(shuffle)) {
    OutputXmlAttribute(stream, kTestsuites, "random_seed",
                       StreamableToString(unit_test.random_seed()));
  }

  *stream << TestPropertiesAsXmlAttributes(unit_test.ad_hoc_test_result());

  OutputXmlAttribute(stream, kTestsuites, "name", "AllTests");
  *stream << ">\n";

  for (int i = 0; i < unit_test.total_test_case_count(); ++i) {
    if (unit_test.GetTestCase(i)->reportable_test_count() > 0)
      PrintXmlTestCase(stream, *unit_test.GetTestCase(i));
  }
  *stream << "</" << kTestsuites << ">\n";
}

// Produces a string representing the test properties in a result as space
// delimited XML attributes based on the property key="value" pairs.
std::string XmlUnitTestResultPrinter::TestPropertiesAsXmlAttributes(
    const TestResult& result) {
  Message attributes;
  for (int i = 0; i < result.test_property_count(); ++i) {
    const TestProperty& property = result.GetTestProperty(i);
    attributes << " " << property.key() << "="
        << "\"" << EscapeXmlAttribute(property.value()) << "\"";
  }
  return attributes.GetString();
}

// End XmlUnitTestResultPrinter

#if GTEST_CAN_STREAM_RESULTS_

// Checks if str contains '=', '&', '%' or '\n' characters. If yes,
// replaces them by "%xx" where xx is their hexadecimal value. For
// example, replaces "=" with "%3D". This algorithm is O(strlen(str))
// in both time and space -- important as the input str may contain an
// arbitrarily long test failure message and stack trace.
string StreamingListener::UrlEncode(const char* str) {
  string result;
  result.reserve(strlen(str) + 1);
  for (char ch = *str; ch != '\0'; ch = *++str) {
    switch (ch) {
      case '%':
      case '=':
      case '&':
      case '\n':
        result.append("%" + String::FormatByte(static_cast<unsigned char>(ch)));
        break;
      default:
        result.push_back(ch);
        break;
    }
  }
  return result;
}

void StreamingListener::SocketWriter::MakeConnection() {
  GTEST_CHECK_(sockfd_ == -1)
      << "MakeConnection() can't be called when there is already a connection.";

  addrinfo hints;
  memset(&hints, 0, sizeof(hints));
  hints.ai_family = AF_UNSPEC;    // To allow both IPv4 and IPv6 addresses.
  hints.ai_socktype = SOCK_STREAM;
  addrinfo* servinfo = NULL;

  // Use the getaddrinfo() to get a linked list of IP addresses for
  // the given host name.
  const int error_num = getaddrinfo(
      host_name_.c_str(), port_num_.c_str(), &hints, &servinfo);
  if (error_num != 0) {
    GTEST_LOG_(WARNING) << "stream_result_to: getaddrinfo() failed: "
                        << gai_strerror(error_num);
  }

  // Loop through all the results and connect to the first we can.
  for (addrinfo* cur_addr = servinfo; sockfd_ == -1 && cur_addr != NULL;
       cur_addr = cur_addr->ai_next) {
    sockfd_ = socket(
        cur_addr->ai_family, cur_addr->ai_socktype, cur_addr->ai_protocol);
    if (sockfd_ != -1) {
      // Connect the client socket to the server socket.
if (connect(sockfd_, cur_addr->ai_addr, cur_addr->ai_addrlen) == -1) { close(sockfd_); sockfd_ = -1; } } } freeaddrinfo(servinfo); // all done with this structure if (sockfd_ == -1) { GTEST_LOG_(WARNING) << "stream_result_to: failed to connect to " << host_name_ << ":" << port_num_; } } // End of class Streaming Listener #endif // GTEST_CAN_STREAM_RESULTS__ // Class ScopedTrace // Pushes the given source file location and message onto a per-thread // trace stack maintained by Google Test. ScopedTrace::ScopedTrace(const char* file, int line, const Message& message) GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) { TraceInfo trace; trace.file = file; trace.line = line; trace.message = message.GetString(); UnitTest::GetInstance()->PushGTestTrace(trace); } // Pops the info pushed by the c'tor. ScopedTrace::~ScopedTrace() GTEST_LOCK_EXCLUDED_(&UnitTest::mutex_) { UnitTest::GetInstance()->PopGTestTrace(); } // class OsStackTraceGetter // Returns the current OS stack trace as an std::string. Parameters: // // max_depth - the maximum number of stack frames to be included // in the trace. // skip_count - the number of top frames to be skipped; doesn't count // against max_depth. // string OsStackTraceGetter::CurrentStackTrace(int /* max_depth */, int /* skip_count */) GTEST_LOCK_EXCLUDED_(mutex_) { return ""; } void OsStackTraceGetter::UponLeavingGTest() GTEST_LOCK_EXCLUDED_(mutex_) { } const char* const OsStackTraceGetter::kElidedFramesMarker = "... " GTEST_NAME_ " internal frames ..."; // A helper class that creates the premature-exit file in its // constructor and deletes the file in its destructor. class ScopedPrematureExitFile { public: explicit ScopedPrematureExitFile(const char* premature_exit_filepath) : premature_exit_filepath_(premature_exit_filepath) { // If a path to the premature-exit file is specified... if (premature_exit_filepath != NULL && *premature_exit_filepath != '\0') { // create the file with a single "0" character in it. I/O // errors are ignored as there's nothing better we can do and we // don't want to fail the test because of this. FILE* pfile = posix::FOpen(premature_exit_filepath, "w"); fwrite("0", 1, 1, pfile); fclose(pfile); } } ~ScopedPrematureExitFile() { if (premature_exit_filepath_ != NULL && *premature_exit_filepath_ != '\0') { remove(premature_exit_filepath_); } } private: const char* const premature_exit_filepath_; GTEST_DISALLOW_COPY_AND_ASSIGN_(ScopedPrematureExitFile); }; } // namespace internal // class TestEventListeners TestEventListeners::TestEventListeners() : repeater_(new internal::TestEventRepeater()), default_result_printer_(NULL), default_xml_generator_(NULL) { } TestEventListeners::~TestEventListeners() { delete repeater_; } // Returns the standard listener responsible for the default console // output. Can be removed from the listeners list to shut down default // console output. Note that removing this object from the listener list // with Release transfers its ownership to the user. void TestEventListeners::Append(TestEventListener* listener) { repeater_->Append(listener); } // Removes the given event listener from the list and returns it. It then // becomes the caller's responsibility to delete the listener. Returns // NULL if the listener is not found in the list. 
TestEventListener* TestEventListeners::Release(TestEventListener* listener) { if (listener == default_result_printer_) default_result_printer_ = NULL; else if (listener == default_xml_generator_) default_xml_generator_ = NULL; return repeater_->Release(listener); } // Returns repeater that broadcasts the TestEventListener events to all // subscribers. TestEventListener* TestEventListeners::repeater() { return repeater_; } // Sets the default_result_printer attribute to the provided listener. // The listener is also added to the listener list and previous // default_result_printer is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void TestEventListeners::SetDefaultResultPrinter(TestEventListener* listener) { if (default_result_printer_ != listener) { // It is an error to pass this method a listener that is already in the // list. delete Release(default_result_printer_); default_result_printer_ = listener; if (listener != NULL) Append(listener); } } // Sets the default_xml_generator attribute to the provided listener. The // listener is also added to the listener list and previous // default_xml_generator is removed from it and deleted. The listener can // also be NULL in which case it will not be added to the list. Does // nothing if the previous and the current listener objects are the same. void TestEventListeners::SetDefaultXmlGenerator(TestEventListener* listener) { if (default_xml_generator_ != listener) { // It is an error to pass this method a listener that is already in the // list. delete Release(default_xml_generator_); default_xml_generator_ = listener; if (listener != NULL) Append(listener); } } // Controls whether events will be forwarded by the repeater to the // listeners in the list. bool TestEventListeners::EventForwardingEnabled() const { return repeater_->forwarding_enabled(); } void TestEventListeners::SuppressEventForwarding() { repeater_->set_forwarding_enabled(false); } // class UnitTest // Gets the singleton UnitTest object. The first time this method is // called, a UnitTest object is constructed and returned. Consecutive // calls will return the same object. // // We don't protect this under mutex_ as a user is not supposed to // call this before main() starts, from which point on the return // value will never change. UnitTest* UnitTest::GetInstance() { // When compiled with MSVC 7.1 in optimized mode, destroying the // UnitTest object upon exiting the program messes up the exit code, // causing successful tests to appear failed. We have to use a // different implementation in this case to bypass the compiler bug. // This implementation makes the compiler happy, at the cost of // leaking the UnitTest object. // CodeGear C++Builder insists on a public destructor for the // default implementation. Use this implementation to keep good OO // design with private destructor. #if (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__) static UnitTest* const instance = new UnitTest; return instance; #else static UnitTest instance; return &instance; #endif // (_MSC_VER == 1310 && !defined(_DEBUG)) || defined(__BORLANDC__) } // Gets the number of successful test cases. int UnitTest::successful_test_case_count() const { return impl()->successful_test_case_count(); } // Gets the number of failed test cases. 
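// --------------------------------------------------------------------------
// Illustrative usage sketch (added for exposition, not part of the original
// gtest.cc): how user code typically combines the TestEventListeners API
// above -- Release() the default printer, Append() a custom listener -- and
// then reads the UnitTest counters defined below.  The class name
// TerseListener is hypothetical; EmptyTestEventListener, TestEventListeners
// and UnitTest are the public API declared in "gtest/gtest.h".
//
//   class TerseListener : public ::testing::EmptyTestEventListener {
//     virtual void OnTestProgramEnd(const ::testing::UnitTest& unit_test) {
//       printf("%d of %d tests failed\n",
//              unit_test.failed_test_count(), unit_test.test_to_run_count());
//     }
//   };
//
//   // In main(), after ::testing::InitGoogleTest(&argc, argv):
//   ::testing::TestEventListeners& listeners =
//       ::testing::UnitTest::GetInstance()->listeners();
//   delete listeners.Release(listeners.default_result_printer());
//   listeners.Append(new TerseListener);  // the listener list takes ownership
// --------------------------------------------------------------------------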
int UnitTest::failed_test_case_count() const { return impl()->failed_test_case_count(); } // Gets the number of all test cases. int UnitTest::total_test_case_count() const { return impl()->total_test_case_count(); } // Gets the number of all test cases that contain at least one test // that should run. int UnitTest::test_case_to_run_count() const { return impl()->test_case_to_run_count(); } // Gets the number of successful tests. int UnitTest::successful_test_count() const { return impl()->successful_test_count(); } // Gets the number of failed tests. int UnitTest::failed_test_count() const { return impl()->failed_test_count(); } // Gets the number of disabled tests that will be reported in the XML report. int UnitTest::reportable_disabled_test_count() const { return impl()->reportable_disabled_test_count(); } // Gets the number of disabled tests. int UnitTest::disabled_test_count() const { return impl()->disabled_test_count(); } // Gets the number of tests to be printed in the XML report. int UnitTest::reportable_test_count() const { return impl()->reportable_test_count(); } // Gets the number of all tests. int UnitTest::total_test_count() const { return impl()->total_test_count(); } // Gets the number of tests that should run. int UnitTest::test_to_run_count() const { return impl()->test_to_run_count(); } // Gets the time of the test program start, in ms from the start of the // UNIX epoch. internal::TimeInMillis UnitTest::start_timestamp() const { return impl()->start_timestamp(); } // Gets the elapsed time, in milliseconds. internal::TimeInMillis UnitTest::elapsed_time() const { return impl()->elapsed_time(); } // Returns true iff the unit test passed (i.e. all test cases passed). bool UnitTest::Passed() const { return impl()->Passed(); } // Returns true iff the unit test failed (i.e. some test case failed // or something outside of all tests failed). bool UnitTest::Failed() const { return impl()->Failed(); } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. const TestCase* UnitTest::GetTestCase(int i) const { return impl()->GetTestCase(i); } // Returns the TestResult containing information on test failures and // properties logged outside of individual test cases. const TestResult& UnitTest::ad_hoc_test_result() const { return *impl()->ad_hoc_test_result(); } // Gets the i-th test case among all the test cases. i can range from 0 to // total_test_case_count() - 1. If i is not in that range, returns NULL. TestCase* UnitTest::GetMutableTestCase(int i) { return impl()->GetMutableTestCase(i); } // Returns the list of event listeners that can be used to track events // inside Google Test. TestEventListeners& UnitTest::listeners() { return *impl()->listeners(); } // Registers and returns a global test environment. When a test // program is run, all global test environments will be set-up in the // order they were registered. After all tests in the program have // finished, all global test environments will be torn-down in the // *reverse* order they were registered. // // The UnitTest object takes ownership of the given environment. // // We don't protect this under mutex_, as we only support calling it // from the main thread. Environment* UnitTest::AddEnvironment(Environment* env) { if (env == NULL) { return NULL; } impl_->environments().push_back(env); return env; } // Adds a TestPartResult to the current TestResult object. All Google Test // assertion macros (e.g. 
ASSERT_TRUE, EXPECT_EQ, etc) eventually call // this to report their results. The user code should use the // assertion macros instead of calling this directly. void UnitTest::AddTestPartResult( TestPartResult::Type result_type, const char* file_name, int line_number, const std::string& message, const std::string& os_stack_trace) GTEST_LOCK_EXCLUDED_(mutex_) { Message msg; msg << message; internal::MutexLock lock(&mutex_); if (impl_->gtest_trace_stack().size() > 0) { msg << "\n" << GTEST_NAME_ << " trace:"; for (int i = static_cast(impl_->gtest_trace_stack().size()); i > 0; --i) { const internal::TraceInfo& trace = impl_->gtest_trace_stack()[i - 1]; msg << "\n" << internal::FormatFileLocation(trace.file, trace.line) << " " << trace.message; } } if (os_stack_trace.c_str() != NULL && !os_stack_trace.empty()) { msg << internal::kStackTraceMarker << os_stack_trace; } const TestPartResult result = TestPartResult(result_type, file_name, line_number, msg.GetString().c_str()); impl_->GetTestPartResultReporterForCurrentThread()-> ReportTestPartResult(result); if (result_type != TestPartResult::kSuccess) { // gtest_break_on_failure takes precedence over // gtest_throw_on_failure. This allows a user to set the latter // in the code (perhaps in order to use Google Test assertions // with another testing framework) and specify the former on the // command line for debugging. if (GTEST_FLAG(break_on_failure)) { #if GTEST_OS_WINDOWS // Using DebugBreak on Windows allows gtest to still break into a debugger // when a failure happens and both the --gtest_break_on_failure and // the --gtest_catch_exceptions flags are specified. DebugBreak(); #else // Dereference NULL through a volatile pointer to prevent the compiler // from removing. We use this rather than abort() or __builtin_trap() for // portability: Symbian doesn't implement abort() well, and some debuggers // don't correctly trap abort(). *static_cast(NULL) = 1; #endif // GTEST_OS_WINDOWS } else if (GTEST_FLAG(throw_on_failure)) { #if GTEST_HAS_EXCEPTIONS throw internal::GoogleTestFailureException(result); #else // We cannot call abort() as it generates a pop-up in debug mode // that cannot be suppressed in VC 7.1 or below. exit(1); #endif } } } // Adds a TestProperty to the current TestResult object when invoked from // inside a test, to current TestCase's ad_hoc_test_result_ when invoked // from SetUpTestCase or TearDownTestCase, or to the global property set // when invoked elsewhere. If the result already contains a property with // the same key, the value will be updated. void UnitTest::RecordProperty(const std::string& key, const std::string& value) { impl_->RecordProperty(TestProperty(key, value)); } // Runs all tests in this UnitTest object and prints the result. // Returns 0 if successful, or 1 otherwise. // // We don't protect this under mutex_, as we only support calling it // from the main thread. int UnitTest::Run() { const bool in_death_test_child_process = internal::GTEST_FLAG(internal_run_death_test).length() > 0; // Google Test implements this protocol for catching that a test // program exits before returning control to Google Test: // // 1. Upon start, Google Test creates a file whose absolute path // is specified by the environment variable // TEST_PREMATURE_EXIT_FILE. // 2. When Google Test has finished its work, it deletes the file. 
// // This allows a test runner to set TEST_PREMATURE_EXIT_FILE before // running a Google-Test-based test program and check the existence // of the file at the end of the test execution to see if it has // exited prematurely. // If we are in the child process of a death test, don't // create/delete the premature exit file, as doing so is unnecessary // and will confuse the parent process. Otherwise, create/delete // the file upon entering/leaving this function. If the program // somehow exits before this function has a chance to return, the // premature-exit file will be left undeleted, causing a test runner // that understands the premature-exit-file protocol to report the // test as having failed. const internal::ScopedPrematureExitFile premature_exit_file( in_death_test_child_process ? NULL : internal::posix::GetEnv("TEST_PREMATURE_EXIT_FILE")); // Captures the value of GTEST_FLAG(catch_exceptions). This value will be // used for the duration of the program. impl()->set_catch_exceptions(GTEST_FLAG(catch_exceptions)); #if GTEST_HAS_SEH // Either the user wants Google Test to catch exceptions thrown by the // tests or this is executing in the context of death test child // process. In either case the user does not want to see pop-up dialogs // about crashes - they are expected. if (impl()->catch_exceptions() || in_death_test_child_process) { # if !GTEST_OS_WINDOWS_MOBILE // SetErrorMode doesn't exist on CE. SetErrorMode(SEM_FAILCRITICALERRORS | SEM_NOALIGNMENTFAULTEXCEPT | SEM_NOGPFAULTERRORBOX | SEM_NOOPENFILEERRORBOX); # endif // !GTEST_OS_WINDOWS_MOBILE # if (defined(_MSC_VER) || GTEST_OS_WINDOWS_MINGW) && !GTEST_OS_WINDOWS_MOBILE // Death test children can be terminated with _abort(). On Windows, // _abort() can show a dialog with a warning message. This forces the // abort message to go to stderr instead. _set_error_mode(_OUT_TO_STDERR); # endif # if _MSC_VER >= 1400 && !GTEST_OS_WINDOWS_MOBILE // In the debug version, Visual Studio pops up a separate dialog // offering a choice to debug the aborted program. We need to suppress // this dialog or it will pop up for every EXPECT/ASSERT_DEATH statement // executed. Google Test will notify the user of any unexpected // failure via stderr. // // VC++ doesn't define _set_abort_behavior() prior to the version 8.0. // Users of prior VC versions shall suffer the agony and pain of // clicking through the countless debug dialogs. // TODO(vladl@google.com): find a way to suppress the abort dialog() in the // debug mode when compiled with VC 7.1 or lower. if (!GTEST_FLAG(break_on_failure)) _set_abort_behavior( 0x0, // Clear the following flags: _WRITE_ABORT_MSG | _CALL_REPORTFAULT); // pop-up window, core dump. # endif } #endif // GTEST_HAS_SEH return internal::HandleExceptionsInMethodIfSupported( impl(), &internal::UnitTestImpl::RunAllTests, "auxiliary test code (environments or event listeners)") ? 0 : 1; } // Returns the working directory when the first TEST() or TEST_F() was // executed. const char* UnitTest::original_working_dir() const { return impl_->original_working_dir_.c_str(); } // Returns the TestCase object for the test that's currently running, // or NULL if no test is running. const TestCase* UnitTest::current_test_case() const GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); return impl_->current_test_case(); } // Returns the TestInfo object for the test that's currently running, // or NULL if no test is running. 
const TestInfo* UnitTest::current_test_info() const GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); return impl_->current_test_info(); } // Returns the random seed used at the start of the current test run. int UnitTest::random_seed() const { return impl_->random_seed(); } #if GTEST_HAS_PARAM_TEST // Returns ParameterizedTestCaseRegistry object used to keep track of // value-parameterized tests and instantiate and register them. internal::ParameterizedTestCaseRegistry& UnitTest::parameterized_test_registry() GTEST_LOCK_EXCLUDED_(mutex_) { return impl_->parameterized_test_registry(); } #endif // GTEST_HAS_PARAM_TEST // Creates an empty UnitTest. UnitTest::UnitTest() { impl_ = new internal::UnitTestImpl(this); } // Destructor of UnitTest. UnitTest::~UnitTest() { delete impl_; } // Pushes a trace defined by SCOPED_TRACE() on to the per-thread // Google Test trace stack. void UnitTest::PushGTestTrace(const internal::TraceInfo& trace) GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); impl_->gtest_trace_stack().push_back(trace); } // Pops a trace from the per-thread Google Test trace stack. void UnitTest::PopGTestTrace() GTEST_LOCK_EXCLUDED_(mutex_) { internal::MutexLock lock(&mutex_); impl_->gtest_trace_stack().pop_back(); } namespace internal { UnitTestImpl::UnitTestImpl(UnitTest* parent) : parent_(parent), #ifdef _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4355) // Temporarily disables warning 4355 // (using this in initializer). default_global_test_part_result_reporter_(this), default_per_thread_test_part_result_reporter_(this), # pragma warning(pop) // Restores the warning state again. #else default_global_test_part_result_reporter_(this), default_per_thread_test_part_result_reporter_(this), #endif // _MSC_VER global_test_part_result_repoter_( &default_global_test_part_result_reporter_), per_thread_test_part_result_reporter_( &default_per_thread_test_part_result_reporter_), #if GTEST_HAS_PARAM_TEST parameterized_test_registry_(), parameterized_tests_registered_(false), #endif // GTEST_HAS_PARAM_TEST last_death_test_case_(-1), current_test_case_(NULL), current_test_info_(NULL), ad_hoc_test_result_(), os_stack_trace_getter_(NULL), post_flag_parse_init_performed_(false), random_seed_(0), // Will be overridden by the flag before first use. random_(0), // Will be reseeded before first use. start_timestamp_(0), elapsed_time_(0), #if GTEST_HAS_DEATH_TEST death_test_factory_(new DefaultDeathTestFactory), #endif // Will be overridden by the flag before first use. catch_exceptions_(false) { listeners()->SetDefaultResultPrinter(new PrettyUnitTestResultPrinter); } UnitTestImpl::~UnitTestImpl() { // Deletes every TestCase. ForEach(test_cases_, internal::Delete); // Deletes every Environment. ForEach(environments_, internal::Delete); delete os_stack_trace_getter_; } // Adds a TestProperty to the current TestResult object when invoked in a // context of a test, to current test case's ad_hoc_test_result when invoke // from SetUpTestCase/TearDownTestCase, or to the global property set // otherwise. If the result already contains a property with the same key, // the value will be updated. void UnitTestImpl::RecordProperty(const TestProperty& test_property) { std::string xml_element; TestResult* test_result; // TestResult appropriate for property recording. 
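// Illustrative note (added for exposition, not part of the original
// gtest.cc): the branches below decide which XML element a recorded property
// is attached to.  For example, assuming a test body calls the public API
//
//   ::testing::Test::RecordProperty("build_id", "12345");
//
// current_test_info_ is non-NULL, so the property becomes an attribute of
// that test's <testcase> element, roughly
//
//   <testcase name="Foo" ... build_id="12345" />
//
// whereas the same call made from SetUpTestCase()/TearDownTestCase() lands on
// the enclosing <testsuite> element, and a call made outside any test lands
// on the top-level <testsuites> element.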
if (current_test_info_ != NULL) { xml_element = "testcase"; test_result = &(current_test_info_->result_); } else if (current_test_case_ != NULL) { xml_element = "testsuite"; test_result = &(current_test_case_->ad_hoc_test_result_); } else { xml_element = "testsuites"; test_result = &ad_hoc_test_result_; } test_result->RecordProperty(xml_element, test_property); } #if GTEST_HAS_DEATH_TEST // Disables event forwarding if the control is currently in a death test // subprocess. Must not be called before InitGoogleTest. void UnitTestImpl::SuppressTestEventsIfInSubprocess() { if (internal_run_death_test_flag_.get() != NULL) listeners()->SuppressEventForwarding(); } #endif // GTEST_HAS_DEATH_TEST // Initializes event listeners performing XML output as specified by // UnitTestOptions. Must not be called before InitGoogleTest. void UnitTestImpl::ConfigureXmlOutput() { const std::string& output_format = UnitTestOptions::GetOutputFormat(); if (output_format == "xml") { listeners()->SetDefaultXmlGenerator(new XmlUnitTestResultPrinter( UnitTestOptions::GetAbsolutePathToOutputFile().c_str())); } else if (output_format != "") { printf("WARNING: unrecognized output format \"%s\" ignored.\n", output_format.c_str()); fflush(stdout); } } #if GTEST_CAN_STREAM_RESULTS_ // Initializes event listeners for streaming test results in string form. // Must not be called before InitGoogleTest. void UnitTestImpl::ConfigureStreamingOutput() { const std::string& target = GTEST_FLAG(stream_result_to); if (!target.empty()) { const size_t pos = target.find(':'); if (pos != std::string::npos) { listeners()->Append(new StreamingListener(target.substr(0, pos), target.substr(pos+1))); } else { printf("WARNING: unrecognized streaming target \"%s\" ignored.\n", target.c_str()); fflush(stdout); } } } #endif // GTEST_CAN_STREAM_RESULTS_ // Performs initialization dependent upon flag values obtained in // ParseGoogleTestFlagsOnly. Is called from InitGoogleTest after the call to // ParseGoogleTestFlagsOnly. In case a user neglects to call InitGoogleTest // this function is also called from RunAllTests. Since this function can be // called more than once, it has to be idempotent. void UnitTestImpl::PostFlagParsingInit() { // Ensures that this function does not execute more than once. if (!post_flag_parse_init_performed_) { post_flag_parse_init_performed_ = true; #if GTEST_HAS_DEATH_TEST InitDeathTestSubprocessControlInfo(); SuppressTestEventsIfInSubprocess(); #endif // GTEST_HAS_DEATH_TEST // Registers parameterized tests. This makes parameterized tests // available to the UnitTest reflection API without running // RUN_ALL_TESTS. RegisterParameterizedTests(); // Configures listeners for XML output. This makes it possible for users // to shut down the default XML output before invoking RUN_ALL_TESTS. ConfigureXmlOutput(); #if GTEST_CAN_STREAM_RESULTS_ // Configures listeners for streaming test results to the specified server. ConfigureStreamingOutput(); #endif // GTEST_CAN_STREAM_RESULTS_ } } // A predicate that checks the name of a TestCase against a known // value. // // This is used for implementation of the UnitTest class only. We put // it in the anonymous namespace to prevent polluting the outer // namespace. // // TestCaseNameIs is copyable. class TestCaseNameIs { public: // Constructor. explicit TestCaseNameIs(const std::string& name) : name_(name) {} // Returns true iff the name of test_case matches name_. 
bool operator()(const TestCase* test_case) const { return test_case != NULL && strcmp(test_case->name(), name_.c_str()) == 0; } private: std::string name_; }; // Finds and returns a TestCase with the given name. If one doesn't // exist, creates one and returns it. It's the CALLER'S // RESPONSIBILITY to ensure that this function is only called WHEN THE // TESTS ARE NOT SHUFFLED. // // Arguments: // // test_case_name: name of the test case // type_param: the name of the test case's type parameter, or NULL if // this is not a typed or a type-parameterized test case. // set_up_tc: pointer to the function that sets up the test case // tear_down_tc: pointer to the function that tears down the test case TestCase* UnitTestImpl::GetTestCase(const char* test_case_name, const char* type_param, Test::SetUpTestCaseFunc set_up_tc, Test::TearDownTestCaseFunc tear_down_tc) { // Can we find a TestCase with the given name? const std::vector::const_iterator test_case = std::find_if(test_cases_.begin(), test_cases_.end(), TestCaseNameIs(test_case_name)); if (test_case != test_cases_.end()) return *test_case; // No. Let's create one. TestCase* const new_test_case = new TestCase(test_case_name, type_param, set_up_tc, tear_down_tc); // Is this a death test case? if (internal::UnitTestOptions::MatchesFilter(test_case_name, kDeathTestCaseFilter)) { // Yes. Inserts the test case after the last death test case // defined so far. This only works when the test cases haven't // been shuffled. Otherwise we may end up running a death test // after a non-death test. ++last_death_test_case_; test_cases_.insert(test_cases_.begin() + last_death_test_case_, new_test_case); } else { // No. Appends to the end of the list. test_cases_.push_back(new_test_case); } test_case_indices_.push_back(static_cast(test_case_indices_.size())); return new_test_case; } // Helpers for setting up / tearing down the given environment. They // are for use in the ForEach() function. static void SetUpEnvironment(Environment* env) { env->SetUp(); } static void TearDownEnvironment(Environment* env) { env->TearDown(); } // Runs all tests in this UnitTest object, prints the result, and // returns true if all tests are successful. If any exception is // thrown during a test, the test is considered to be failed, but the // rest of the tests will still be run. // // When parameterized tests are enabled, it expands and registers // parameterized tests first in RegisterParameterizedTests(). // All other functions called from RunAllTests() may safely assume that // parameterized tests are ready to be counted and run. bool UnitTestImpl::RunAllTests() { // Makes sure InitGoogleTest() was called. if (!GTestIsInitialized()) { printf("%s", "\nThis test program did NOT call ::testing::InitGoogleTest " "before calling RUN_ALL_TESTS(). Please fix it.\n"); return false; } // Do not run any test if the --help flag was specified. if (g_help_flag) return true; // Repeats the call to the post-flag parsing initialization in case the // user didn't call InitGoogleTest. PostFlagParsingInit(); // Even if sharding is not on, test runners may want to use the // GTEST_SHARD_STATUS_FILE to query whether the test supports the sharding // protocol. internal::WriteToShardStatusFileIfNeeded(); // True iff we are in a subprocess for running a thread-safe-style // death test. 
bool in_subprocess_for_death_test = false; #if GTEST_HAS_DEATH_TEST in_subprocess_for_death_test = (internal_run_death_test_flag_.get() != NULL); #endif // GTEST_HAS_DEATH_TEST const bool should_shard = ShouldShard(kTestTotalShards, kTestShardIndex, in_subprocess_for_death_test); // Compares the full test names with the filter to decide which // tests to run. const bool has_tests_to_run = FilterTests(should_shard ? HONOR_SHARDING_PROTOCOL : IGNORE_SHARDING_PROTOCOL) > 0; // Lists the tests and exits if the --gtest_list_tests flag was specified. if (GTEST_FLAG(list_tests)) { // This must be called *after* FilterTests() has been called. ListTestsMatchingFilter(); return true; } random_seed_ = GTEST_FLAG(shuffle) ? GetRandomSeedFromFlag(GTEST_FLAG(random_seed)) : 0; // True iff at least one test has failed. bool failed = false; TestEventListener* repeater = listeners()->repeater(); start_timestamp_ = GetTimeInMillis(); repeater->OnTestProgramStart(*parent_); // How many times to repeat the tests? We don't want to repeat them // when we are inside the subprocess of a death test. const int repeat = in_subprocess_for_death_test ? 1 : GTEST_FLAG(repeat); // Repeats forever if the repeat count is negative. const bool forever = repeat < 0; for (int i = 0; forever || i != repeat; i++) { // We want to preserve failures generated by ad-hoc test // assertions executed before RUN_ALL_TESTS(). ClearNonAdHocTestResult(); const TimeInMillis start = GetTimeInMillis(); // Shuffles test cases and tests if requested. if (has_tests_to_run && GTEST_FLAG(shuffle)) { random()->Reseed(random_seed_); // This should be done before calling OnTestIterationStart(), // such that a test event listener can see the actual test order // in the event. ShuffleTests(); } // Tells the unit test event listeners that the tests are about to start. repeater->OnTestIterationStart(*parent_, i); // Runs each test case if there is at least one test to run. if (has_tests_to_run) { // Sets up all environments beforehand. repeater->OnEnvironmentsSetUpStart(*parent_); ForEach(environments_, SetUpEnvironment); repeater->OnEnvironmentsSetUpEnd(*parent_); // Runs the tests only if there was no fatal failure during global // set-up. if (!Test::HasFatalFailure()) { for (int test_index = 0; test_index < total_test_case_count(); test_index++) { GetMutableTestCase(test_index)->Run(); } } // Tears down all environments in reverse order afterwards. repeater->OnEnvironmentsTearDownStart(*parent_); std::for_each(environments_.rbegin(), environments_.rend(), TearDownEnvironment); repeater->OnEnvironmentsTearDownEnd(*parent_); } elapsed_time_ = GetTimeInMillis() - start; // Tells the unit test event listener that the tests have just finished. repeater->OnTestIterationEnd(*parent_, i); // Gets the result and clears it. if (!Passed()) { failed = true; } // Restores the original test order after the iteration. This // allows the user to quickly repro a failure that happens in the // N-th iteration without repeating the first (N - 1) iterations. // This is not enclosed in "if (GTEST_FLAG(shuffle)) { ... }", in // case the user somehow changes the value of the flag somewhere // (it's always safe to unshuffle the tests). UnshuffleTests(); if (GTEST_FLAG(shuffle)) { // Picks a new random seed for each iteration. random_seed_ = GetNextRandomSeed(random_seed_); } } repeater->OnTestProgramEnd(*parent_); return !failed; } // Reads the GTEST_SHARD_STATUS_FILE environment variable, and creates the file // if the variable is present. 
If a file already exists at this location, this // function will write over it. If the variable is present, but the file cannot // be created, prints an error and exits. void WriteToShardStatusFileIfNeeded() { const char* const test_shard_file = posix::GetEnv(kTestShardStatusFile); if (test_shard_file != NULL) { FILE* const file = posix::FOpen(test_shard_file, "w"); if (file == NULL) { ColoredPrintf(COLOR_RED, "Could not write to the test shard status file \"%s\" " "specified by the %s environment variable.\n", test_shard_file, kTestShardStatusFile); fflush(stdout); exit(EXIT_FAILURE); } fclose(file); } } // Checks whether sharding is enabled by examining the relevant // environment variable values. If the variables are present, // but inconsistent (i.e., shard_index >= total_shards), prints // an error and exits. If in_subprocess_for_death_test, sharding is // disabled because it must only be applied to the original test // process. Otherwise, we could filter out death tests we intended to execute. bool ShouldShard(const char* total_shards_env, const char* shard_index_env, bool in_subprocess_for_death_test) { if (in_subprocess_for_death_test) { return false; } const Int32 total_shards = Int32FromEnvOrDie(total_shards_env, -1); const Int32 shard_index = Int32FromEnvOrDie(shard_index_env, -1); if (total_shards == -1 && shard_index == -1) { return false; } else if (total_shards == -1 && shard_index != -1) { const Message msg = Message() << "Invalid environment variables: you have " << kTestShardIndex << " = " << shard_index << ", but have left " << kTestTotalShards << " unset.\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } else if (total_shards != -1 && shard_index == -1) { const Message msg = Message() << "Invalid environment variables: you have " << kTestTotalShards << " = " << total_shards << ", but have left " << kTestShardIndex << " unset.\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } else if (shard_index < 0 || shard_index >= total_shards) { const Message msg = Message() << "Invalid environment variables: we require 0 <= " << kTestShardIndex << " < " << kTestTotalShards << ", but you have " << kTestShardIndex << "=" << shard_index << ", " << kTestTotalShards << "=" << total_shards << ".\n"; ColoredPrintf(COLOR_RED, msg.GetString().c_str()); fflush(stdout); exit(EXIT_FAILURE); } return total_shards > 1; } // Parses the environment variable var as an Int32. If it is unset, // returns default_val. If it is not an Int32, prints an error // and aborts. Int32 Int32FromEnvOrDie(const char* var, Int32 default_val) { const char* str_val = posix::GetEnv(var); if (str_val == NULL) { return default_val; } Int32 result; if (!ParseInt32(Message() << "The value of environment variable " << var, str_val, &result)) { exit(EXIT_FAILURE); } return result; } // Given the total number of shards, the shard index, and the test id, // returns true iff the test should be run on this shard. The test id is // some arbitrary but unique non-negative integer assigned to each test // method. Assumes that 0 <= shard_index < total_shards. bool ShouldRunTestOnShard(int total_shards, int shard_index, int test_id) { return (test_id % total_shards) == shard_index; } // Compares the name of each test with the user-specified filter to // decide whether the test should be run, then records the result in // each TestCase and TestInfo object. 
// If shard_tests == true, further filters tests based on sharding // variables in the environment - see // http://code.google.com/p/googletest/wiki/GoogleTestAdvancedGuide. // Returns the number of tests that should run. int UnitTestImpl::FilterTests(ReactionToSharding shard_tests) { const Int32 total_shards = shard_tests == HONOR_SHARDING_PROTOCOL ? Int32FromEnvOrDie(kTestTotalShards, -1) : -1; const Int32 shard_index = shard_tests == HONOR_SHARDING_PROTOCOL ? Int32FromEnvOrDie(kTestShardIndex, -1) : -1; // num_runnable_tests are the number of tests that will // run across all shards (i.e., match filter and are not disabled). // num_selected_tests are the number of tests to be run on // this shard. int num_runnable_tests = 0; int num_selected_tests = 0; for (size_t i = 0; i < test_cases_.size(); i++) { TestCase* const test_case = test_cases_[i]; const std::string &test_case_name = test_case->name(); test_case->set_should_run(false); for (size_t j = 0; j < test_case->test_info_list().size(); j++) { TestInfo* const test_info = test_case->test_info_list()[j]; const std::string test_name(test_info->name()); // A test is disabled if test case name or test name matches // kDisableTestFilter. const bool is_disabled = internal::UnitTestOptions::MatchesFilter(test_case_name, kDisableTestFilter) || internal::UnitTestOptions::MatchesFilter(test_name, kDisableTestFilter); test_info->is_disabled_ = is_disabled; const bool matches_filter = internal::UnitTestOptions::FilterMatchesTest(test_case_name, test_name); test_info->matches_filter_ = matches_filter; const bool is_runnable = (GTEST_FLAG(also_run_disabled_tests) || !is_disabled) && matches_filter; const bool is_selected = is_runnable && (shard_tests == IGNORE_SHARDING_PROTOCOL || ShouldRunTestOnShard(total_shards, shard_index, num_runnable_tests)); num_runnable_tests += is_runnable; num_selected_tests += is_selected; test_info->should_run_ = is_selected; test_case->set_should_run(test_case->should_run() || is_selected); } } return num_selected_tests; } // Prints the given C-string on a single line by replacing all '\n' // characters with string "\\n". If the output takes more than // max_length characters, only prints the first max_length characters // and "...". static void PrintOnOneLine(const char* str, int max_length) { if (str != NULL) { for (int i = 0; *str != '\0'; ++str) { if (i >= max_length) { printf("..."); break; } if (*str == '\n') { printf("\\n"); i += 2; } else { printf("%c", *str); ++i; } } } } // Prints the names of the tests matching the user-specified filter flag. void UnitTestImpl::ListTestsMatchingFilter() { // Print at most this many characters for each type/value parameter. const int kMaxParamLength = 250; for (size_t i = 0; i < test_cases_.size(); i++) { const TestCase* const test_case = test_cases_[i]; bool printed_test_case_name = false; for (size_t j = 0; j < test_case->test_info_list().size(); j++) { const TestInfo* const test_info = test_case->test_info_list()[j]; if (test_info->matches_filter_) { if (!printed_test_case_name) { printed_test_case_name = true; printf("%s.", test_case->name()); if (test_case->type_param() != NULL) { printf(" # %s = ", kTypeParamLabel); // We print the type parameter on a single line to make // the output easy to parse by a program. 
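// Illustrative example (added for exposition, not part of the original
// gtest.cc): the surrounding ListTestsMatchingFilter() loop prints one test
// case per line followed by its indented tests, with type and value
// parameters appended after a '#'.  For a hypothetical binary with a typed
// and a value-parameterized suite, the --gtest_list_tests output looks
// roughly like:
//
//   FooTest.
//     HandlesZero
//     HandlesNegative
//   TypedTest/0.  # TypeParam = int
//     Works
//   ValueParamTest.
//     Succeeds/0  # GetParam() = 1
//     Succeeds/1  # GetParam() = 2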
PrintOnOneLine(test_case->type_param(), kMaxParamLength); } printf("\n"); } printf(" %s", test_info->name()); if (test_info->value_param() != NULL) { printf(" # %s = ", kValueParamLabel); // We print the value parameter on a single line to make the // output easy to parse by a program. PrintOnOneLine(test_info->value_param(), kMaxParamLength); } printf("\n"); } } } fflush(stdout); } // Sets the OS stack trace getter. // // Does nothing if the input and the current OS stack trace getter are // the same; otherwise, deletes the old getter and makes the input the // current getter. void UnitTestImpl::set_os_stack_trace_getter( OsStackTraceGetterInterface* getter) { if (os_stack_trace_getter_ != getter) { delete os_stack_trace_getter_; os_stack_trace_getter_ = getter; } } // Returns the current OS stack trace getter if it is not NULL; // otherwise, creates an OsStackTraceGetter, makes it the current // getter, and returns it. OsStackTraceGetterInterface* UnitTestImpl::os_stack_trace_getter() { if (os_stack_trace_getter_ == NULL) { os_stack_trace_getter_ = new OsStackTraceGetter; } return os_stack_trace_getter_; } // Returns the TestResult for the test that's currently running, or // the TestResult for the ad hoc test if no test is running. TestResult* UnitTestImpl::current_test_result() { return current_test_info_ ? &(current_test_info_->result_) : &ad_hoc_test_result_; } // Shuffles all test cases, and the tests within each test case, // making sure that death tests are still run first. void UnitTestImpl::ShuffleTests() { // Shuffles the death test cases. ShuffleRange(random(), 0, last_death_test_case_ + 1, &test_case_indices_); // Shuffles the non-death test cases. ShuffleRange(random(), last_death_test_case_ + 1, static_cast(test_cases_.size()), &test_case_indices_); // Shuffles the tests inside each test case. for (size_t i = 0; i < test_cases_.size(); i++) { test_cases_[i]->ShuffleTests(random()); } } // Restores the test cases and tests to their order before the first shuffle. void UnitTestImpl::UnshuffleTests() { for (size_t i = 0; i < test_cases_.size(); i++) { // Unshuffles the tests in each test case. test_cases_[i]->UnshuffleTests(); // Resets the index of each test case. test_case_indices_[i] = static_cast(i); } } // Returns the current OS stack trace as an std::string. // // The maximum number of stack frames to be included is specified by // the gtest_stack_trace_depth flag. The skip_count parameter // specifies the number of top frames to be skipped, which doesn't // count against the number of frames to be included. // // For example, if Foo() calls Bar(), which in turn calls // GetCurrentOsStackTraceExceptTop(..., 1), Foo() will be included in // the trace but Bar() and GetCurrentOsStackTraceExceptTop() won't. std::string GetCurrentOsStackTraceExceptTop(UnitTest* /*unit_test*/, int skip_count) { // We pass skip_count + 1 to skip this wrapper function in addition // to what the user really wants to skip. return GetUnitTestImpl()->CurrentOsStackTraceExceptTop(skip_count + 1); } // Used by the GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_ macro to // suppress unreachable code warnings. namespace { class ClassUniqueToAlwaysTrue {}; } bool IsTrue(bool condition) { return condition; } bool AlwaysTrue() { #if GTEST_HAS_EXCEPTIONS // This condition is always false so AlwaysTrue() never actually throws, // but it makes the compiler think that it may throw. 
if (IsTrue(false)) throw ClassUniqueToAlwaysTrue(); #endif // GTEST_HAS_EXCEPTIONS return true; } // If *pstr starts with the given prefix, modifies *pstr to be right // past the prefix and returns true; otherwise leaves *pstr unchanged // and returns false. None of pstr, *pstr, and prefix can be NULL. bool SkipPrefix(const char* prefix, const char** pstr) { const size_t prefix_len = strlen(prefix); if (strncmp(*pstr, prefix, prefix_len) == 0) { *pstr += prefix_len; return true; } return false; } // Parses a string as a command line flag. The string should have // the format "--flag=value". When def_optional is true, the "=value" // part can be omitted. // // Returns the value of the flag, or NULL if the parsing failed. const char* ParseFlagValue(const char* str, const char* flag, bool def_optional) { // str and flag must not be NULL. if (str == NULL || flag == NULL) return NULL; // The flag must start with "--" followed by GTEST_FLAG_PREFIX_. const std::string flag_str = std::string("--") + GTEST_FLAG_PREFIX_ + flag; const size_t flag_len = flag_str.length(); if (strncmp(str, flag_str.c_str(), flag_len) != 0) return NULL; // Skips the flag name. const char* flag_end = str + flag_len; // When def_optional is true, it's OK to not have a "=value" part. if (def_optional && (flag_end[0] == '\0')) { return flag_end; } // If def_optional is true and there are more characters after the // flag name, or if def_optional is false, there must be a '=' after // the flag name. if (flag_end[0] != '=') return NULL; // Returns the string after "=". return flag_end + 1; } // Parses a string for a bool flag, in the form of either // "--flag=value" or "--flag". // // In the former case, the value is taken as true as long as it does // not start with '0', 'f', or 'F'. // // In the latter case, the value is taken as true. // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseBoolFlag(const char* str, const char* flag, bool* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, true); // Aborts if the parsing failed. if (value_str == NULL) return false; // Converts the string value to a bool. *value = !(*value_str == '0' || *value_str == 'f' || *value_str == 'F'); return true; } // Parses a string for an Int32 flag, in the form of // "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseInt32Flag(const char* str, const char* flag, Int32* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, false); // Aborts if the parsing failed. if (value_str == NULL) return false; // Sets *value to the value of the flag. return ParseInt32(Message() << "The value of flag --" << flag, value_str, value); } // Parses a string for a string flag, in the form of // "--flag=value". // // On success, stores the value of the flag in *value, and returns // true. On failure, returns false without changing *value. bool ParseStringFlag(const char* str, const char* flag, std::string* value) { // Gets the value of the flag as a string. const char* const value_str = ParseFlagValue(str, flag, false); // Aborts if the parsing failed. if (value_str == NULL) return false; // Sets *value to the value of the flag. 
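// Illustrative examples (added for exposition, not part of the original
// gtest.cc) of how the flag-parsing helpers above behave, assuming the usual
// flag prefix GTEST_FLAG_PREFIX_ == "gtest_":
//
//   ParseFlagValue("--gtest_filter=Foo.*", "filter", false)   -> "Foo.*"
//   ParseFlagValue("--gtest_filter",       "filter", false)   -> NULL
//   ParseFlagValue("--gtest_shuffle",      "shuffle", true)   -> ""  (empty)
//
//   bool shuffle = false;
//   ParseBoolFlag("--gtest_shuffle", "shuffle", &shuffle);    // shuffle == true
//
//   Int32 repeat = 1;
//   ParseInt32Flag("--gtest_repeat=1000", "repeat", &repeat); // repeat == 1000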
*value = value_str; return true; } // Determines whether a string has a prefix that Google Test uses for its // flags, i.e., starts with GTEST_FLAG_PREFIX_ or GTEST_FLAG_PREFIX_DASH_. // If Google Test detects that a command line flag has its prefix but is not // recognized, it will print its help message. Flags starting with // GTEST_INTERNAL_PREFIX_ followed by "internal_" are considered Google Test // internal flags and do not trigger the help message. static bool HasGoogleTestFlagPrefix(const char* str) { return (SkipPrefix("--", &str) || SkipPrefix("-", &str) || SkipPrefix("/", &str)) && !SkipPrefix(GTEST_FLAG_PREFIX_ "internal_", &str) && (SkipPrefix(GTEST_FLAG_PREFIX_, &str) || SkipPrefix(GTEST_FLAG_PREFIX_DASH_, &str)); } // Prints a string containing code-encoded text. The following escape // sequences can be used in the string to control the text color: // // @@ prints a single '@' character. // @R changes the color to red. // @G changes the color to green. // @Y changes the color to yellow. // @D changes to the default terminal text color. // // TODO(wan@google.com): Write tests for this once we add stdout // capturing to Google Test. static void PrintColorEncoded(const char* str) { GTestColor color = COLOR_DEFAULT; // The current color. // Conceptually, we split the string into segments divided by escape // sequences. Then we print one segment at a time. At the end of // each iteration, the str pointer advances to the beginning of the // next segment. for (;;) { const char* p = strchr(str, '@'); if (p == NULL) { ColoredPrintf(color, "%s", str); return; } ColoredPrintf(color, "%s", std::string(str, p).c_str()); const char ch = p[1]; str = p + 2; if (ch == '@') { ColoredPrintf(color, "@"); } else if (ch == 'D') { color = COLOR_DEFAULT; } else if (ch == 'R') { color = COLOR_RED; } else if (ch == 'G') { color = COLOR_GREEN; } else if (ch == 'Y') { color = COLOR_YELLOW; } else { --str; } } } static const char kColorEncodedHelpMessage[] = "This program contains tests written using " GTEST_NAME_ ". You can use the\n" "following command line flags to control its behavior:\n" "\n" "Test Selection:\n" " @G--" GTEST_FLAG_PREFIX_ "list_tests@D\n" " List the names of all tests instead of running them. The name of\n" " TEST(Foo, Bar) is \"Foo.Bar\".\n" " @G--" GTEST_FLAG_PREFIX_ "filter=@YPOSTIVE_PATTERNS" "[@G-@YNEGATIVE_PATTERNS]@D\n" " Run only the tests whose name matches one of the positive patterns but\n" " none of the negative patterns. '?' matches any single character; '*'\n" " matches any substring; ':' separates two patterns.\n" " @G--" GTEST_FLAG_PREFIX_ "also_run_disabled_tests@D\n" " Run all disabled tests too.\n" "\n" "Test Execution:\n" " @G--" GTEST_FLAG_PREFIX_ "repeat=@Y[COUNT]@D\n" " Run the tests repeatedly; use a negative count to repeat forever.\n" " @G--" GTEST_FLAG_PREFIX_ "shuffle@D\n" " Randomize tests' orders on every iteration.\n" " @G--" GTEST_FLAG_PREFIX_ "random_seed=@Y[NUMBER]@D\n" " Random number seed to use for shuffling test orders (between 1 and\n" " 99999, or 0 to use a seed based on the current time).\n" "\n" "Test Output:\n" " @G--" GTEST_FLAG_PREFIX_ "color=@Y(@Gyes@Y|@Gno@Y|@Gauto@Y)@D\n" " Enable/disable colored output. The default is @Gauto@D.\n" " -@G-" GTEST_FLAG_PREFIX_ "print_time=0@D\n" " Don't print the elapsed time of each test.\n" " @G--" GTEST_FLAG_PREFIX_ "output=xml@Y[@G:@YDIRECTORY_PATH@G" GTEST_PATH_SEP_ "@Y|@G:@YFILE_PATH]@D\n" " Generate an XML report in the given directory or with the given file\n" " name. 
@YFILE_PATH@D defaults to @Gtest_details.xml@D.\n" #if GTEST_CAN_STREAM_RESULTS_ " @G--" GTEST_FLAG_PREFIX_ "stream_result_to=@YHOST@G:@YPORT@D\n" " Stream test results to the given server.\n" #endif // GTEST_CAN_STREAM_RESULTS_ "\n" "Assertion Behavior:\n" #if GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS " @G--" GTEST_FLAG_PREFIX_ "death_test_style=@Y(@Gfast@Y|@Gthreadsafe@Y)@D\n" " Set the default death test style.\n" #endif // GTEST_HAS_DEATH_TEST && !GTEST_OS_WINDOWS " @G--" GTEST_FLAG_PREFIX_ "break_on_failure@D\n" " Turn assertion failures into debugger break-points.\n" " @G--" GTEST_FLAG_PREFIX_ "throw_on_failure@D\n" " Turn assertion failures into C++ exceptions.\n" " @G--" GTEST_FLAG_PREFIX_ "catch_exceptions=0@D\n" " Do not report exceptions as test failures. Instead, allow them\n" " to crash the program or throw a pop-up (on Windows).\n" "\n" "Except for @G--" GTEST_FLAG_PREFIX_ "list_tests@D, you can alternatively set " "the corresponding\n" "environment variable of a flag (all letters in upper-case). For example, to\n" "disable colored text output, you can either specify @G--" GTEST_FLAG_PREFIX_ "color=no@D or set\n" "the @G" GTEST_FLAG_PREFIX_UPPER_ "COLOR@D environment variable to @Gno@D.\n" "\n" "For more information, please read the " GTEST_NAME_ " documentation at\n" "@G" GTEST_PROJECT_URL_ "@D. If you find a bug in " GTEST_NAME_ "\n" "(not one in your own code or tests), please report it to\n" "@G<" GTEST_DEV_EMAIL_ ">@D.\n"; // Parses the command line for Google Test flags, without initializing // other parts of Google Test. The type parameter CharType can be // instantiated to either char or wchar_t. template <typename CharType> void ParseGoogleTestFlagsOnlyImpl(int* argc, CharType** argv) { for (int i = 1; i < *argc; i++) { const std::string arg_string = StreamableToString(argv[i]); const char* const arg = arg_string.c_str(); using internal::ParseBoolFlag; using internal::ParseInt32Flag; using internal::ParseStringFlag; // Do we see a Google Test flag? if (ParseBoolFlag(arg, kAlsoRunDisabledTestsFlag, &GTEST_FLAG(also_run_disabled_tests)) || ParseBoolFlag(arg, kBreakOnFailureFlag, &GTEST_FLAG(break_on_failure)) || ParseBoolFlag(arg, kCatchExceptionsFlag, &GTEST_FLAG(catch_exceptions)) || ParseStringFlag(arg, kColorFlag, &GTEST_FLAG(color)) || ParseStringFlag(arg, kDeathTestStyleFlag, &GTEST_FLAG(death_test_style)) || ParseBoolFlag(arg, kDeathTestUseFork, &GTEST_FLAG(death_test_use_fork)) || ParseStringFlag(arg, kFilterFlag, &GTEST_FLAG(filter)) || ParseStringFlag(arg, kInternalRunDeathTestFlag, &GTEST_FLAG(internal_run_death_test)) || ParseBoolFlag(arg, kListTestsFlag, &GTEST_FLAG(list_tests)) || ParseStringFlag(arg, kOutputFlag, &GTEST_FLAG(output)) || ParseBoolFlag(arg, kPrintTimeFlag, &GTEST_FLAG(print_time)) || ParseInt32Flag(arg, kRandomSeedFlag, &GTEST_FLAG(random_seed)) || ParseInt32Flag(arg, kRepeatFlag, &GTEST_FLAG(repeat)) || ParseBoolFlag(arg, kShuffleFlag, &GTEST_FLAG(shuffle)) || ParseInt32Flag(arg, kStackTraceDepthFlag, &GTEST_FLAG(stack_trace_depth)) || ParseStringFlag(arg, kStreamResultToFlag, &GTEST_FLAG(stream_result_to)) || ParseBoolFlag(arg, kThrowOnFailureFlag, &GTEST_FLAG(throw_on_failure)) ) { // Yes. Shift the remainder of the argv list left by one. Note // that argv has (*argc + 1) elements, the last one always being // NULL. The following loop moves the trailing NULL element as // well. for (int j = i; j != *argc; j++) { argv[j] = argv[j + 1]; } // Decrements the argument count. (*argc)--; // We also need to decrement the iterator as we just removed // an element. 
i--; } else if (arg_string == "--help" || arg_string == "-h" || arg_string == "-?" || arg_string == "/?" || HasGoogleTestFlagPrefix(arg)) { // Both help flag and unrecognized Google Test flags (excluding // internal ones) trigger help display. g_help_flag = true; } } if (g_help_flag) { // We print the help here instead of in RUN_ALL_TESTS(), as the // latter may not be called at all if the user is using Google // Test with another testing framework. PrintColorEncoded(kColorEncodedHelpMessage); } } // Parses the command line for Google Test flags, without initializing // other parts of Google Test. void ParseGoogleTestFlagsOnly(int* argc, char** argv) { ParseGoogleTestFlagsOnlyImpl(argc, argv); } void ParseGoogleTestFlagsOnly(int* argc, wchar_t** argv) { ParseGoogleTestFlagsOnlyImpl(argc, argv); } // The internal implementation of InitGoogleTest(). // // The type parameter CharType can be instantiated to either char or // wchar_t. template void InitGoogleTestImpl(int* argc, CharType** argv) { g_init_gtest_count++; // We don't want to run the initialization code twice. if (g_init_gtest_count != 1) return; if (*argc <= 0) return; internal::g_executable_path = internal::StreamableToString(argv[0]); #if GTEST_HAS_DEATH_TEST g_argvs.clear(); for (int i = 0; i != *argc; i++) { g_argvs.push_back(StreamableToString(argv[i])); } #endif // GTEST_HAS_DEATH_TEST ParseGoogleTestFlagsOnly(argc, argv); GetUnitTestImpl()->PostFlagParsingInit(); } } // namespace internal // Initializes Google Test. This must be called before calling // RUN_ALL_TESTS(). In particular, it parses a command line for the // flags that Google Test recognizes. Whenever a Google Test flag is // seen, it is removed from argv, and *argc is decremented. // // No value is returned. Instead, the Google Test flag variables are // updated. // // Calling the function for the second time has no user-visible effect. void InitGoogleTest(int* argc, char** argv) { internal::InitGoogleTestImpl(argc, argv); } // This overloaded version can be used in Windows programs compiled in // UNICODE mode. void InitGoogleTest(int* argc, wchar_t** argv) { internal::InitGoogleTestImpl(argc, argv); } } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/src/gtest_main.cc000066400000000000000000000033451346026536000215300ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. #include <stdio.h> #include "gtest/gtest.h" GTEST_API_ int main(int argc, char **argv) { printf("Running main() from gtest_main.cc\n"); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/000077500000000000000000000000001346026536000172525ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-death-test_ex_test.cc000066400000000000000000000071371346026536000245110ustar00rootroot00000000000000// Copyright 2010, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // // Tests that verify interaction of exceptions and death tests. #include "gtest/gtest-death-test.h" #include "gtest/gtest.h" #if GTEST_HAS_DEATH_TEST # if GTEST_HAS_SEH # include <windows.h> // For RaiseException(). # endif # include "gtest/gtest-spi.h" # if GTEST_HAS_EXCEPTIONS # include <exception> // For std::exception. // Tests that death tests report thrown exceptions as failures and that the // exceptions do not escape death test macros. TEST(CxxExceptionDeathTest, ExceptionIsFailure) { try { EXPECT_NONFATAL_FAILURE(EXPECT_DEATH(throw 1, ""), "threw an exception"); } catch (...) { // NOLINT FAIL() << "An exception escaped a death test macro invocation " << "with catch_exceptions " << (testing::GTEST_FLAG(catch_exceptions) ? 
"enabled" : "disabled"); } } class TestException : public std::exception { public: virtual const char* what() const throw() { return "exceptional message"; } }; TEST(CxxExceptionDeathTest, PrintsMessageForStdExceptions) { // Verifies that the exception message is quoted in the failure text. EXPECT_NONFATAL_FAILURE(EXPECT_DEATH(throw TestException(), ""), "exceptional message"); // Verifies that the location is mentioned in the failure text. EXPECT_NONFATAL_FAILURE(EXPECT_DEATH(throw TestException(), ""), "gtest-death-test_ex_test.cc"); } # endif // GTEST_HAS_EXCEPTIONS # if GTEST_HAS_SEH // Tests that enabling interception of SEH exceptions with the // catch_exceptions flag does not interfere with SEH exceptions being // treated as death by death tests. TEST(SehExceptionDeasTest, CatchExceptionsDoesNotInterfere) { EXPECT_DEATH(RaiseException(42, 0x0, 0, NULL), "") << "with catch_exceptions " << (testing::GTEST_FLAG(catch_exceptions) ? "enabled" : "disabled"); } # endif #endif // GTEST_HAS_DEATH_TEST int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); testing::GTEST_FLAG(catch_exceptions) = GTEST_ENABLE_CATCH_EXCEPTIONS_ != 0; return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-death-test_test.cc000066400000000000000000001243441346026536000240160ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Tests for death tests. #include "gtest/gtest-death-test.h" #include "gtest/gtest.h" #include "gtest/internal/gtest-filepath.h" using testing::internal::AlwaysFalse; using testing::internal::AlwaysTrue; #if GTEST_HAS_DEATH_TEST # if GTEST_OS_WINDOWS # include // For chdir(). # else # include # include // For waitpid. # endif // GTEST_OS_WINDOWS # include # include # include # if GTEST_OS_LINUX # include # endif // GTEST_OS_LINUX # include "gtest/gtest-spi.h" // Indicates that this translation unit is part of Google Test's // implementation. 
It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. # define GTEST_IMPLEMENTATION_ 1 # include "src/gtest-internal-inl.h" # undef GTEST_IMPLEMENTATION_ namespace posix = ::testing::internal::posix; using testing::Message; using testing::internal::DeathTest; using testing::internal::DeathTestFactory; using testing::internal::FilePath; using testing::internal::GetLastErrnoDescription; using testing::internal::GetUnitTestImpl; using testing::internal::InDeathTestChild; using testing::internal::ParseNaturalNumber; namespace testing { namespace internal { // A helper class whose objects replace the death test factory for a // single UnitTest object during their lifetimes. class ReplaceDeathTestFactory { public: explicit ReplaceDeathTestFactory(DeathTestFactory* new_factory) : unit_test_impl_(GetUnitTestImpl()) { old_factory_ = unit_test_impl_->death_test_factory_.release(); unit_test_impl_->death_test_factory_.reset(new_factory); } ~ReplaceDeathTestFactory() { unit_test_impl_->death_test_factory_.release(); unit_test_impl_->death_test_factory_.reset(old_factory_); } private: // Prevents copying ReplaceDeathTestFactory objects. ReplaceDeathTestFactory(const ReplaceDeathTestFactory&); void operator=(const ReplaceDeathTestFactory&); UnitTestImpl* unit_test_impl_; DeathTestFactory* old_factory_; }; } // namespace internal } // namespace testing void DieWithMessage(const ::std::string& message) { fprintf(stderr, "%s", message.c_str()); fflush(stderr); // Make sure the text is printed before the process exits. // We call _exit() instead of exit(), as the former is a direct // system call and thus safer in the presence of threads. exit() // will invoke user-defined exit-hooks, which may do dangerous // things that conflict with death tests. // // Some compilers can recognize that _exit() never returns and issue the // 'unreachable code' warning for code following this function, unless // fooled by a fake condition. if (AlwaysTrue()) _exit(1); } void DieInside(const ::std::string& function) { DieWithMessage("death inside " + function + "()."); } // Tests that death tests work. class TestForDeathTest : public testing::Test { protected: TestForDeathTest() : original_dir_(FilePath::GetCurrentDir()) {} virtual ~TestForDeathTest() { posix::ChDir(original_dir_.c_str()); } // A static member function that's expected to die. static void StaticMemberFunction() { DieInside("StaticMemberFunction"); } // A method of the test fixture that may die. void MemberFunction() { if (should_die_) DieInside("MemberFunction"); } // True iff MemberFunction() should die. bool should_die_; const FilePath original_dir_; }; // A class with a member function that may die. class MayDie { public: explicit MayDie(bool should_die) : should_die_(should_die) {} // A member function that may die. void MemberFunction() const { if (should_die_) DieInside("MayDie::MemberFunction"); } private: // True iff MemberFunction() should die. bool should_die_; }; // A global function that's expected to die. void GlobalFunction() { DieInside("GlobalFunction"); } // A non-void function that's expected to die. int NonVoidFunction() { DieInside("NonVoidFunction"); return 1; } // A unary function that may die. void DieIf(bool should_die) { if (should_die) DieInside("DieIf"); } // A binary function that may die. 
bool DieIfLessThan(int x, int y) { if (x < y) { DieInside("DieIfLessThan"); } return true; } // Tests that ASSERT_DEATH can be used outside a TEST, TEST_F, or test fixture. void DeathTestSubroutine() { EXPECT_DEATH(GlobalFunction(), "death.*GlobalFunction"); ASSERT_DEATH(GlobalFunction(), "death.*GlobalFunction"); } // Death in dbg, not opt. int DieInDebugElse12(int* sideeffect) { if (sideeffect) *sideeffect = 12; # ifndef NDEBUG DieInside("DieInDebugElse12"); # endif // NDEBUG return 12; } # if GTEST_OS_WINDOWS // Tests the ExitedWithCode predicate. TEST(ExitStatusPredicateTest, ExitedWithCode) { // On Windows, the process's exit code is the same as its exit status, // so the predicate just compares the its input with its parameter. EXPECT_TRUE(testing::ExitedWithCode(0)(0)); EXPECT_TRUE(testing::ExitedWithCode(1)(1)); EXPECT_TRUE(testing::ExitedWithCode(42)(42)); EXPECT_FALSE(testing::ExitedWithCode(0)(1)); EXPECT_FALSE(testing::ExitedWithCode(1)(0)); } # else // Returns the exit status of a process that calls _exit(2) with a // given exit code. This is a helper function for the // ExitStatusPredicateTest test suite. static int NormalExitStatus(int exit_code) { pid_t child_pid = fork(); if (child_pid == 0) { _exit(exit_code); } int status; waitpid(child_pid, &status, 0); return status; } // Returns the exit status of a process that raises a given signal. // If the signal does not cause the process to die, then it returns // instead the exit status of a process that exits normally with exit // code 1. This is a helper function for the ExitStatusPredicateTest // test suite. static int KilledExitStatus(int signum) { pid_t child_pid = fork(); if (child_pid == 0) { raise(signum); _exit(1); } int status; waitpid(child_pid, &status, 0); return status; } // Tests the ExitedWithCode predicate. TEST(ExitStatusPredicateTest, ExitedWithCode) { const int status0 = NormalExitStatus(0); const int status1 = NormalExitStatus(1); const int status42 = NormalExitStatus(42); const testing::ExitedWithCode pred0(0); const testing::ExitedWithCode pred1(1); const testing::ExitedWithCode pred42(42); EXPECT_PRED1(pred0, status0); EXPECT_PRED1(pred1, status1); EXPECT_PRED1(pred42, status42); EXPECT_FALSE(pred0(status1)); EXPECT_FALSE(pred42(status0)); EXPECT_FALSE(pred1(status42)); } // Tests the KilledBySignal predicate. TEST(ExitStatusPredicateTest, KilledBySignal) { const int status_segv = KilledExitStatus(SIGSEGV); const int status_kill = KilledExitStatus(SIGKILL); const testing::KilledBySignal pred_segv(SIGSEGV); const testing::KilledBySignal pred_kill(SIGKILL); EXPECT_PRED1(pred_segv, status_segv); EXPECT_PRED1(pred_kill, status_kill); EXPECT_FALSE(pred_segv(status_kill)); EXPECT_FALSE(pred_kill(status_segv)); } # endif // GTEST_OS_WINDOWS // Tests that the death test macros expand to code which may or may not // be followed by operator<<, and that in either case the complete text // comprises only a single C++ statement. 
TEST_F(TestForDeathTest, SingleStatement) { if (AlwaysFalse()) // This would fail if executed; this is a compilation test only ASSERT_DEATH(return, ""); if (AlwaysTrue()) EXPECT_DEATH(_exit(1), ""); else // This empty "else" branch is meant to ensure that EXPECT_DEATH // doesn't expand into an "if" statement without an "else" ; if (AlwaysFalse()) ASSERT_DEATH(return, "") << "did not die"; if (AlwaysFalse()) ; else EXPECT_DEATH(_exit(1), "") << 1 << 2 << 3; } void DieWithEmbeddedNul() { fprintf(stderr, "Hello%cmy null world.\n", '\0'); fflush(stderr); _exit(1); } # if GTEST_USES_PCRE // Tests that EXPECT_DEATH and ASSERT_DEATH work when the error // message has a NUL character in it. TEST_F(TestForDeathTest, EmbeddedNulInMessage) { // TODO(wan@google.com): doesn't support matching strings // with embedded NUL characters - find a way to workaround it. EXPECT_DEATH(DieWithEmbeddedNul(), "my null world"); ASSERT_DEATH(DieWithEmbeddedNul(), "my null world"); } # endif // GTEST_USES_PCRE // Tests that death test macros expand to code which interacts well with switch // statements. TEST_F(TestForDeathTest, SwitchStatement) { // Microsoft compiler usually complains about switch statements without // case labels. We suppress that warning for this test. # ifdef _MSC_VER # pragma warning(push) # pragma warning(disable: 4065) # endif // _MSC_VER switch (0) default: ASSERT_DEATH(_exit(1), "") << "exit in default switch handler"; switch (0) case 0: EXPECT_DEATH(_exit(1), "") << "exit in switch case"; # ifdef _MSC_VER # pragma warning(pop) # endif // _MSC_VER } // Tests that a static member function can be used in a "fast" style // death test. TEST_F(TestForDeathTest, StaticMemberFunctionFastStyle) { testing::GTEST_FLAG(death_test_style) = "fast"; ASSERT_DEATH(StaticMemberFunction(), "death.*StaticMember"); } // Tests that a method of the test fixture can be used in a "fast" // style death test. TEST_F(TestForDeathTest, MemberFunctionFastStyle) { testing::GTEST_FLAG(death_test_style) = "fast"; should_die_ = true; EXPECT_DEATH(MemberFunction(), "inside.*MemberFunction"); } void ChangeToRootDir() { posix::ChDir(GTEST_PATH_SEP_); } // Tests that death tests work even if the current directory has been // changed. TEST_F(TestForDeathTest, FastDeathTestInChangedDir) { testing::GTEST_FLAG(death_test_style) = "fast"; ChangeToRootDir(); EXPECT_EXIT(_exit(1), testing::ExitedWithCode(1), ""); ChangeToRootDir(); ASSERT_DEATH(_exit(1), ""); } # if GTEST_OS_LINUX void SigprofAction(int, siginfo_t*, void*) { /* no op */ } // Sets SIGPROF action and ITIMER_PROF timer (interval: 1ms). void SetSigprofActionAndTimer() { struct itimerval timer; timer.it_interval.tv_sec = 0; timer.it_interval.tv_usec = 1; timer.it_value = timer.it_interval; ASSERT_EQ(0, setitimer(ITIMER_PROF, &timer, NULL)); struct sigaction signal_action; memset(&signal_action, 0, sizeof(signal_action)); sigemptyset(&signal_action.sa_mask); signal_action.sa_sigaction = SigprofAction; signal_action.sa_flags = SA_RESTART | SA_SIGINFO; ASSERT_EQ(0, sigaction(SIGPROF, &signal_action, NULL)); } // Disables ITIMER_PROF timer and ignores SIGPROF signal. 
void DisableSigprofActionAndTimer(struct sigaction* old_signal_action) { struct itimerval timer; timer.it_interval.tv_sec = 0; timer.it_interval.tv_usec = 0; timer.it_value = timer.it_interval; ASSERT_EQ(0, setitimer(ITIMER_PROF, &timer, NULL)); struct sigaction signal_action; memset(&signal_action, 0, sizeof(signal_action)); sigemptyset(&signal_action.sa_mask); signal_action.sa_handler = SIG_IGN; ASSERT_EQ(0, sigaction(SIGPROF, &signal_action, old_signal_action)); } // Tests that death tests work when SIGPROF handler and timer are set. TEST_F(TestForDeathTest, FastSigprofActionSet) { testing::GTEST_FLAG(death_test_style) = "fast"; SetSigprofActionAndTimer(); EXPECT_DEATH(_exit(1), ""); struct sigaction old_signal_action; DisableSigprofActionAndTimer(&old_signal_action); EXPECT_TRUE(old_signal_action.sa_sigaction == SigprofAction); } TEST_F(TestForDeathTest, ThreadSafeSigprofActionSet) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; SetSigprofActionAndTimer(); EXPECT_DEATH(_exit(1), ""); struct sigaction old_signal_action; DisableSigprofActionAndTimer(&old_signal_action); EXPECT_TRUE(old_signal_action.sa_sigaction == SigprofAction); } # endif // GTEST_OS_LINUX // Repeats a representative sample of death tests in the "threadsafe" style: TEST_F(TestForDeathTest, StaticMemberFunctionThreadsafeStyle) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; ASSERT_DEATH(StaticMemberFunction(), "death.*StaticMember"); } TEST_F(TestForDeathTest, MemberFunctionThreadsafeStyle) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; should_die_ = true; EXPECT_DEATH(MemberFunction(), "inside.*MemberFunction"); } TEST_F(TestForDeathTest, ThreadsafeDeathTestInLoop) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; for (int i = 0; i < 3; ++i) EXPECT_EXIT(_exit(i), testing::ExitedWithCode(i), "") << ": i = " << i; } TEST_F(TestForDeathTest, ThreadsafeDeathTestInChangedDir) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; ChangeToRootDir(); EXPECT_EXIT(_exit(1), testing::ExitedWithCode(1), ""); ChangeToRootDir(); ASSERT_DEATH(_exit(1), ""); } TEST_F(TestForDeathTest, MixedStyles) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; EXPECT_DEATH(_exit(1), ""); testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_DEATH(_exit(1), ""); } # if GTEST_HAS_CLONE && GTEST_HAS_PTHREAD namespace { bool pthread_flag; void SetPthreadFlag() { pthread_flag = true; } } // namespace TEST_F(TestForDeathTest, DoesNotExecuteAtforkHooks) { if (!testing::GTEST_FLAG(death_test_use_fork)) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; pthread_flag = false; ASSERT_EQ(0, pthread_atfork(&SetPthreadFlag, NULL, NULL)); ASSERT_DEATH(_exit(1), ""); ASSERT_FALSE(pthread_flag); } } # endif // GTEST_HAS_CLONE && GTEST_HAS_PTHREAD // Tests that a method of another class can be used in a death test. TEST_F(TestForDeathTest, MethodOfAnotherClass) { const MayDie x(true); ASSERT_DEATH(x.MemberFunction(), "MayDie\\:\\:MemberFunction"); } // Tests that a global function can be used in a death test. TEST_F(TestForDeathTest, GlobalFunction) { EXPECT_DEATH(GlobalFunction(), "GlobalFunction"); } // Tests that any value convertible to an RE works as a second // argument to EXPECT_DEATH. 
TEST_F(TestForDeathTest, AcceptsAnythingConvertibleToRE) { static const char regex_c_str[] = "GlobalFunction"; EXPECT_DEATH(GlobalFunction(), regex_c_str); const testing::internal::RE regex(regex_c_str); EXPECT_DEATH(GlobalFunction(), regex); # if GTEST_HAS_GLOBAL_STRING const string regex_str(regex_c_str); EXPECT_DEATH(GlobalFunction(), regex_str); # endif // GTEST_HAS_GLOBAL_STRING const ::std::string regex_std_str(regex_c_str); EXPECT_DEATH(GlobalFunction(), regex_std_str); } // Tests that a non-void function can be used in a death test. TEST_F(TestForDeathTest, NonVoidFunction) { ASSERT_DEATH(NonVoidFunction(), "NonVoidFunction"); } // Tests that functions that take parameter(s) can be used in a death test. TEST_F(TestForDeathTest, FunctionWithParameter) { EXPECT_DEATH(DieIf(true), "DieIf\\(\\)"); EXPECT_DEATH(DieIfLessThan(2, 3), "DieIfLessThan"); } // Tests that ASSERT_DEATH can be used outside a TEST, TEST_F, or test fixture. TEST_F(TestForDeathTest, OutsideFixture) { DeathTestSubroutine(); } // Tests that death tests can be done inside a loop. TEST_F(TestForDeathTest, InsideLoop) { for (int i = 0; i < 5; i++) { EXPECT_DEATH(DieIfLessThan(-1, i), "DieIfLessThan") << "where i == " << i; } } // Tests that a compound statement can be used in a death test. TEST_F(TestForDeathTest, CompoundStatement) { EXPECT_DEATH({ // NOLINT const int x = 2; const int y = x + 1; DieIfLessThan(x, y); }, "DieIfLessThan"); } // Tests that code that doesn't die causes a death test to fail. TEST_F(TestForDeathTest, DoesNotDie) { EXPECT_NONFATAL_FAILURE(EXPECT_DEATH(DieIf(false), "DieIf"), "failed to die"); } // Tests that a death test fails when the error message isn't expected. TEST_F(TestForDeathTest, ErrorMessageMismatch) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_DEATH(DieIf(true), "DieIfLessThan") << "End of death test message."; }, "died but not with expected error"); } // On exit, *aborted will be true iff the EXPECT_DEATH() statement // aborted the function. void ExpectDeathTestHelper(bool* aborted) { *aborted = true; EXPECT_DEATH(DieIf(false), "DieIf"); // This assertion should fail. *aborted = false; } // Tests that EXPECT_DEATH doesn't abort the test on failure. TEST_F(TestForDeathTest, EXPECT_DEATH) { bool aborted = true; EXPECT_NONFATAL_FAILURE(ExpectDeathTestHelper(&aborted), "failed to die"); EXPECT_FALSE(aborted); } // Tests that ASSERT_DEATH does abort the test on failure. TEST_F(TestForDeathTest, ASSERT_DEATH) { static bool aborted; EXPECT_FATAL_FAILURE({ // NOLINT aborted = true; ASSERT_DEATH(DieIf(false), "DieIf"); // This assertion should fail. aborted = false; }, "failed to die"); EXPECT_TRUE(aborted); } // Tests that EXPECT_DEATH evaluates the arguments exactly once. TEST_F(TestForDeathTest, SingleEvaluation) { int x = 3; EXPECT_DEATH(DieIf((++x) == 4), "DieIf"); const char* regex = "DieIf"; const char* regex_save = regex; EXPECT_DEATH(DieIfLessThan(3, 4), regex++); EXPECT_EQ(regex_save + 1, regex); } // Tests that run-away death tests are reported as failures. TEST_F(TestForDeathTest, RunawayIsFailure) { EXPECT_NONFATAL_FAILURE(EXPECT_DEATH(static_cast(0), "Foo"), "failed to die."); } // Tests that death tests report executing 'return' in the statement as // failure. TEST_F(TestForDeathTest, ReturnIsFailure) { EXPECT_FATAL_FAILURE(ASSERT_DEATH(return, "Bar"), "illegal return in test statement."); } // Tests that EXPECT_DEBUG_DEATH works as expected, that is, you can stream a // message to it, and in debug mode it: // 1. Asserts on death. // 2. Has no side effect. 
// // And in opt mode, it: // 1. Has side effects but does not assert. TEST_F(TestForDeathTest, TestExpectDebugDeath) { int sideeffect = 0; EXPECT_DEBUG_DEATH(DieInDebugElse12(&sideeffect), "death.*DieInDebugElse12") << "Must accept a streamed message"; # ifdef NDEBUG // Checks that the assignment occurs in opt mode (sideeffect). EXPECT_EQ(12, sideeffect); # else // Checks that the assignment does not occur in dbg mode (no sideeffect). EXPECT_EQ(0, sideeffect); # endif } // Tests that ASSERT_DEBUG_DEATH works as expected, that is, you can stream a // message to it, and in debug mode it: // 1. Asserts on death. // 2. Has no side effect. // // And in opt mode, it: // 1. Has side effects but does not assert. TEST_F(TestForDeathTest, TestAssertDebugDeath) { int sideeffect = 0; ASSERT_DEBUG_DEATH(DieInDebugElse12(&sideeffect), "death.*DieInDebugElse12") << "Must accept a streamed message"; # ifdef NDEBUG // Checks that the assignment occurs in opt mode (sideeffect). EXPECT_EQ(12, sideeffect); # else // Checks that the assignment does not occur in dbg mode (no sideeffect). EXPECT_EQ(0, sideeffect); # endif } # ifndef NDEBUG void ExpectDebugDeathHelper(bool* aborted) { *aborted = true; EXPECT_DEBUG_DEATH(return, "") << "This is expected to fail."; *aborted = false; } # if GTEST_OS_WINDOWS TEST(PopUpDeathTest, DoesNotShowPopUpOnAbort) { printf("This test should be considered failing if it shows " "any pop-up dialogs.\n"); fflush(stdout); EXPECT_DEATH({ testing::GTEST_FLAG(catch_exceptions) = false; abort(); }, ""); } # endif // GTEST_OS_WINDOWS // Tests that EXPECT_DEBUG_DEATH in debug mode does not abort // the function. TEST_F(TestForDeathTest, ExpectDebugDeathDoesNotAbort) { bool aborted = true; EXPECT_NONFATAL_FAILURE(ExpectDebugDeathHelper(&aborted), ""); EXPECT_FALSE(aborted); } void AssertDebugDeathHelper(bool* aborted) { *aborted = true; ASSERT_DEBUG_DEATH(return, "") << "This is expected to fail."; *aborted = false; } // Tests that ASSERT_DEBUG_DEATH in debug mode aborts the function on // failure. TEST_F(TestForDeathTest, AssertDebugDeathAborts) { static bool aborted; aborted = false; EXPECT_FATAL_FAILURE(AssertDebugDeathHelper(&aborted), ""); EXPECT_TRUE(aborted); } # endif // _NDEBUG // Tests the *_EXIT family of macros, using a variety of predicates. static void TestExitMacros() { EXPECT_EXIT(_exit(1), testing::ExitedWithCode(1), ""); ASSERT_EXIT(_exit(42), testing::ExitedWithCode(42), ""); # if GTEST_OS_WINDOWS // Of all signals effects on the process exit code, only those of SIGABRT // are documented on Windows. // See http://msdn.microsoft.com/en-us/library/dwwzkt4c(VS.71).aspx. 
EXPECT_EXIT(raise(SIGABRT), testing::ExitedWithCode(3), "") << "b_ar"; # else EXPECT_EXIT(raise(SIGKILL), testing::KilledBySignal(SIGKILL), "") << "foo"; ASSERT_EXIT(raise(SIGUSR2), testing::KilledBySignal(SIGUSR2), "") << "bar"; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_EXIT(_exit(0), testing::KilledBySignal(SIGSEGV), "") << "This failure is expected, too."; }, "This failure is expected, too."); # endif // GTEST_OS_WINDOWS EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_EXIT(raise(SIGSEGV), testing::ExitedWithCode(0), "") << "This failure is expected."; }, "This failure is expected."); } TEST_F(TestForDeathTest, ExitMacros) { TestExitMacros(); } TEST_F(TestForDeathTest, ExitMacrosUsingFork) { testing::GTEST_FLAG(death_test_use_fork) = true; TestExitMacros(); } TEST_F(TestForDeathTest, InvalidStyle) { testing::GTEST_FLAG(death_test_style) = "rococo"; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_DEATH(_exit(0), "") << "This failure is expected."; }, "This failure is expected."); } TEST_F(TestForDeathTest, DeathTestFailedOutput) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_NONFATAL_FAILURE( EXPECT_DEATH(DieWithMessage("death\n"), "expected message"), "Actual msg:\n" "[ DEATH ] death\n"); } TEST_F(TestForDeathTest, DeathTestUnexpectedReturnOutput) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_NONFATAL_FAILURE( EXPECT_DEATH({ fprintf(stderr, "returning\n"); fflush(stderr); return; }, ""), " Result: illegal return in test statement.\n" " Error msg:\n" "[ DEATH ] returning\n"); } TEST_F(TestForDeathTest, DeathTestBadExitCodeOutput) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_NONFATAL_FAILURE( EXPECT_EXIT(DieWithMessage("exiting with rc 1\n"), testing::ExitedWithCode(3), "expected message"), " Result: died but not with expected exit code:\n" " Exited with exit status 1\n" "Actual msg:\n" "[ DEATH ] exiting with rc 1\n"); } TEST_F(TestForDeathTest, DeathTestMultiLineMatchFail) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_NONFATAL_FAILURE( EXPECT_DEATH(DieWithMessage("line 1\nline 2\nline 3\n"), "line 1\nxyz\nline 3\n"), "Actual msg:\n" "[ DEATH ] line 1\n" "[ DEATH ] line 2\n" "[ DEATH ] line 3\n"); } TEST_F(TestForDeathTest, DeathTestMultiLineMatchPass) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_DEATH(DieWithMessage("line 1\nline 2\nline 3\n"), "line 1\nline 2\nline 3\n"); } // A DeathTestFactory that returns MockDeathTests. class MockDeathTestFactory : public DeathTestFactory { public: MockDeathTestFactory(); virtual bool Create(const char* statement, const ::testing::internal::RE* regex, const char* file, int line, DeathTest** test); // Sets the parameters for subsequent calls to Create. void SetParameters(bool create, DeathTest::TestRole role, int status, bool passed); // Accessors. int AssumeRoleCalls() const { return assume_role_calls_; } int WaitCalls() const { return wait_calls_; } int PassedCalls() const { return passed_args_.size(); } bool PassedArgument(int n) const { return passed_args_[n]; } int AbortCalls() const { return abort_args_.size(); } DeathTest::AbortReason AbortArgument(int n) const { return abort_args_[n]; } bool TestDeleted() const { return test_deleted_; } private: friend class MockDeathTest; // If true, Create will return a MockDeathTest; otherwise it returns // NULL. bool create_; // The value a MockDeathTest will return from its AssumeRole method. DeathTest::TestRole role_; // The value a MockDeathTest will return from its Wait method. int status_; // The value a MockDeathTest will return from its Passed method. 
bool passed_; // Number of times AssumeRole was called. int assume_role_calls_; // Number of times Wait was called. int wait_calls_; // The arguments to the calls to Passed since the last call to // SetParameters. std::vector passed_args_; // The arguments to the calls to Abort since the last call to // SetParameters. std::vector abort_args_; // True if the last MockDeathTest returned by Create has been // deleted. bool test_deleted_; }; // A DeathTest implementation useful in testing. It returns values set // at its creation from its various inherited DeathTest methods, and // reports calls to those methods to its parent MockDeathTestFactory // object. class MockDeathTest : public DeathTest { public: MockDeathTest(MockDeathTestFactory *parent, TestRole role, int status, bool passed) : parent_(parent), role_(role), status_(status), passed_(passed) { } virtual ~MockDeathTest() { parent_->test_deleted_ = true; } virtual TestRole AssumeRole() { ++parent_->assume_role_calls_; return role_; } virtual int Wait() { ++parent_->wait_calls_; return status_; } virtual bool Passed(bool exit_status_ok) { parent_->passed_args_.push_back(exit_status_ok); return passed_; } virtual void Abort(AbortReason reason) { parent_->abort_args_.push_back(reason); } private: MockDeathTestFactory* const parent_; const TestRole role_; const int status_; const bool passed_; }; // MockDeathTestFactory constructor. MockDeathTestFactory::MockDeathTestFactory() : create_(true), role_(DeathTest::OVERSEE_TEST), status_(0), passed_(true), assume_role_calls_(0), wait_calls_(0), passed_args_(), abort_args_() { } // Sets the parameters for subsequent calls to Create. void MockDeathTestFactory::SetParameters(bool create, DeathTest::TestRole role, int status, bool passed) { create_ = create; role_ = role; status_ = status; passed_ = passed; assume_role_calls_ = 0; wait_calls_ = 0; passed_args_.clear(); abort_args_.clear(); } // Sets test to NULL (if create_ is false) or to the address of a new // MockDeathTest object with parameters taken from the last call // to SetParameters (if create_ is true). Always returns true. bool MockDeathTestFactory::Create(const char* /*statement*/, const ::testing::internal::RE* /*regex*/, const char* /*file*/, int /*line*/, DeathTest** test) { test_deleted_ = false; if (create_) { *test = new MockDeathTest(this, role_, status_, passed_); } else { *test = NULL; } return true; } // A test fixture for testing the logic of the GTEST_DEATH_TEST_ macro. // It installs a MockDeathTestFactory that is used for the duration // of the test case. class MacroLogicDeathTest : public testing::Test { protected: static testing::internal::ReplaceDeathTestFactory* replacer_; static MockDeathTestFactory* factory_; static void SetUpTestCase() { factory_ = new MockDeathTestFactory; replacer_ = new testing::internal::ReplaceDeathTestFactory(factory_); } static void TearDownTestCase() { delete replacer_; replacer_ = NULL; delete factory_; factory_ = NULL; } // Runs a death test that breaks the rules by returning. Such a death // test cannot be run directly from a test routine that uses a // MockDeathTest, or the remainder of the routine will not be executed. 
static void RunReturningDeathTest(bool* flag) { ASSERT_DEATH({ // NOLINT *flag = true; return; }, ""); } }; testing::internal::ReplaceDeathTestFactory* MacroLogicDeathTest::replacer_ = NULL; MockDeathTestFactory* MacroLogicDeathTest::factory_ = NULL; // Test that nothing happens when the factory doesn't return a DeathTest: TEST_F(MacroLogicDeathTest, NothingHappens) { bool flag = false; factory_->SetParameters(false, DeathTest::OVERSEE_TEST, 0, true); EXPECT_DEATH(flag = true, ""); EXPECT_FALSE(flag); EXPECT_EQ(0, factory_->AssumeRoleCalls()); EXPECT_EQ(0, factory_->WaitCalls()); EXPECT_EQ(0, factory_->PassedCalls()); EXPECT_EQ(0, factory_->AbortCalls()); EXPECT_FALSE(factory_->TestDeleted()); } // Test that the parent process doesn't run the death test code, // and that the Passed method returns false when the (simulated) // child process exits with status 0: TEST_F(MacroLogicDeathTest, ChildExitsSuccessfully) { bool flag = false; factory_->SetParameters(true, DeathTest::OVERSEE_TEST, 0, true); EXPECT_DEATH(flag = true, ""); EXPECT_FALSE(flag); EXPECT_EQ(1, factory_->AssumeRoleCalls()); EXPECT_EQ(1, factory_->WaitCalls()); ASSERT_EQ(1, factory_->PassedCalls()); EXPECT_FALSE(factory_->PassedArgument(0)); EXPECT_EQ(0, factory_->AbortCalls()); EXPECT_TRUE(factory_->TestDeleted()); } // Tests that the Passed method was given the argument "true" when // the (simulated) child process exits with status 1: TEST_F(MacroLogicDeathTest, ChildExitsUnsuccessfully) { bool flag = false; factory_->SetParameters(true, DeathTest::OVERSEE_TEST, 1, true); EXPECT_DEATH(flag = true, ""); EXPECT_FALSE(flag); EXPECT_EQ(1, factory_->AssumeRoleCalls()); EXPECT_EQ(1, factory_->WaitCalls()); ASSERT_EQ(1, factory_->PassedCalls()); EXPECT_TRUE(factory_->PassedArgument(0)); EXPECT_EQ(0, factory_->AbortCalls()); EXPECT_TRUE(factory_->TestDeleted()); } // Tests that the (simulated) child process executes the death test // code, and is aborted with the correct AbortReason if it // executes a return statement. TEST_F(MacroLogicDeathTest, ChildPerformsReturn) { bool flag = false; factory_->SetParameters(true, DeathTest::EXECUTE_TEST, 0, true); RunReturningDeathTest(&flag); EXPECT_TRUE(flag); EXPECT_EQ(1, factory_->AssumeRoleCalls()); EXPECT_EQ(0, factory_->WaitCalls()); EXPECT_EQ(0, factory_->PassedCalls()); EXPECT_EQ(1, factory_->AbortCalls()); EXPECT_EQ(DeathTest::TEST_ENCOUNTERED_RETURN_STATEMENT, factory_->AbortArgument(0)); EXPECT_TRUE(factory_->TestDeleted()); } // Tests that the (simulated) child process is aborted with the // correct AbortReason if it does not die. TEST_F(MacroLogicDeathTest, ChildDoesNotDie) { bool flag = false; factory_->SetParameters(true, DeathTest::EXECUTE_TEST, 0, true); EXPECT_DEATH(flag = true, ""); EXPECT_TRUE(flag); EXPECT_EQ(1, factory_->AssumeRoleCalls()); EXPECT_EQ(0, factory_->WaitCalls()); EXPECT_EQ(0, factory_->PassedCalls()); // This time there are two calls to Abort: one since the test didn't // die, and another from the ReturnSentinel when it's destroyed. The // sentinel normally isn't destroyed if a test doesn't die, since // _exit(2) is called in that case by ForkingDeathTest, but not by // our MockDeathTest. ASSERT_EQ(2, factory_->AbortCalls()); EXPECT_EQ(DeathTest::TEST_DID_NOT_DIE, factory_->AbortArgument(0)); EXPECT_EQ(DeathTest::TEST_ENCOUNTERED_RETURN_STATEMENT, factory_->AbortArgument(1)); EXPECT_TRUE(factory_->TestDeleted()); } // Tests that a successful death test does not register a successful // test part. 
TEST(SuccessRegistrationDeathTest, NoSuccessPart) { EXPECT_DEATH(_exit(1), ""); EXPECT_EQ(0, GetUnitTestImpl()->current_test_result()->total_part_count()); } TEST(StreamingAssertionsDeathTest, DeathTest) { EXPECT_DEATH(_exit(1), "") << "unexpected failure"; ASSERT_DEATH(_exit(1), "") << "unexpected failure"; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_DEATH(_exit(0), "") << "expected failure"; }, "expected failure"); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_DEATH(_exit(0), "") << "expected failure"; }, "expected failure"); } // Tests that GetLastErrnoDescription returns an empty string when the // last error is 0 and non-empty string when it is non-zero. TEST(GetLastErrnoDescription, GetLastErrnoDescriptionWorks) { errno = ENOENT; EXPECT_STRNE("", GetLastErrnoDescription().c_str()); errno = 0; EXPECT_STREQ("", GetLastErrnoDescription().c_str()); } # if GTEST_OS_WINDOWS TEST(AutoHandleTest, AutoHandleWorks) { HANDLE handle = ::CreateEvent(NULL, FALSE, FALSE, NULL); ASSERT_NE(INVALID_HANDLE_VALUE, handle); // Tests that the AutoHandle is correctly initialized with a handle. testing::internal::AutoHandle auto_handle(handle); EXPECT_EQ(handle, auto_handle.Get()); // Tests that Reset assigns INVALID_HANDLE_VALUE. // Note that this cannot verify whether the original handle is closed. auto_handle.Reset(); EXPECT_EQ(INVALID_HANDLE_VALUE, auto_handle.Get()); // Tests that Reset assigns the new handle. // Note that this cannot verify whether the original handle is closed. handle = ::CreateEvent(NULL, FALSE, FALSE, NULL); ASSERT_NE(INVALID_HANDLE_VALUE, handle); auto_handle.Reset(handle); EXPECT_EQ(handle, auto_handle.Get()); // Tests that AutoHandle contains INVALID_HANDLE_VALUE by default. testing::internal::AutoHandle auto_handle2; EXPECT_EQ(INVALID_HANDLE_VALUE, auto_handle2.Get()); } # endif // GTEST_OS_WINDOWS # if GTEST_OS_WINDOWS typedef unsigned __int64 BiggestParsable; typedef signed __int64 BiggestSignedParsable; # else typedef unsigned long long BiggestParsable; typedef signed long long BiggestSignedParsable; # endif // GTEST_OS_WINDOWS // We cannot use std::numeric_limits::max() as it clashes with the // max() macro defined by . const BiggestParsable kBiggestParsableMax = ULLONG_MAX; const BiggestSignedParsable kBiggestSignedParsableMax = LLONG_MAX; TEST(ParseNaturalNumberTest, RejectsInvalidFormat) { BiggestParsable result = 0; // Rejects non-numbers. EXPECT_FALSE(ParseNaturalNumber("non-number string", &result)); // Rejects numbers with whitespace prefix. EXPECT_FALSE(ParseNaturalNumber(" 123", &result)); // Rejects negative numbers. EXPECT_FALSE(ParseNaturalNumber("-123", &result)); // Rejects numbers starting with a plus sign. EXPECT_FALSE(ParseNaturalNumber("+123", &result)); errno = 0; } TEST(ParseNaturalNumberTest, RejectsOverflownNumbers) { BiggestParsable result = 0; EXPECT_FALSE(ParseNaturalNumber("99999999999999999999999", &result)); signed char char_result = 0; EXPECT_FALSE(ParseNaturalNumber("200", &char_result)); errno = 0; } TEST(ParseNaturalNumberTest, AcceptsValidNumbers) { BiggestParsable result = 0; result = 0; ASSERT_TRUE(ParseNaturalNumber("123", &result)); EXPECT_EQ(123U, result); // Check 0 as an edge case. 
result = 1; ASSERT_TRUE(ParseNaturalNumber("0", &result)); EXPECT_EQ(0U, result); result = 1; ASSERT_TRUE(ParseNaturalNumber("00000", &result)); EXPECT_EQ(0U, result); } TEST(ParseNaturalNumberTest, AcceptsTypeLimits) { Message msg; msg << kBiggestParsableMax; BiggestParsable result = 0; EXPECT_TRUE(ParseNaturalNumber(msg.GetString(), &result)); EXPECT_EQ(kBiggestParsableMax, result); Message msg2; msg2 << kBiggestSignedParsableMax; BiggestSignedParsable signed_result = 0; EXPECT_TRUE(ParseNaturalNumber(msg2.GetString(), &signed_result)); EXPECT_EQ(kBiggestSignedParsableMax, signed_result); Message msg3; msg3 << INT_MAX; int int_result = 0; EXPECT_TRUE(ParseNaturalNumber(msg3.GetString(), &int_result)); EXPECT_EQ(INT_MAX, int_result); Message msg4; msg4 << UINT_MAX; unsigned int uint_result = 0; EXPECT_TRUE(ParseNaturalNumber(msg4.GetString(), &uint_result)); EXPECT_EQ(UINT_MAX, uint_result); } TEST(ParseNaturalNumberTest, WorksForShorterIntegers) { short short_result = 0; ASSERT_TRUE(ParseNaturalNumber("123", &short_result)); EXPECT_EQ(123, short_result); signed char char_result = 0; ASSERT_TRUE(ParseNaturalNumber("123", &char_result)); EXPECT_EQ(123, char_result); } # if GTEST_OS_WINDOWS TEST(EnvironmentTest, HandleFitsIntoSizeT) { // TODO(vladl@google.com): Remove this test after this condition is verified // in a static assertion in gtest-death-test.cc in the function // GetStatusFileDescriptor. ASSERT_TRUE(sizeof(HANDLE) <= sizeof(size_t)); } # endif // GTEST_OS_WINDOWS // Tests that EXPECT_DEATH_IF_SUPPORTED/ASSERT_DEATH_IF_SUPPORTED trigger // failures when death tests are available on the system. TEST(ConditionalDeathMacrosDeathTest, ExpectsDeathWhenDeathTestsAvailable) { EXPECT_DEATH_IF_SUPPORTED(DieInside("CondDeathTestExpectMacro"), "death inside CondDeathTestExpectMacro"); ASSERT_DEATH_IF_SUPPORTED(DieInside("CondDeathTestAssertMacro"), "death inside CondDeathTestAssertMacro"); // Empty statement will not crash, which must trigger a failure. EXPECT_NONFATAL_FAILURE(EXPECT_DEATH_IF_SUPPORTED(;, ""), ""); EXPECT_FATAL_FAILURE(ASSERT_DEATH_IF_SUPPORTED(;, ""), ""); } #else using testing::internal::CaptureStderr; using testing::internal::GetCapturedStderr; // Tests that EXPECT_DEATH_IF_SUPPORTED/ASSERT_DEATH_IF_SUPPORTED are still // defined but do not trigger failures when death tests are not available on // the system. TEST(ConditionalDeathMacrosTest, WarnsWhenDeathTestsNotAvailable) { // Empty statement will not crash, but that should not trigger a failure // when death tests are not supported. CaptureStderr(); EXPECT_DEATH_IF_SUPPORTED(;, ""); std::string output = GetCapturedStderr(); ASSERT_TRUE(NULL != strstr(output.c_str(), "Death tests are not supported on this platform")); ASSERT_TRUE(NULL != strstr(output.c_str(), ";")); // The streamed message should not be printed as there is no test failure. 
CaptureStderr(); EXPECT_DEATH_IF_SUPPORTED(;, "") << "streamed message"; output = GetCapturedStderr(); ASSERT_TRUE(NULL == strstr(output.c_str(), "streamed message")); CaptureStderr(); ASSERT_DEATH_IF_SUPPORTED(;, ""); // NOLINT output = GetCapturedStderr(); ASSERT_TRUE(NULL != strstr(output.c_str(), "Death tests are not supported on this platform")); ASSERT_TRUE(NULL != strstr(output.c_str(), ";")); CaptureStderr(); ASSERT_DEATH_IF_SUPPORTED(;, "") << "streamed message"; // NOLINT output = GetCapturedStderr(); ASSERT_TRUE(NULL == strstr(output.c_str(), "streamed message")); } void FuncWithAssert(int* n) { ASSERT_DEATH_IF_SUPPORTED(return;, ""); (*n)++; } // Tests that ASSERT_DEATH_IF_SUPPORTED does not return from the current // function (as ASSERT_DEATH does) if death tests are not supported. TEST(ConditionalDeathMacrosTest, AssertDeatDoesNotReturnhIfUnsupported) { int n = 0; FuncWithAssert(&n); EXPECT_EQ(1, n); } TEST(InDeathTestChildDeathTest, ReportsDeathTestCorrectlyInFastStyle) { testing::GTEST_FLAG(death_test_style) = "fast"; EXPECT_FALSE(InDeathTestChild()); EXPECT_DEATH({ fprintf(stderr, InDeathTestChild() ? "Inside" : "Outside"); fflush(stderr); _exit(1); }, "Inside"); } TEST(InDeathTestChildDeathTest, ReportsDeathTestCorrectlyInThreadSafeStyle) { testing::GTEST_FLAG(death_test_style) = "threadsafe"; EXPECT_FALSE(InDeathTestChild()); EXPECT_DEATH({ fprintf(stderr, InDeathTestChild() ? "Inside" : "Outside"); fflush(stderr); _exit(1); }, "Inside"); } #endif // GTEST_HAS_DEATH_TEST // Tests that the death test macros expand to code which may or may not // be followed by operator<<, and that in either case the complete text // comprises only a single C++ statement. // // The syntax should work whether death tests are available or not. TEST(ConditionalDeathMacrosSyntaxDeathTest, SingleStatement) { if (AlwaysFalse()) // This would fail if executed; this is a compilation test only ASSERT_DEATH_IF_SUPPORTED(return, ""); if (AlwaysTrue()) EXPECT_DEATH_IF_SUPPORTED(_exit(1), ""); else // This empty "else" branch is meant to ensure that EXPECT_DEATH // doesn't expand into an "if" statement without an "else" ; // NOLINT if (AlwaysFalse()) ASSERT_DEATH_IF_SUPPORTED(return, "") << "did not die"; if (AlwaysFalse()) ; // NOLINT else EXPECT_DEATH_IF_SUPPORTED(_exit(1), "") << 1 << 2 << 3; } // Tests that conditional death test macros expand to code which interacts // well with switch statements. TEST(ConditionalDeathMacrosSyntaxDeathTest, SwitchStatement) { // Microsoft compiler usually complains about switch statements without // case labels. We suppress that warning for this test. #ifdef _MSC_VER # pragma warning(push) # pragma warning(disable: 4065) #endif // _MSC_VER switch (0) default: ASSERT_DEATH_IF_SUPPORTED(_exit(1), "") << "exit in default switch handler"; switch (0) case 0: EXPECT_DEATH_IF_SUPPORTED(_exit(1), "") << "exit in switch case"; #ifdef _MSC_VER # pragma warning(pop) #endif // _MSC_VER } // Tests that a test case whose name ends with "DeathTest" works fine // on Windows. TEST(NotADeathTest, Test) { SUCCEED(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-filepath_test.cc000066400000000000000000000561451346026536000235530ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: keith.ray@gmail.com (Keith Ray) // // Google Test filepath utilities // // This file tests classes and functions used internally by // Google Test. They are subject to change without notice. // // This file is #included from gtest_unittest.cc, to avoid changing // build or make-files for some existing Google Test clients. Do not // #include this file anywhere else! #include "gtest/internal/gtest-filepath.h" #include "gtest/gtest.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ #if GTEST_OS_WINDOWS_MOBILE # include // NOLINT #elif GTEST_OS_WINDOWS # include // NOLINT #endif // GTEST_OS_WINDOWS_MOBILE namespace testing { namespace internal { namespace { #if GTEST_OS_WINDOWS_MOBILE // TODO(wan@google.com): Move these to the POSIX adapter section in // gtest-port.h. // Windows CE doesn't have the remove C function. int remove(const char* path) { LPCWSTR wpath = String::AnsiToUtf16(path); int ret = DeleteFile(wpath) ? 0 : -1; delete [] wpath; return ret; } // Windows CE doesn't have the _rmdir C function. int _rmdir(const char* path) { FilePath filepath(path); LPCWSTR wpath = String::AnsiToUtf16( filepath.RemoveTrailingPathSeparator().c_str()); int ret = RemoveDirectory(wpath) ? 0 : -1; delete [] wpath; return ret; } #else TEST(GetCurrentDirTest, ReturnsCurrentDir) { const FilePath original_dir = FilePath::GetCurrentDir(); EXPECT_FALSE(original_dir.IsEmpty()); posix::ChDir(GTEST_PATH_SEP_); const FilePath cwd = FilePath::GetCurrentDir(); posix::ChDir(original_dir.c_str()); # if GTEST_OS_WINDOWS // Skips the ":". 
const char* const cwd_without_drive = strchr(cwd.c_str(), ':'); ASSERT_TRUE(cwd_without_drive != NULL); EXPECT_STREQ(GTEST_PATH_SEP_, cwd_without_drive + 1); # else EXPECT_EQ(GTEST_PATH_SEP_, cwd.string()); # endif } #endif // GTEST_OS_WINDOWS_MOBILE TEST(IsEmptyTest, ReturnsTrueForEmptyPath) { EXPECT_TRUE(FilePath("").IsEmpty()); } TEST(IsEmptyTest, ReturnsFalseForNonEmptyPath) { EXPECT_FALSE(FilePath("a").IsEmpty()); EXPECT_FALSE(FilePath(".").IsEmpty()); EXPECT_FALSE(FilePath("a/b").IsEmpty()); EXPECT_FALSE(FilePath("a\\b\\").IsEmpty()); } // RemoveDirectoryName "" -> "" TEST(RemoveDirectoryNameTest, WhenEmptyName) { EXPECT_EQ("", FilePath("").RemoveDirectoryName().string()); } // RemoveDirectoryName "afile" -> "afile" TEST(RemoveDirectoryNameTest, ButNoDirectory) { EXPECT_EQ("afile", FilePath("afile").RemoveDirectoryName().string()); } // RemoveDirectoryName "/afile" -> "afile" TEST(RemoveDirectoryNameTest, RootFileShouldGiveFileName) { EXPECT_EQ("afile", FilePath(GTEST_PATH_SEP_ "afile").RemoveDirectoryName().string()); } // RemoveDirectoryName "adir/" -> "" TEST(RemoveDirectoryNameTest, WhereThereIsNoFileName) { EXPECT_EQ("", FilePath("adir" GTEST_PATH_SEP_).RemoveDirectoryName().string()); } // RemoveDirectoryName "adir/afile" -> "afile" TEST(RemoveDirectoryNameTest, ShouldGiveFileName) { EXPECT_EQ("afile", FilePath("adir" GTEST_PATH_SEP_ "afile").RemoveDirectoryName().string()); } // RemoveDirectoryName "adir/subdir/afile" -> "afile" TEST(RemoveDirectoryNameTest, ShouldAlsoGiveFileName) { EXPECT_EQ("afile", FilePath("adir" GTEST_PATH_SEP_ "subdir" GTEST_PATH_SEP_ "afile") .RemoveDirectoryName().string()); } #if GTEST_HAS_ALT_PATH_SEP_ // Tests that RemoveDirectoryName() works with the alternate separator // on Windows. // RemoveDirectoryName("/afile") -> "afile" TEST(RemoveDirectoryNameTest, RootFileShouldGiveFileNameForAlternateSeparator) { EXPECT_EQ("afile", FilePath("/afile").RemoveDirectoryName().string()); } // RemoveDirectoryName("adir/") -> "" TEST(RemoveDirectoryNameTest, WhereThereIsNoFileNameForAlternateSeparator) { EXPECT_EQ("", FilePath("adir/").RemoveDirectoryName().string()); } // RemoveDirectoryName("adir/afile") -> "afile" TEST(RemoveDirectoryNameTest, ShouldGiveFileNameForAlternateSeparator) { EXPECT_EQ("afile", FilePath("adir/afile").RemoveDirectoryName().string()); } // RemoveDirectoryName("adir/subdir/afile") -> "afile" TEST(RemoveDirectoryNameTest, ShouldAlsoGiveFileNameForAlternateSeparator) { EXPECT_EQ("afile", FilePath("adir/subdir/afile").RemoveDirectoryName().string()); } #endif // RemoveFileName "" -> "./" TEST(RemoveFileNameTest, EmptyName) { #if GTEST_OS_WINDOWS_MOBILE // On Windows CE, we use the root as the current directory. EXPECT_EQ(GTEST_PATH_SEP_, FilePath("").RemoveFileName().string()); #else EXPECT_EQ("." 
GTEST_PATH_SEP_, FilePath("").RemoveFileName().string()); #endif } // RemoveFileName "adir/" -> "adir/" TEST(RemoveFileNameTest, ButNoFile) { EXPECT_EQ("adir" GTEST_PATH_SEP_, FilePath("adir" GTEST_PATH_SEP_).RemoveFileName().string()); } // RemoveFileName "adir/afile" -> "adir/" TEST(RemoveFileNameTest, GivesDirName) { EXPECT_EQ("adir" GTEST_PATH_SEP_, FilePath("adir" GTEST_PATH_SEP_ "afile").RemoveFileName().string()); } // RemoveFileName "adir/subdir/afile" -> "adir/subdir/" TEST(RemoveFileNameTest, GivesDirAndSubDirName) { EXPECT_EQ("adir" GTEST_PATH_SEP_ "subdir" GTEST_PATH_SEP_, FilePath("adir" GTEST_PATH_SEP_ "subdir" GTEST_PATH_SEP_ "afile") .RemoveFileName().string()); } // RemoveFileName "/afile" -> "/" TEST(RemoveFileNameTest, GivesRootDir) { EXPECT_EQ(GTEST_PATH_SEP_, FilePath(GTEST_PATH_SEP_ "afile").RemoveFileName().string()); } #if GTEST_HAS_ALT_PATH_SEP_ // Tests that RemoveFileName() works with the alternate separator on // Windows. // RemoveFileName("adir/") -> "adir/" TEST(RemoveFileNameTest, ButNoFileForAlternateSeparator) { EXPECT_EQ("adir" GTEST_PATH_SEP_, FilePath("adir/").RemoveFileName().string()); } // RemoveFileName("adir/afile") -> "adir/" TEST(RemoveFileNameTest, GivesDirNameForAlternateSeparator) { EXPECT_EQ("adir" GTEST_PATH_SEP_, FilePath("adir/afile").RemoveFileName().string()); } // RemoveFileName("adir/subdir/afile") -> "adir/subdir/" TEST(RemoveFileNameTest, GivesDirAndSubDirNameForAlternateSeparator) { EXPECT_EQ("adir" GTEST_PATH_SEP_ "subdir" GTEST_PATH_SEP_, FilePath("adir/subdir/afile").RemoveFileName().string()); } // RemoveFileName("/afile") -> "\" TEST(RemoveFileNameTest, GivesRootDirForAlternateSeparator) { EXPECT_EQ(GTEST_PATH_SEP_, FilePath("/afile").RemoveFileName().string()); } #endif TEST(MakeFileNameTest, GenerateWhenNumberIsZero) { FilePath actual = FilePath::MakeFileName(FilePath("foo"), FilePath("bar"), 0, "xml"); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar.xml", actual.string()); } TEST(MakeFileNameTest, GenerateFileNameNumberGtZero) { FilePath actual = FilePath::MakeFileName(FilePath("foo"), FilePath("bar"), 12, "xml"); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar_12.xml", actual.string()); } TEST(MakeFileNameTest, GenerateFileNameWithSlashNumberIsZero) { FilePath actual = FilePath::MakeFileName(FilePath("foo" GTEST_PATH_SEP_), FilePath("bar"), 0, "xml"); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar.xml", actual.string()); } TEST(MakeFileNameTest, GenerateFileNameWithSlashNumberGtZero) { FilePath actual = FilePath::MakeFileName(FilePath("foo" GTEST_PATH_SEP_), FilePath("bar"), 12, "xml"); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar_12.xml", actual.string()); } TEST(MakeFileNameTest, GenerateWhenNumberIsZeroAndDirIsEmpty) { FilePath actual = FilePath::MakeFileName(FilePath(""), FilePath("bar"), 0, "xml"); EXPECT_EQ("bar.xml", actual.string()); } TEST(MakeFileNameTest, GenerateWhenNumberIsNotZeroAndDirIsEmpty) { FilePath actual = FilePath::MakeFileName(FilePath(""), FilePath("bar"), 14, "xml"); EXPECT_EQ("bar_14.xml", actual.string()); } TEST(ConcatPathsTest, WorksWhenDirDoesNotEndWithPathSep) { FilePath actual = FilePath::ConcatPaths(FilePath("foo"), FilePath("bar.xml")); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar.xml", actual.string()); } TEST(ConcatPathsTest, WorksWhenPath1EndsWithPathSep) { FilePath actual = FilePath::ConcatPaths(FilePath("foo" GTEST_PATH_SEP_), FilePath("bar.xml")); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar.xml", actual.string()); } TEST(ConcatPathsTest, Path1BeingEmpty) { FilePath actual = FilePath::ConcatPaths(FilePath(""), FilePath("bar.xml")); 
EXPECT_EQ("bar.xml", actual.string()); } TEST(ConcatPathsTest, Path2BeingEmpty) { FilePath actual = FilePath::ConcatPaths(FilePath("foo"), FilePath("")); EXPECT_EQ("foo" GTEST_PATH_SEP_, actual.string()); } TEST(ConcatPathsTest, BothPathBeingEmpty) { FilePath actual = FilePath::ConcatPaths(FilePath(""), FilePath("")); EXPECT_EQ("", actual.string()); } TEST(ConcatPathsTest, Path1ContainsPathSep) { FilePath actual = FilePath::ConcatPaths(FilePath("foo" GTEST_PATH_SEP_ "bar"), FilePath("foobar.xml")); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar" GTEST_PATH_SEP_ "foobar.xml", actual.string()); } TEST(ConcatPathsTest, Path2ContainsPathSep) { FilePath actual = FilePath::ConcatPaths( FilePath("foo" GTEST_PATH_SEP_), FilePath("bar" GTEST_PATH_SEP_ "bar.xml")); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar" GTEST_PATH_SEP_ "bar.xml", actual.string()); } TEST(ConcatPathsTest, Path2EndsWithPathSep) { FilePath actual = FilePath::ConcatPaths(FilePath("foo"), FilePath("bar" GTEST_PATH_SEP_)); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar" GTEST_PATH_SEP_, actual.string()); } // RemoveTrailingPathSeparator "" -> "" TEST(RemoveTrailingPathSeparatorTest, EmptyString) { EXPECT_EQ("", FilePath("").RemoveTrailingPathSeparator().string()); } // RemoveTrailingPathSeparator "foo" -> "foo" TEST(RemoveTrailingPathSeparatorTest, FileNoSlashString) { EXPECT_EQ("foo", FilePath("foo").RemoveTrailingPathSeparator().string()); } // RemoveTrailingPathSeparator "foo/" -> "foo" TEST(RemoveTrailingPathSeparatorTest, ShouldRemoveTrailingSeparator) { EXPECT_EQ("foo", FilePath("foo" GTEST_PATH_SEP_).RemoveTrailingPathSeparator().string()); #if GTEST_HAS_ALT_PATH_SEP_ EXPECT_EQ("foo", FilePath("foo/").RemoveTrailingPathSeparator().string()); #endif } // RemoveTrailingPathSeparator "foo/bar/" -> "foo/bar/" TEST(RemoveTrailingPathSeparatorTest, ShouldRemoveLastSeparator) { EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar", FilePath("foo" GTEST_PATH_SEP_ "bar" GTEST_PATH_SEP_) .RemoveTrailingPathSeparator().string()); } // RemoveTrailingPathSeparator "foo/bar" -> "foo/bar" TEST(RemoveTrailingPathSeparatorTest, ShouldReturnUnmodified) { EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar", FilePath("foo" GTEST_PATH_SEP_ "bar") .RemoveTrailingPathSeparator().string()); } TEST(DirectoryTest, RootDirectoryExists) { #if GTEST_OS_WINDOWS // We are on Windows. char current_drive[_MAX_PATH]; // NOLINT current_drive[0] = static_cast(_getdrive() + 'A' - 1); current_drive[1] = ':'; current_drive[2] = '\\'; current_drive[3] = '\0'; EXPECT_TRUE(FilePath(current_drive).DirectoryExists()); #else EXPECT_TRUE(FilePath("/").DirectoryExists()); #endif // GTEST_OS_WINDOWS } #if GTEST_OS_WINDOWS TEST(DirectoryTest, RootOfWrongDriveDoesNotExists) { const int saved_drive_ = _getdrive(); // Find a drive that doesn't exist. Start with 'Z' to avoid common ones. for (char drive = 'Z'; drive >= 'A'; drive--) if (_chdrive(drive - 'A' + 1) == -1) { char non_drive[_MAX_PATH]; // NOLINT non_drive[0] = drive; non_drive[1] = ':'; non_drive[2] = '\\'; non_drive[3] = '\0'; EXPECT_FALSE(FilePath(non_drive).DirectoryExists()); break; } _chdrive(saved_drive_); } #endif // GTEST_OS_WINDOWS #if !GTEST_OS_WINDOWS_MOBILE // Windows CE _does_ consider an empty directory to exist. TEST(DirectoryTest, EmptyPathDirectoryDoesNotExist) { EXPECT_FALSE(FilePath("").DirectoryExists()); } #endif // !GTEST_OS_WINDOWS_MOBILE TEST(DirectoryTest, CurrentDirectoryExists) { #if GTEST_OS_WINDOWS // We are on Windows. # ifndef _WIN32_CE // Windows CE doesn't have a current directory. 
EXPECT_TRUE(FilePath(".").DirectoryExists()); EXPECT_TRUE(FilePath(".\\").DirectoryExists()); # endif // _WIN32_CE #else EXPECT_TRUE(FilePath(".").DirectoryExists()); EXPECT_TRUE(FilePath("./").DirectoryExists()); #endif // GTEST_OS_WINDOWS } // "foo/bar" == foo//bar" == "foo///bar" TEST(NormalizeTest, MultipleConsecutiveSepaparatorsInMidstring) { EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar", FilePath("foo" GTEST_PATH_SEP_ "bar").string()); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar", FilePath("foo" GTEST_PATH_SEP_ GTEST_PATH_SEP_ "bar").string()); EXPECT_EQ("foo" GTEST_PATH_SEP_ "bar", FilePath("foo" GTEST_PATH_SEP_ GTEST_PATH_SEP_ GTEST_PATH_SEP_ "bar").string()); } // "/bar" == //bar" == "///bar" TEST(NormalizeTest, MultipleConsecutiveSepaparatorsAtStringStart) { EXPECT_EQ(GTEST_PATH_SEP_ "bar", FilePath(GTEST_PATH_SEP_ "bar").string()); EXPECT_EQ(GTEST_PATH_SEP_ "bar", FilePath(GTEST_PATH_SEP_ GTEST_PATH_SEP_ "bar").string()); EXPECT_EQ(GTEST_PATH_SEP_ "bar", FilePath(GTEST_PATH_SEP_ GTEST_PATH_SEP_ GTEST_PATH_SEP_ "bar").string()); } // "foo/" == foo//" == "foo///" TEST(NormalizeTest, MultipleConsecutiveSepaparatorsAtStringEnd) { EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo" GTEST_PATH_SEP_).string()); EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo" GTEST_PATH_SEP_ GTEST_PATH_SEP_).string()); EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo" GTEST_PATH_SEP_ GTEST_PATH_SEP_ GTEST_PATH_SEP_).string()); } #if GTEST_HAS_ALT_PATH_SEP_ // Tests that separators at the end of the string are normalized // regardless of their combination (e.g. "foo\" =="foo/\" == // "foo\\/"). TEST(NormalizeTest, MixAlternateSeparatorAtStringEnd) { EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo/").string()); EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo" GTEST_PATH_SEP_ "/").string()); EXPECT_EQ("foo" GTEST_PATH_SEP_, FilePath("foo//" GTEST_PATH_SEP_).string()); } #endif TEST(AssignmentOperatorTest, DefaultAssignedToNonDefault) { FilePath default_path; FilePath non_default_path("path"); non_default_path = default_path; EXPECT_EQ("", non_default_path.string()); EXPECT_EQ("", default_path.string()); // RHS var is unchanged. } TEST(AssignmentOperatorTest, NonDefaultAssignedToDefault) { FilePath non_default_path("path"); FilePath default_path; default_path = non_default_path; EXPECT_EQ("path", default_path.string()); EXPECT_EQ("path", non_default_path.string()); // RHS var is unchanged. 
} TEST(AssignmentOperatorTest, ConstAssignedToNonConst) { const FilePath const_default_path("const_path"); FilePath non_default_path("path"); non_default_path = const_default_path; EXPECT_EQ("const_path", non_default_path.string()); } class DirectoryCreationTest : public Test { protected: virtual void SetUp() { testdata_path_.Set(FilePath( TempDir() + GetCurrentExecutableName().string() + "_directory_creation" GTEST_PATH_SEP_ "test" GTEST_PATH_SEP_)); testdata_file_.Set(testdata_path_.RemoveTrailingPathSeparator()); unique_file0_.Set(FilePath::MakeFileName(testdata_path_, FilePath("unique"), 0, "txt")); unique_file1_.Set(FilePath::MakeFileName(testdata_path_, FilePath("unique"), 1, "txt")); remove(testdata_file_.c_str()); remove(unique_file0_.c_str()); remove(unique_file1_.c_str()); posix::RmDir(testdata_path_.c_str()); } virtual void TearDown() { remove(testdata_file_.c_str()); remove(unique_file0_.c_str()); remove(unique_file1_.c_str()); posix::RmDir(testdata_path_.c_str()); } std::string TempDir() const { #if GTEST_OS_WINDOWS_MOBILE return "\\temp\\"; #elif GTEST_OS_WINDOWS const char* temp_dir = posix::GetEnv("TEMP"); if (temp_dir == NULL || temp_dir[0] == '\0') return "\\temp\\"; else if (temp_dir[strlen(temp_dir) - 1] == '\\') return temp_dir; else return std::string(temp_dir) + "\\"; #elif GTEST_OS_LINUX_ANDROID return "/sdcard/"; #else return "/tmp/"; #endif // GTEST_OS_WINDOWS_MOBILE } void CreateTextFile(const char* filename) { FILE* f = posix::FOpen(filename, "w"); fprintf(f, "text\n"); fclose(f); } // Strings representing a directory and a file, with identical paths // except for the trailing separator character that distinquishes // a directory named 'test' from a file named 'test'. Example names: FilePath testdata_path_; // "/tmp/directory_creation/test/" FilePath testdata_file_; // "/tmp/directory_creation/test" FilePath unique_file0_; // "/tmp/directory_creation/test/unique.txt" FilePath unique_file1_; // "/tmp/directory_creation/test/unique_1.txt" }; TEST_F(DirectoryCreationTest, CreateDirectoriesRecursively) { EXPECT_FALSE(testdata_path_.DirectoryExists()) << testdata_path_.string(); EXPECT_TRUE(testdata_path_.CreateDirectoriesRecursively()); EXPECT_TRUE(testdata_path_.DirectoryExists()); } TEST_F(DirectoryCreationTest, CreateDirectoriesForAlreadyExistingPath) { EXPECT_FALSE(testdata_path_.DirectoryExists()) << testdata_path_.string(); EXPECT_TRUE(testdata_path_.CreateDirectoriesRecursively()); // Call 'create' again... should still succeed. EXPECT_TRUE(testdata_path_.CreateDirectoriesRecursively()); } TEST_F(DirectoryCreationTest, CreateDirectoriesAndUniqueFilename) { FilePath file_path(FilePath::GenerateUniqueFileName(testdata_path_, FilePath("unique"), "txt")); EXPECT_EQ(unique_file0_.string(), file_path.string()); EXPECT_FALSE(file_path.FileOrDirectoryExists()); // file not there testdata_path_.CreateDirectoriesRecursively(); EXPECT_FALSE(file_path.FileOrDirectoryExists()); // file still not there CreateTextFile(file_path.c_str()); EXPECT_TRUE(file_path.FileOrDirectoryExists()); FilePath file_path2(FilePath::GenerateUniqueFileName(testdata_path_, FilePath("unique"), "txt")); EXPECT_EQ(unique_file1_.string(), file_path2.string()); EXPECT_FALSE(file_path2.FileOrDirectoryExists()); // file not there CreateTextFile(file_path2.c_str()); EXPECT_TRUE(file_path2.FileOrDirectoryExists()); } TEST_F(DirectoryCreationTest, CreateDirectoriesFail) { // force a failure by putting a file where we will try to create a directory. 
CreateTextFile(testdata_file_.c_str()); EXPECT_TRUE(testdata_file_.FileOrDirectoryExists()); EXPECT_FALSE(testdata_file_.DirectoryExists()); EXPECT_FALSE(testdata_file_.CreateDirectoriesRecursively()); } TEST(NoDirectoryCreationTest, CreateNoDirectoriesForDefaultXmlFile) { const FilePath test_detail_xml("test_detail.xml"); EXPECT_FALSE(test_detail_xml.CreateDirectoriesRecursively()); } TEST(FilePathTest, DefaultConstructor) { FilePath fp; EXPECT_EQ("", fp.string()); } TEST(FilePathTest, CharAndCopyConstructors) { const FilePath fp("spicy"); EXPECT_EQ("spicy", fp.string()); const FilePath fp_copy(fp); EXPECT_EQ("spicy", fp_copy.string()); } TEST(FilePathTest, StringConstructor) { const FilePath fp(std::string("cider")); EXPECT_EQ("cider", fp.string()); } TEST(FilePathTest, Set) { const FilePath apple("apple"); FilePath mac("mac"); mac.Set(apple); // Implement Set() since overloading operator= is forbidden. EXPECT_EQ("apple", mac.string()); EXPECT_EQ("apple", apple.string()); } TEST(FilePathTest, ToString) { const FilePath file("drink"); EXPECT_EQ("drink", file.string()); } TEST(FilePathTest, RemoveExtension) { EXPECT_EQ("app", FilePath("app.cc").RemoveExtension("cc").string()); EXPECT_EQ("app", FilePath("app.exe").RemoveExtension("exe").string()); EXPECT_EQ("APP", FilePath("APP.EXE").RemoveExtension("exe").string()); } TEST(FilePathTest, RemoveExtensionWhenThereIsNoExtension) { EXPECT_EQ("app", FilePath("app").RemoveExtension("exe").string()); } TEST(FilePathTest, IsDirectory) { EXPECT_FALSE(FilePath("cola").IsDirectory()); EXPECT_TRUE(FilePath("koala" GTEST_PATH_SEP_).IsDirectory()); #if GTEST_HAS_ALT_PATH_SEP_ EXPECT_TRUE(FilePath("koala/").IsDirectory()); #endif } TEST(FilePathTest, IsAbsolutePath) { EXPECT_FALSE(FilePath("is" GTEST_PATH_SEP_ "relative").IsAbsolutePath()); EXPECT_FALSE(FilePath("").IsAbsolutePath()); #if GTEST_OS_WINDOWS EXPECT_TRUE(FilePath("c:\\" GTEST_PATH_SEP_ "is_not" GTEST_PATH_SEP_ "relative").IsAbsolutePath()); EXPECT_FALSE(FilePath("c:foo" GTEST_PATH_SEP_ "bar").IsAbsolutePath()); EXPECT_TRUE(FilePath("c:/" GTEST_PATH_SEP_ "is_not" GTEST_PATH_SEP_ "relative").IsAbsolutePath()); #else EXPECT_TRUE(FilePath(GTEST_PATH_SEP_ "is_not" GTEST_PATH_SEP_ "relative") .IsAbsolutePath()); #endif // GTEST_OS_WINDOWS } TEST(FilePathTest, IsRootDirectory) { #if GTEST_OS_WINDOWS EXPECT_TRUE(FilePath("a:\\").IsRootDirectory()); EXPECT_TRUE(FilePath("Z:/").IsRootDirectory()); EXPECT_TRUE(FilePath("e://").IsRootDirectory()); EXPECT_FALSE(FilePath("").IsRootDirectory()); EXPECT_FALSE(FilePath("b:").IsRootDirectory()); EXPECT_FALSE(FilePath("b:a").IsRootDirectory()); EXPECT_FALSE(FilePath("8:/").IsRootDirectory()); EXPECT_FALSE(FilePath("c|/").IsRootDirectory()); #else EXPECT_TRUE(FilePath("/").IsRootDirectory()); EXPECT_TRUE(FilePath("//").IsRootDirectory()); EXPECT_FALSE(FilePath("").IsRootDirectory()); EXPECT_FALSE(FilePath("\\").IsRootDirectory()); EXPECT_FALSE(FilePath("/x").IsRootDirectory()); #endif } } // namespace } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-linked_ptr_test.cc000066400000000000000000000100351346026536000240760ustar00rootroot00000000000000// Copyright 2003, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. 
// * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: Dan Egnor (egnor@google.com) // Ported to Windows: Vadim Berman (vadimb@google.com) #include "gtest/internal/gtest-linked_ptr.h" #include #include "gtest/gtest.h" namespace { using testing::Message; using testing::internal::linked_ptr; int num; Message* history = NULL; // Class which tracks allocation/deallocation class A { public: A(): mynum(num++) { *history << "A" << mynum << " ctor\n"; } virtual ~A() { *history << "A" << mynum << " dtor\n"; } virtual void Use() { *history << "A" << mynum << " use\n"; } protected: int mynum; }; // Subclass class B : public A { public: B() { *history << "B" << mynum << " ctor\n"; } ~B() { *history << "B" << mynum << " dtor\n"; } virtual void Use() { *history << "B" << mynum << " use\n"; } }; class LinkedPtrTest : public testing::Test { public: LinkedPtrTest() { num = 0; history = new Message; } virtual ~LinkedPtrTest() { delete history; history = NULL; } }; TEST_F(LinkedPtrTest, GeneralTest) { { linked_ptr a0, a1, a2; // Use explicit function call notation here to suppress self-assign warning. a0.operator=(a0); a1 = a2; ASSERT_EQ(a0.get(), static_cast(NULL)); ASSERT_EQ(a1.get(), static_cast(NULL)); ASSERT_EQ(a2.get(), static_cast(NULL)); ASSERT_TRUE(a0 == NULL); ASSERT_TRUE(a1 == NULL); ASSERT_TRUE(a2 == NULL); { linked_ptr a3(new A); a0 = a3; ASSERT_TRUE(a0 == a3); ASSERT_TRUE(a0 != NULL); ASSERT_TRUE(a0.get() == a3); ASSERT_TRUE(a0 == a3.get()); linked_ptr a4(a0); a1 = a4; linked_ptr a5(new A); ASSERT_TRUE(a5.get() != a3); ASSERT_TRUE(a5 != a3.get()); a2 = a5; linked_ptr b0(new B); linked_ptr a6(b0); ASSERT_TRUE(b0 == a6); ASSERT_TRUE(a6 == b0); ASSERT_TRUE(b0 != NULL); a5 = b0; a5 = b0; a3->Use(); a4->Use(); a5->Use(); a6->Use(); b0->Use(); (*b0).Use(); b0.get()->Use(); } a0->Use(); a1->Use(); a2->Use(); a1 = a2; a2.reset(new A); a0.reset(); linked_ptr a7; } ASSERT_STREQ( "A0 ctor\n" "A1 ctor\n" "A2 ctor\n" "B2 ctor\n" "A0 use\n" "A0 use\n" "B2 use\n" "B2 use\n" "B2 use\n" "B2 use\n" "B2 use\n" "B2 dtor\n" "A2 dtor\n" "A0 use\n" "A0 use\n" "A1 use\n" "A3 ctor\n" "A0 dtor\n" "A3 dtor\n" "A1 dtor\n", history->GetString().c_str()); } } // Unnamed namespace ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-listener_test.cc000066400000000000000000000230671346026536000236010ustar00rootroot00000000000000// Copyright 2009 Google Inc. All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // // The Google C++ Testing Framework (Google Test) // // This file verifies Google Test event listeners receive events at the // right times. #include "gtest/gtest.h" #include using ::testing::AddGlobalTestEnvironment; using ::testing::Environment; using ::testing::InitGoogleTest; using ::testing::Test; using ::testing::TestCase; using ::testing::TestEventListener; using ::testing::TestInfo; using ::testing::TestPartResult; using ::testing::UnitTest; // Used by tests to register their events. 
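// g_events points at a vector owned by main(); each listener callback and
// fixture hook appends a short descriptive string so that main() can later
// verify the exact order in which the events fired.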
std::vector* g_events = NULL; namespace testing { namespace internal { class EventRecordingListener : public TestEventListener { public: explicit EventRecordingListener(const char* name) : name_(name) {} protected: virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnTestProgramStart")); } virtual void OnTestIterationStart(const UnitTest& /*unit_test*/, int iteration) { Message message; message << GetFullMethodName("OnTestIterationStart") << "(" << iteration << ")"; g_events->push_back(message.GetString()); } virtual void OnEnvironmentsSetUpStart(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnEnvironmentsSetUpStart")); } virtual void OnEnvironmentsSetUpEnd(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnEnvironmentsSetUpEnd")); } virtual void OnTestCaseStart(const TestCase& /*test_case*/) { g_events->push_back(GetFullMethodName("OnTestCaseStart")); } virtual void OnTestStart(const TestInfo& /*test_info*/) { g_events->push_back(GetFullMethodName("OnTestStart")); } virtual void OnTestPartResult(const TestPartResult& /*test_part_result*/) { g_events->push_back(GetFullMethodName("OnTestPartResult")); } virtual void OnTestEnd(const TestInfo& /*test_info*/) { g_events->push_back(GetFullMethodName("OnTestEnd")); } virtual void OnTestCaseEnd(const TestCase& /*test_case*/) { g_events->push_back(GetFullMethodName("OnTestCaseEnd")); } virtual void OnEnvironmentsTearDownStart(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnEnvironmentsTearDownStart")); } virtual void OnEnvironmentsTearDownEnd(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnEnvironmentsTearDownEnd")); } virtual void OnTestIterationEnd(const UnitTest& /*unit_test*/, int iteration) { Message message; message << GetFullMethodName("OnTestIterationEnd") << "(" << iteration << ")"; g_events->push_back(message.GetString()); } virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) { g_events->push_back(GetFullMethodName("OnTestProgramEnd")); } private: std::string GetFullMethodName(const char* name) { return name_ + "." + name; } std::string name_; }; class EnvironmentInvocationCatcher : public Environment { protected: virtual void SetUp() { g_events->push_back("Environment::SetUp"); } virtual void TearDown() { g_events->push_back("Environment::TearDown"); } }; class ListenerTest : public Test { protected: static void SetUpTestCase() { g_events->push_back("ListenerTest::SetUpTestCase"); } static void TearDownTestCase() { g_events->push_back("ListenerTest::TearDownTestCase"); } virtual void SetUp() { g_events->push_back("ListenerTest::SetUp"); } virtual void TearDown() { g_events->push_back("ListenerTest::TearDown"); } }; TEST_F(ListenerTest, DoesFoo) { // Test execution order within a test case is not guaranteed so we are not // recording the test name. g_events->push_back("ListenerTest::* Test Body"); SUCCEED(); // Triggers OnTestPartResult. } TEST_F(ListenerTest, DoesBar) { g_events->push_back("ListenerTest::* Test Body"); SUCCEED(); // Triggers OnTestPartResult. } } // namespace internal } // namespace testing using ::testing::internal::EnvironmentInvocationCatcher; using ::testing::internal::EventRecordingListener; void VerifyResults(const std::vector& data, const char* const* expected_data, int expected_data_size) { const int actual_size = data.size(); // If the following assertion fails, a new entry will be appended to // data. Hence we save data.size() first. 
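  // (An ad hoc failure here would itself be reported to the recording
  // listeners via OnTestPartResult and grow the vector being inspected.)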
EXPECT_EQ(expected_data_size, actual_size); // Compares the common prefix. const int shorter_size = expected_data_size <= actual_size ? expected_data_size : actual_size; int i = 0; for (; i < shorter_size; ++i) { ASSERT_STREQ(expected_data[i], data[i].c_str()) << "at position " << i; } // Prints extra elements in the actual data. for (; i < actual_size; ++i) { printf(" Actual event #%d: %s\n", i, data[i].c_str()); } } int main(int argc, char **argv) { std::vector events; g_events = &events; InitGoogleTest(&argc, argv); UnitTest::GetInstance()->listeners().Append( new EventRecordingListener("1st")); UnitTest::GetInstance()->listeners().Append( new EventRecordingListener("2nd")); AddGlobalTestEnvironment(new EnvironmentInvocationCatcher); GTEST_CHECK_(events.size() == 0) << "AddGlobalTestEnvironment should not generate any events itself."; ::testing::GTEST_FLAG(repeat) = 2; int ret_val = RUN_ALL_TESTS(); const char* const expected_events[] = { "1st.OnTestProgramStart", "2nd.OnTestProgramStart", "1st.OnTestIterationStart(0)", "2nd.OnTestIterationStart(0)", "1st.OnEnvironmentsSetUpStart", "2nd.OnEnvironmentsSetUpStart", "Environment::SetUp", "2nd.OnEnvironmentsSetUpEnd", "1st.OnEnvironmentsSetUpEnd", "1st.OnTestCaseStart", "2nd.OnTestCaseStart", "ListenerTest::SetUpTestCase", "1st.OnTestStart", "2nd.OnTestStart", "ListenerTest::SetUp", "ListenerTest::* Test Body", "1st.OnTestPartResult", "2nd.OnTestPartResult", "ListenerTest::TearDown", "2nd.OnTestEnd", "1st.OnTestEnd", "1st.OnTestStart", "2nd.OnTestStart", "ListenerTest::SetUp", "ListenerTest::* Test Body", "1st.OnTestPartResult", "2nd.OnTestPartResult", "ListenerTest::TearDown", "2nd.OnTestEnd", "1st.OnTestEnd", "ListenerTest::TearDownTestCase", "2nd.OnTestCaseEnd", "1st.OnTestCaseEnd", "1st.OnEnvironmentsTearDownStart", "2nd.OnEnvironmentsTearDownStart", "Environment::TearDown", "2nd.OnEnvironmentsTearDownEnd", "1st.OnEnvironmentsTearDownEnd", "2nd.OnTestIterationEnd(0)", "1st.OnTestIterationEnd(0)", "1st.OnTestIterationStart(1)", "2nd.OnTestIterationStart(1)", "1st.OnEnvironmentsSetUpStart", "2nd.OnEnvironmentsSetUpStart", "Environment::SetUp", "2nd.OnEnvironmentsSetUpEnd", "1st.OnEnvironmentsSetUpEnd", "1st.OnTestCaseStart", "2nd.OnTestCaseStart", "ListenerTest::SetUpTestCase", "1st.OnTestStart", "2nd.OnTestStart", "ListenerTest::SetUp", "ListenerTest::* Test Body", "1st.OnTestPartResult", "2nd.OnTestPartResult", "ListenerTest::TearDown", "2nd.OnTestEnd", "1st.OnTestEnd", "1st.OnTestStart", "2nd.OnTestStart", "ListenerTest::SetUp", "ListenerTest::* Test Body", "1st.OnTestPartResult", "2nd.OnTestPartResult", "ListenerTest::TearDown", "2nd.OnTestEnd", "1st.OnTestEnd", "ListenerTest::TearDownTestCase", "2nd.OnTestCaseEnd", "1st.OnTestCaseEnd", "1st.OnEnvironmentsTearDownStart", "2nd.OnEnvironmentsTearDownStart", "Environment::TearDown", "2nd.OnEnvironmentsTearDownEnd", "1st.OnEnvironmentsTearDownEnd", "2nd.OnTestIterationEnd(1)", "1st.OnTestIterationEnd(1)", "2nd.OnTestProgramEnd", "1st.OnTestProgramEnd" }; VerifyResults(events, expected_events, sizeof(expected_events)/sizeof(expected_events[0])); // We need to check manually for ad hoc test failures that happen after // RUN_ALL_TESTS finishes. if (UnitTest::GetInstance()->Failed()) ret_val = 1; return ret_val; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-message_test.cc000066400000000000000000000122661346026536000233770ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Tests for the Message class. #include "gtest/gtest-message.h" #include "gtest/gtest.h" namespace { using ::testing::Message; // Tests the testing::Message class // Tests the default constructor. TEST(MessageTest, DefaultConstructor) { const Message msg; EXPECT_EQ("", msg.GetString()); } // Tests the copy constructor. TEST(MessageTest, CopyConstructor) { const Message msg1("Hello"); const Message msg2(msg1); EXPECT_EQ("Hello", msg2.GetString()); } // Tests constructing a Message from a C-string. TEST(MessageTest, ConstructsFromCString) { Message msg("Hello"); EXPECT_EQ("Hello", msg.GetString()); } // Tests streaming a float. TEST(MessageTest, StreamsFloat) { const std::string s = (Message() << 1.23456F << " " << 2.34567F).GetString(); // Both numbers should be printed with enough precision. EXPECT_PRED_FORMAT2(testing::IsSubstring, "1.234560", s.c_str()); EXPECT_PRED_FORMAT2(testing::IsSubstring, " 2.345669", s.c_str()); } // Tests streaming a double. TEST(MessageTest, StreamsDouble) { const std::string s = (Message() << 1260570880.4555497 << " " << 1260572265.1954534).GetString(); // Both numbers should be printed with enough precision. EXPECT_PRED_FORMAT2(testing::IsSubstring, "1260570880.45", s.c_str()); EXPECT_PRED_FORMAT2(testing::IsSubstring, " 1260572265.19", s.c_str()); } // Tests streaming a non-char pointer. TEST(MessageTest, StreamsPointer) { int n = 0; int* p = &n; EXPECT_NE("(null)", (Message() << p).GetString()); } // Tests streaming a NULL non-char pointer. TEST(MessageTest, StreamsNullPointer) { int* p = NULL; EXPECT_EQ("(null)", (Message() << p).GetString()); } // Tests streaming a C string. TEST(MessageTest, StreamsCString) { EXPECT_EQ("Foo", (Message() << "Foo").GetString()); } // Tests streaming a NULL C string. TEST(MessageTest, StreamsNullCString) { char* p = NULL; EXPECT_EQ("(null)", (Message() << p).GetString()); } // Tests streaming std::string. 
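// A Message accumulates everything streamed into it and hands the result
// back as a std::string via GetString(), so (Message() << str).GetString()
// should round-trip the value unchanged.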
TEST(MessageTest, StreamsString) { const ::std::string str("Hello"); EXPECT_EQ("Hello", (Message() << str).GetString()); } // Tests that we can output strings containing embedded NULs. TEST(MessageTest, StreamsStringWithEmbeddedNUL) { const char char_array_with_nul[] = "Here's a NUL\0 and some more string"; const ::std::string string_with_nul(char_array_with_nul, sizeof(char_array_with_nul) - 1); EXPECT_EQ("Here's a NUL\\0 and some more string", (Message() << string_with_nul).GetString()); } // Tests streaming a NUL char. TEST(MessageTest, StreamsNULChar) { EXPECT_EQ("\\0", (Message() << '\0').GetString()); } // Tests streaming int. TEST(MessageTest, StreamsInt) { EXPECT_EQ("123", (Message() << 123).GetString()); } // Tests that basic IO manipulators (endl, ends, and flush) can be // streamed to Message. TEST(MessageTest, StreamsBasicIoManip) { EXPECT_EQ("Line 1.\nA NUL char \\0 in line 2.", (Message() << "Line 1." << std::endl << "A NUL char " << std::ends << std::flush << " in line 2.").GetString()); } // Tests Message::GetString() TEST(MessageTest, GetString) { Message msg; msg << 1 << " lamb"; EXPECT_EQ("1 lamb", msg.GetString()); } // Tests streaming a Message object to an ostream. TEST(MessageTest, StreamsToOStream) { Message msg("Hello"); ::std::stringstream ss; ss << msg; EXPECT_EQ("Hello", testing::internal::StringStreamToString(&ss)); } // Tests that a Message object doesn't take up too much stack space. TEST(MessageTest, DoesNotTakeUpMuchStackSpace) { EXPECT_LE(sizeof(Message), 16U); } } // namespace ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-options_test.cc000066400000000000000000000174021346026536000234430ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: keith.ray@gmail.com (Keith Ray) // // Google Test UnitTestOptions tests // // This file tests classes and functions used internally by // Google Test. They are subject to change without notice. // // This file is #included from gtest.cc, to avoid changing build or // make-files on Windows and other platforms. 
Do not #include this file // anywhere else! #include "gtest/gtest.h" #if GTEST_OS_WINDOWS_MOBILE # include #elif GTEST_OS_WINDOWS # include #endif // GTEST_OS_WINDOWS_MOBILE // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { namespace internal { namespace { // Turns the given relative path into an absolute path. FilePath GetAbsolutePathOf(const FilePath& relative_path) { return FilePath::ConcatPaths(FilePath::GetCurrentDir(), relative_path); } // Testing UnitTestOptions::GetOutputFormat/GetOutputFile. TEST(XmlOutputTest, GetOutputFormatDefault) { GTEST_FLAG(output) = ""; EXPECT_STREQ("", UnitTestOptions::GetOutputFormat().c_str()); } TEST(XmlOutputTest, GetOutputFormat) { GTEST_FLAG(output) = "xml:filename"; EXPECT_STREQ("xml", UnitTestOptions::GetOutputFormat().c_str()); } TEST(XmlOutputTest, GetOutputFileDefault) { GTEST_FLAG(output) = ""; EXPECT_EQ(GetAbsolutePathOf(FilePath("test_detail.xml")).string(), UnitTestOptions::GetAbsolutePathToOutputFile()); } TEST(XmlOutputTest, GetOutputFileSingleFile) { GTEST_FLAG(output) = "xml:filename.abc"; EXPECT_EQ(GetAbsolutePathOf(FilePath("filename.abc")).string(), UnitTestOptions::GetAbsolutePathToOutputFile()); } TEST(XmlOutputTest, GetOutputFileFromDirectoryPath) { GTEST_FLAG(output) = "xml:path" GTEST_PATH_SEP_; const std::string expected_output_file = GetAbsolutePathOf( FilePath(std::string("path") + GTEST_PATH_SEP_ + GetCurrentExecutableName().string() + ".xml")).string(); const std::string& output_file = UnitTestOptions::GetAbsolutePathToOutputFile(); #if GTEST_OS_WINDOWS EXPECT_STRCASEEQ(expected_output_file.c_str(), output_file.c_str()); #else EXPECT_EQ(expected_output_file, output_file.c_str()); #endif } TEST(OutputFileHelpersTest, GetCurrentExecutableName) { const std::string exe_str = GetCurrentExecutableName().string(); #if GTEST_OS_WINDOWS const bool success = _strcmpi("gtest-options_test", exe_str.c_str()) == 0 || _strcmpi("gtest-options-ex_test", exe_str.c_str()) == 0 || _strcmpi("gtest_all_test", exe_str.c_str()) == 0 || _strcmpi("gtest_dll_test", exe_str.c_str()) == 0; #else // TODO(wan@google.com): remove the hard-coded "lt-" prefix when // Chandler Carruth's libtool replacement is ready. const bool success = exe_str == "gtest-options_test" || exe_str == "gtest_all_test" || exe_str == "lt-gtest_all_test" || exe_str == "gtest_dll_test"; #endif // GTEST_OS_WINDOWS if (!success) FAIL() << "GetCurrentExecutableName() returns " << exe_str; } class XmlOutputChangeDirTest : public Test { protected: virtual void SetUp() { original_working_dir_ = FilePath::GetCurrentDir(); posix::ChDir(".."); // This will make the test fail if run from the root directory. 
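    // SetUp() remembers the start-up working directory and then leaves it,
    // so the tests below can verify that relative --gtest_output paths are
    // resolved against the original directory rather than the current one.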
EXPECT_NE(original_working_dir_.string(), FilePath::GetCurrentDir().string()); } virtual void TearDown() { posix::ChDir(original_working_dir_.string().c_str()); } FilePath original_working_dir_; }; TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithDefault) { GTEST_FLAG(output) = ""; EXPECT_EQ(FilePath::ConcatPaths(original_working_dir_, FilePath("test_detail.xml")).string(), UnitTestOptions::GetAbsolutePathToOutputFile()); } TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithDefaultXML) { GTEST_FLAG(output) = "xml"; EXPECT_EQ(FilePath::ConcatPaths(original_working_dir_, FilePath("test_detail.xml")).string(), UnitTestOptions::GetAbsolutePathToOutputFile()); } TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithRelativeFile) { GTEST_FLAG(output) = "xml:filename.abc"; EXPECT_EQ(FilePath::ConcatPaths(original_working_dir_, FilePath("filename.abc")).string(), UnitTestOptions::GetAbsolutePathToOutputFile()); } TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithRelativePath) { GTEST_FLAG(output) = "xml:path" GTEST_PATH_SEP_; const std::string expected_output_file = FilePath::ConcatPaths( original_working_dir_, FilePath(std::string("path") + GTEST_PATH_SEP_ + GetCurrentExecutableName().string() + ".xml")).string(); const std::string& output_file = UnitTestOptions::GetAbsolutePathToOutputFile(); #if GTEST_OS_WINDOWS EXPECT_STRCASEEQ(expected_output_file.c_str(), output_file.c_str()); #else EXPECT_EQ(expected_output_file, output_file.c_str()); #endif } TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithAbsoluteFile) { #if GTEST_OS_WINDOWS GTEST_FLAG(output) = "xml:c:\\tmp\\filename.abc"; EXPECT_EQ(FilePath("c:\\tmp\\filename.abc").string(), UnitTestOptions::GetAbsolutePathToOutputFile()); #else GTEST_FLAG(output) ="xml:/tmp/filename.abc"; EXPECT_EQ(FilePath("/tmp/filename.abc").string(), UnitTestOptions::GetAbsolutePathToOutputFile()); #endif } TEST_F(XmlOutputChangeDirTest, PreserveOriginalWorkingDirWithAbsolutePath) { #if GTEST_OS_WINDOWS const std::string path = "c:\\tmp\\"; #else const std::string path = "/tmp/"; #endif GTEST_FLAG(output) = "xml:" + path; const std::string expected_output_file = path + GetCurrentExecutableName().string() + ".xml"; const std::string& output_file = UnitTestOptions::GetAbsolutePathToOutputFile(); #if GTEST_OS_WINDOWS EXPECT_STRCASEEQ(expected_output_file.c_str(), output_file.c_str()); #else EXPECT_EQ(expected_output_file, output_file.c_str()); #endif } } // namespace } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-param-test2_test.cc000066400000000000000000000055041346026536000241070ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // // Tests for Google Test itself. This verifies that the basic constructs of // Google Test work. #include "gtest/gtest.h" #include "test/gtest-param-test_test.h" #if GTEST_HAS_PARAM_TEST using ::testing::Values; using ::testing::internal::ParamGenerator; // Tests that generators defined in a different translation unit // are functional. The test using extern_gen is defined // in gtest-param-test_test.cc. ParamGenerator extern_gen = Values(33); // Tests that a parameterized test case can be defined in one translation unit // and instantiated in another. The test is defined in gtest-param-test_test.cc // and ExternalInstantiationTest fixture class is defined in // gtest-param-test_test.h. INSTANTIATE_TEST_CASE_P(MultiplesOf33, ExternalInstantiationTest, Values(33, 66)); // Tests that a parameterized test case can be instantiated // in multiple translation units. Another instantiation is defined // in gtest-param-test_test.cc and InstantiationInMultipleTranslaionUnitsTest // fixture is defined in gtest-param-test_test.h INSTANTIATE_TEST_CASE_P(Sequence2, InstantiationInMultipleTranslaionUnitsTest, Values(42*3, 42*4, 42*5)); #endif // GTEST_HAS_PARAM_TEST ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-param-test_test.cc000066400000000000000000001011021346026536000240140ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // // Tests for Google Test itself. This file verifies that the parameter // generators objects produce correct parameter sequences and that // Google Test runtime instantiates correct tests from those sequences. #include "gtest/gtest.h" #if GTEST_HAS_PARAM_TEST # include # include # include # include # include # include // To include gtest-internal-inl.h. # define GTEST_IMPLEMENTATION_ 1 # include "src/gtest-internal-inl.h" // for UnitTestOptions # undef GTEST_IMPLEMENTATION_ # include "test/gtest-param-test_test.h" using ::std::vector; using ::std::sort; using ::testing::AddGlobalTestEnvironment; using ::testing::Bool; using ::testing::Message; using ::testing::Range; using ::testing::TestWithParam; using ::testing::Values; using ::testing::ValuesIn; # if GTEST_HAS_COMBINE using ::testing::Combine; using ::std::tr1::get; using ::std::tr1::make_tuple; using ::std::tr1::tuple; # endif // GTEST_HAS_COMBINE using ::testing::internal::ParamGenerator; using ::testing::internal::UnitTestOptions; // Prints a value to a string. // // TODO(wan@google.com): remove PrintValue() when we move matchers and // EXPECT_THAT() from Google Mock to Google Test. At that time, we // can write EXPECT_THAT(x, Eq(y)) to compare two tuples x and y, as // EXPECT_THAT() and the matchers know how to print tuples. template ::std::string PrintValue(const T& value) { ::std::stringstream stream; stream << value; return stream.str(); } # if GTEST_HAS_COMBINE // These overloads allow printing tuples in our tests. We cannot // define an operator<< for tuples, as that definition needs to be in // the std namespace in order to be picked up by Google Test via // Argument-Dependent Lookup, yet defining anything in the std // namespace in non-STL code is undefined behavior. template ::std::string PrintValue(const tuple& value) { ::std::stringstream stream; stream << "(" << get<0>(value) << ", " << get<1>(value) << ")"; return stream.str(); } template ::std::string PrintValue(const tuple& value) { ::std::stringstream stream; stream << "(" << get<0>(value) << ", " << get<1>(value) << ", "<< get<2>(value) << ")"; return stream.str(); } template ::std::string PrintValue( const tuple& value) { ::std::stringstream stream; stream << "(" << get<0>(value) << ", " << get<1>(value) << ", "<< get<2>(value) << ", " << get<3>(value) << ", "<< get<4>(value) << ", " << get<5>(value) << ", "<< get<6>(value) << ", " << get<7>(value) << ", "<< get<8>(value) << ", " << get<9>(value) << ")"; return stream.str(); } # endif // GTEST_HAS_COMBINE // Verifies that a sequence generated by the generator and accessed // via the iterator object matches the expected one using Google Test // assertions. 
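// The expected values are taken by reference-to-array so that N, the
// expected sequence length, is deduced at compile time; the helper then
// walks the generated sequence twice, once through a copy-constructed
// iterator and once through an assigned one.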
template void VerifyGenerator(const ParamGenerator& generator, const T (&expected_values)[N]) { typename ParamGenerator::iterator it = generator.begin(); for (size_t i = 0; i < N; ++i) { ASSERT_FALSE(it == generator.end()) << "At element " << i << " when accessing via an iterator " << "created with the copy constructor.\n"; // We cannot use EXPECT_EQ() here as the values may be tuples, // which don't support <<. EXPECT_TRUE(expected_values[i] == *it) << "where i is " << i << ", expected_values[i] is " << PrintValue(expected_values[i]) << ", *it is " << PrintValue(*it) << ", and 'it' is an iterator created with the copy constructor.\n"; it++; } EXPECT_TRUE(it == generator.end()) << "At the presumed end of sequence when accessing via an iterator " << "created with the copy constructor.\n"; // Test the iterator assignment. The following lines verify that // the sequence accessed via an iterator initialized via the // assignment operator (as opposed to a copy constructor) matches // just the same. it = generator.begin(); for (size_t i = 0; i < N; ++i) { ASSERT_FALSE(it == generator.end()) << "At element " << i << " when accessing via an iterator " << "created with the assignment operator.\n"; EXPECT_TRUE(expected_values[i] == *it) << "where i is " << i << ", expected_values[i] is " << PrintValue(expected_values[i]) << ", *it is " << PrintValue(*it) << ", and 'it' is an iterator created with the copy constructor.\n"; it++; } EXPECT_TRUE(it == generator.end()) << "At the presumed end of sequence when accessing via an iterator " << "created with the assignment operator.\n"; } template void VerifyGeneratorIsEmpty(const ParamGenerator& generator) { typename ParamGenerator::iterator it = generator.begin(); EXPECT_TRUE(it == generator.end()); it = generator.begin(); EXPECT_TRUE(it == generator.end()); } // Generator tests. They test that each of the provided generator functions // generates an expected sequence of values. The general test pattern // instantiates a generator using one of the generator functions, // checks the sequence produced by the generator using its iterator API, // and then resets the iterator back to the beginning of the sequence // and checks the sequence again. // Tests that iterators produced by generator functions conform to the // ForwardIterator concept. TEST(IteratorTest, ParamIteratorConformsToForwardIteratorConcept) { const ParamGenerator gen = Range(0, 10); ParamGenerator::iterator it = gen.begin(); // Verifies that iterator initialization works as expected. ParamGenerator::iterator it2 = it; EXPECT_TRUE(*it == *it2) << "Initialized iterators must point to the " << "element same as its source points to"; // Verifies that iterator assignment works as expected. it++; EXPECT_FALSE(*it == *it2); it2 = it; EXPECT_TRUE(*it == *it2) << "Assigned iterators must point to the " << "element same as its source points to"; // Verifies that prefix operator++() returns *this. EXPECT_EQ(&it, &(++it)) << "Result of the prefix operator++ must be " << "refer to the original object"; // Verifies that the result of the postfix operator++ points to the value // pointed to by the original iterator. int original_value = *it; // Have to compute it outside of macro call to be // unaffected by the parameter evaluation order. EXPECT_EQ(original_value, *(it++)); // Verifies that prefix and postfix operator++() advance an iterator // all the same. it2 = it; it++; ++it2; EXPECT_TRUE(*it == *it2); } // Tests that Range() generates the expected sequence. 
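// Range(begin, end) yields the half-open interval [begin, end) with step 1,
// e.g. Range(0, 3) produces 0, 1, 2; an optional third argument sets the
// step, e.g. Range(0, 9, 3) produces 0, 3, 6.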
TEST(RangeTest, IntRangeWithDefaultStep) { const ParamGenerator gen = Range(0, 3); const int expected_values[] = {0, 1, 2}; VerifyGenerator(gen, expected_values); } // Edge case. Tests that Range() generates the single element sequence // as expected when provided with range limits that are equal. TEST(RangeTest, IntRangeSingleValue) { const ParamGenerator gen = Range(0, 1); const int expected_values[] = {0}; VerifyGenerator(gen, expected_values); } // Edge case. Tests that Range() with generates empty sequence when // supplied with an empty range. TEST(RangeTest, IntRangeEmpty) { const ParamGenerator gen = Range(0, 0); VerifyGeneratorIsEmpty(gen); } // Tests that Range() with custom step (greater then one) generates // the expected sequence. TEST(RangeTest, IntRangeWithCustomStep) { const ParamGenerator gen = Range(0, 9, 3); const int expected_values[] = {0, 3, 6}; VerifyGenerator(gen, expected_values); } // Tests that Range() with custom step (greater then one) generates // the expected sequence when the last element does not fall on the // upper range limit. Sequences generated by Range() must not have // elements beyond the range limits. TEST(RangeTest, IntRangeWithCustomStepOverUpperBound) { const ParamGenerator gen = Range(0, 4, 3); const int expected_values[] = {0, 3}; VerifyGenerator(gen, expected_values); } // Verifies that Range works with user-defined types that define // copy constructor, operator=(), operator+(), and operator<(). class DogAdder { public: explicit DogAdder(const char* a_value) : value_(a_value) {} DogAdder(const DogAdder& other) : value_(other.value_.c_str()) {} DogAdder operator=(const DogAdder& other) { if (this != &other) value_ = other.value_; return *this; } DogAdder operator+(const DogAdder& other) const { Message msg; msg << value_.c_str() << other.value_.c_str(); return DogAdder(msg.GetString().c_str()); } bool operator<(const DogAdder& other) const { return value_ < other.value_; } const std::string& value() const { return value_; } private: std::string value_; }; TEST(RangeTest, WorksWithACustomType) { const ParamGenerator gen = Range(DogAdder("cat"), DogAdder("catdogdog"), DogAdder("dog")); ParamGenerator::iterator it = gen.begin(); ASSERT_FALSE(it == gen.end()); EXPECT_STREQ("cat", it->value().c_str()); ASSERT_FALSE(++it == gen.end()); EXPECT_STREQ("catdog", it->value().c_str()); EXPECT_TRUE(++it == gen.end()); } class IntWrapper { public: explicit IntWrapper(int a_value) : value_(a_value) {} IntWrapper(const IntWrapper& other) : value_(other.value_) {} IntWrapper operator=(const IntWrapper& other) { value_ = other.value_; return *this; } // operator+() adds a different type. IntWrapper operator+(int other) const { return IntWrapper(value_ + other); } bool operator<(const IntWrapper& other) const { return value_ < other.value_; } int value() const { return value_; } private: int value_; }; TEST(RangeTest, WorksWithACustomTypeWithDifferentIncrementType) { const ParamGenerator gen = Range(IntWrapper(0), IntWrapper(2)); ParamGenerator::iterator it = gen.begin(); ASSERT_FALSE(it == gen.end()); EXPECT_EQ(0, it->value()); ASSERT_FALSE(++it == gen.end()); EXPECT_EQ(1, it->value()); EXPECT_TRUE(++it == gen.end()); } // Tests that ValuesIn() with an array parameter generates // the expected sequence. TEST(ValuesInTest, ValuesInArray) { int array[] = {3, 5, 8}; const ParamGenerator gen = ValuesIn(array); VerifyGenerator(gen, array); } // Tests that ValuesIn() with a const array parameter generates // the expected sequence. 
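// ValuesIn() accepts a (possibly const) C array, an STL-style container, or
// an explicit iterator range, and simply replays the elements it is given.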
TEST(ValuesInTest, ValuesInConstArray) { const int array[] = {3, 5, 8}; const ParamGenerator gen = ValuesIn(array); VerifyGenerator(gen, array); } // Edge case. Tests that ValuesIn() with an array parameter containing a // single element generates the single element sequence. TEST(ValuesInTest, ValuesInSingleElementArray) { int array[] = {42}; const ParamGenerator gen = ValuesIn(array); VerifyGenerator(gen, array); } // Tests that ValuesIn() generates the expected sequence for an STL // container (vector). TEST(ValuesInTest, ValuesInVector) { typedef ::std::vector ContainerType; ContainerType values; values.push_back(3); values.push_back(5); values.push_back(8); const ParamGenerator gen = ValuesIn(values); const int expected_values[] = {3, 5, 8}; VerifyGenerator(gen, expected_values); } // Tests that ValuesIn() generates the expected sequence. TEST(ValuesInTest, ValuesInIteratorRange) { typedef ::std::vector ContainerType; ContainerType values; values.push_back(3); values.push_back(5); values.push_back(8); const ParamGenerator gen = ValuesIn(values.begin(), values.end()); const int expected_values[] = {3, 5, 8}; VerifyGenerator(gen, expected_values); } // Edge case. Tests that ValuesIn() provided with an iterator range specifying a // single value generates a single-element sequence. TEST(ValuesInTest, ValuesInSingleElementIteratorRange) { typedef ::std::vector ContainerType; ContainerType values; values.push_back(42); const ParamGenerator gen = ValuesIn(values.begin(), values.end()); const int expected_values[] = {42}; VerifyGenerator(gen, expected_values); } // Edge case. Tests that ValuesIn() provided with an empty iterator range // generates an empty sequence. TEST(ValuesInTest, ValuesInEmptyIteratorRange) { typedef ::std::vector ContainerType; ContainerType values; const ParamGenerator gen = ValuesIn(values.begin(), values.end()); VerifyGeneratorIsEmpty(gen); } // Tests that the Values() generates the expected sequence. TEST(ValuesTest, ValuesWorks) { const ParamGenerator gen = Values(3, 5, 8); const int expected_values[] = {3, 5, 8}; VerifyGenerator(gen, expected_values); } // Tests that Values() generates the expected sequences from elements of // different types convertible to ParamGenerator's parameter type. TEST(ValuesTest, ValuesWorksForValuesOfCompatibleTypes) { const ParamGenerator gen = Values(3, 5.0f, 8.0); const double expected_values[] = {3.0, 5.0, 8.0}; VerifyGenerator(gen, expected_values); } TEST(ValuesTest, ValuesWorksForMaxLengthList) { const ParamGenerator gen = Values( 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 410, 420, 430, 440, 450, 460, 470, 480, 490, 500); const int expected_values[] = { 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 410, 420, 430, 440, 450, 460, 470, 480, 490, 500}; VerifyGenerator(gen, expected_values); } // Edge case test. Tests that single-parameter Values() generates the sequence // with the single value. TEST(ValuesTest, ValuesWithSingleParameter) { const ParamGenerator gen = Values(42); const int expected_values[] = {42}; VerifyGenerator(gen, expected_values); } // Tests that Bool() generates sequence (false, true). 
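// Bool() behaves like Values(false, true) and is a convenient generator for
// flag-style boolean parameters.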
TEST(BoolTest, BoolWorks) { const ParamGenerator gen = Bool(); const bool expected_values[] = {false, true}; VerifyGenerator(gen, expected_values); } # if GTEST_HAS_COMBINE // Tests that Combine() with two parameters generates the expected sequence. TEST(CombineTest, CombineWithTwoParameters) { const char* foo = "foo"; const char* bar = "bar"; const ParamGenerator > gen = Combine(Values(foo, bar), Values(3, 4)); tuple expected_values[] = { make_tuple(foo, 3), make_tuple(foo, 4), make_tuple(bar, 3), make_tuple(bar, 4)}; VerifyGenerator(gen, expected_values); } // Tests that Combine() with three parameters generates the expected sequence. TEST(CombineTest, CombineWithThreeParameters) { const ParamGenerator > gen = Combine(Values(0, 1), Values(3, 4), Values(5, 6)); tuple expected_values[] = { make_tuple(0, 3, 5), make_tuple(0, 3, 6), make_tuple(0, 4, 5), make_tuple(0, 4, 6), make_tuple(1, 3, 5), make_tuple(1, 3, 6), make_tuple(1, 4, 5), make_tuple(1, 4, 6)}; VerifyGenerator(gen, expected_values); } // Tests that the Combine() with the first parameter generating a single value // sequence generates a sequence with the number of elements equal to the // number of elements in the sequence generated by the second parameter. TEST(CombineTest, CombineWithFirstParameterSingleValue) { const ParamGenerator > gen = Combine(Values(42), Values(0, 1)); tuple expected_values[] = {make_tuple(42, 0), make_tuple(42, 1)}; VerifyGenerator(gen, expected_values); } // Tests that the Combine() with the second parameter generating a single value // sequence generates a sequence with the number of elements equal to the // number of elements in the sequence generated by the first parameter. TEST(CombineTest, CombineWithSecondParameterSingleValue) { const ParamGenerator > gen = Combine(Values(0, 1), Values(42)); tuple expected_values[] = {make_tuple(0, 42), make_tuple(1, 42)}; VerifyGenerator(gen, expected_values); } // Tests that when the first parameter produces an empty sequence, // Combine() produces an empty sequence, too. TEST(CombineTest, CombineWithFirstParameterEmptyRange) { const ParamGenerator > gen = Combine(Range(0, 0), Values(0, 1)); VerifyGeneratorIsEmpty(gen); } // Tests that when the second parameter produces an empty sequence, // Combine() produces an empty sequence, too. TEST(CombineTest, CombineWithSecondParameterEmptyRange) { const ParamGenerator > gen = Combine(Values(0, 1), Range(1, 1)); VerifyGeneratorIsEmpty(gen); } // Edge case. Tests that combine works with the maximum number // of parameters supported by Google Test (currently 10). TEST(CombineTest, CombineWithMaxNumberOfParameters) { const char* foo = "foo"; const char* bar = "bar"; const ParamGenerator > gen = Combine(Values(foo, bar), Values(1), Values(2), Values(3), Values(4), Values(5), Values(6), Values(7), Values(8), Values(9)); tuple expected_values[] = {make_tuple(foo, 1, 2, 3, 4, 5, 6, 7, 8, 9), make_tuple(bar, 1, 2, 3, 4, 5, 6, 7, 8, 9)}; VerifyGenerator(gen, expected_values); } # endif // GTEST_HAS_COMBINE // Tests that an generator produces correct sequence after being // assigned from another generator. TEST(ParamGeneratorTest, AssignmentWorks) { ParamGenerator gen = Values(1, 2); const ParamGenerator gen2 = Values(3, 4); gen = gen2; const int expected_values[] = {3, 4}; VerifyGenerator(gen, expected_values); } // This test verifies that the tests are expanded and run as specified: // one test per element from the sequence produced by the generator // specified in INSTANTIATE_TEST_CASE_P. 
It also verifies that the test's // fixture constructor, SetUp(), and TearDown() have run and have been // supplied with the correct parameters. // The use of environment object allows detection of the case where no test // case functionality is run at all. In this case TestCaseTearDown will not // be able to detect missing tests, naturally. template class TestGenerationEnvironment : public ::testing::Environment { public: static TestGenerationEnvironment* Instance() { static TestGenerationEnvironment* instance = new TestGenerationEnvironment; return instance; } void FixtureConstructorExecuted() { fixture_constructor_count_++; } void SetUpExecuted() { set_up_count_++; } void TearDownExecuted() { tear_down_count_++; } void TestBodyExecuted() { test_body_count_++; } virtual void TearDown() { // If all MultipleTestGenerationTest tests have been de-selected // by the filter flag, the following checks make no sense. bool perform_check = false; for (int i = 0; i < kExpectedCalls; ++i) { Message msg; msg << "TestsExpandedAndRun/" << i; if (UnitTestOptions::FilterMatchesTest( "TestExpansionModule/MultipleTestGenerationTest", msg.GetString().c_str())) { perform_check = true; } } if (perform_check) { EXPECT_EQ(kExpectedCalls, fixture_constructor_count_) << "Fixture constructor of ParamTestGenerationTest test case " << "has not been run as expected."; EXPECT_EQ(kExpectedCalls, set_up_count_) << "Fixture SetUp method of ParamTestGenerationTest test case " << "has not been run as expected."; EXPECT_EQ(kExpectedCalls, tear_down_count_) << "Fixture TearDown method of ParamTestGenerationTest test case " << "has not been run as expected."; EXPECT_EQ(kExpectedCalls, test_body_count_) << "Test in ParamTestGenerationTest test case " << "has not been run as expected."; } } private: TestGenerationEnvironment() : fixture_constructor_count_(0), set_up_count_(0), tear_down_count_(0), test_body_count_(0) {} int fixture_constructor_count_; int set_up_count_; int tear_down_count_; int test_body_count_; GTEST_DISALLOW_COPY_AND_ASSIGN_(TestGenerationEnvironment); }; const int test_generation_params[] = {36, 42, 72}; class TestGenerationTest : public TestWithParam { public: enum { PARAMETER_COUNT = sizeof(test_generation_params)/sizeof(test_generation_params[0]) }; typedef TestGenerationEnvironment Environment; TestGenerationTest() { Environment::Instance()->FixtureConstructorExecuted(); current_parameter_ = GetParam(); } virtual void SetUp() { Environment::Instance()->SetUpExecuted(); EXPECT_EQ(current_parameter_, GetParam()); } virtual void TearDown() { Environment::Instance()->TearDownExecuted(); EXPECT_EQ(current_parameter_, GetParam()); } static void SetUpTestCase() { bool all_tests_in_test_case_selected = true; for (int i = 0; i < PARAMETER_COUNT; ++i) { Message test_name; test_name << "TestsExpandedAndRun/" << i; if ( !UnitTestOptions::FilterMatchesTest( "TestExpansionModule/MultipleTestGenerationTest", test_name.GetString())) { all_tests_in_test_case_selected = false; } } EXPECT_TRUE(all_tests_in_test_case_selected) << "When running the TestGenerationTest test case all of its tests\n" << "must be selected by the filter flag for the test case to pass.\n" << "If not all of them are enabled, we can't reliably conclude\n" << "that the correct number of tests have been generated."; collected_parameters_.clear(); } static void TearDownTestCase() { vector expected_values(test_generation_params, test_generation_params + PARAMETER_COUNT); // Test execution order is not guaranteed by Google Test, // so the order of values 
in collected_parameters_ can be // different and we have to sort to compare. sort(expected_values.begin(), expected_values.end()); sort(collected_parameters_.begin(), collected_parameters_.end()); EXPECT_TRUE(collected_parameters_ == expected_values); } protected: int current_parameter_; static vector collected_parameters_; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(TestGenerationTest); }; vector TestGenerationTest::collected_parameters_; TEST_P(TestGenerationTest, TestsExpandedAndRun) { Environment::Instance()->TestBodyExecuted(); EXPECT_EQ(current_parameter_, GetParam()); collected_parameters_.push_back(GetParam()); } INSTANTIATE_TEST_CASE_P(TestExpansionModule, TestGenerationTest, ValuesIn(test_generation_params)); // This test verifies that the element sequence (third parameter of // INSTANTIATE_TEST_CASE_P) is evaluated in InitGoogleTest() and neither at // the call site of INSTANTIATE_TEST_CASE_P nor in RUN_ALL_TESTS(). For // that, we declare param_value_ to be a static member of // GeneratorEvaluationTest and initialize it to 0. We set it to 1 in // main(), just before invocation of InitGoogleTest(). After calling // InitGoogleTest(), we set the value to 2. If the sequence is evaluated // before or after InitGoogleTest, INSTANTIATE_TEST_CASE_P will create a // test with parameter other than 1, and the test body will fail the // assertion. class GeneratorEvaluationTest : public TestWithParam { public: static int param_value() { return param_value_; } static void set_param_value(int param_value) { param_value_ = param_value; } private: static int param_value_; }; int GeneratorEvaluationTest::param_value_ = 0; TEST_P(GeneratorEvaluationTest, GeneratorsEvaluatedInMain) { EXPECT_EQ(1, GetParam()); } INSTANTIATE_TEST_CASE_P(GenEvalModule, GeneratorEvaluationTest, Values(GeneratorEvaluationTest::param_value())); // Tests that generators defined in a different translation unit are // functional. Generator extern_gen is defined in gtest-param-test_test2.cc. extern ParamGenerator extern_gen; class ExternalGeneratorTest : public TestWithParam {}; TEST_P(ExternalGeneratorTest, ExternalGenerator) { // Sequence produced by extern_gen contains only a single value // which we verify here. EXPECT_EQ(GetParam(), 33); } INSTANTIATE_TEST_CASE_P(ExternalGeneratorModule, ExternalGeneratorTest, extern_gen); // Tests that a parameterized test case can be defined in one translation // unit and instantiated in another. This test will be instantiated in // gtest-param-test_test2.cc. ExternalInstantiationTest fixture class is // defined in gtest-param-test_test.h. TEST_P(ExternalInstantiationTest, IsMultipleOf33) { EXPECT_EQ(0, GetParam() % 33); } // Tests that a parameterized test case can be instantiated with multiple // generators. class MultipleInstantiationTest : public TestWithParam {}; TEST_P(MultipleInstantiationTest, AllowsMultipleInstances) { } INSTANTIATE_TEST_CASE_P(Sequence1, MultipleInstantiationTest, Values(1, 2)); INSTANTIATE_TEST_CASE_P(Sequence2, MultipleInstantiationTest, Range(3, 5)); // Tests that a parameterized test case can be instantiated // in multiple translation units. This test will be instantiated // here and in gtest-param-test_test2.cc. // InstantiationInMultipleTranslationUnitsTest fixture class // is defined in gtest-param-test_test.h. 
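// Each INSTANTIATE_TEST_CASE_P invocation needs a distinct prefix
// ("Sequence1" here, "Sequence2" in gtest-param-test_test2.cc) so that the
// generated test names do not collide.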
TEST_P(InstantiationInMultipleTranslaionUnitsTest, IsMultipleOf42) { EXPECT_EQ(0, GetParam() % 42); } INSTANTIATE_TEST_CASE_P(Sequence1, InstantiationInMultipleTranslaionUnitsTest, Values(42, 42*2)); // Tests that each iteration of parameterized test runs in a separate test // object. class SeparateInstanceTest : public TestWithParam { public: SeparateInstanceTest() : count_(0) {} static void TearDownTestCase() { EXPECT_GE(global_count_, 2) << "If some (but not all) SeparateInstanceTest tests have been " << "filtered out this test will fail. Make sure that all " << "GeneratorEvaluationTest are selected or de-selected together " << "by the test filter."; } protected: int count_; static int global_count_; }; int SeparateInstanceTest::global_count_ = 0; TEST_P(SeparateInstanceTest, TestsRunInSeparateInstances) { EXPECT_EQ(0, count_++); global_count_++; } INSTANTIATE_TEST_CASE_P(FourElemSequence, SeparateInstanceTest, Range(1, 4)); // Tests that all instantiations of a test have named appropriately. Test // defined with TEST_P(TestCaseName, TestName) and instantiated with // INSTANTIATE_TEST_CASE_P(SequenceName, TestCaseName, generator) must be named // SequenceName/TestCaseName.TestName/i, where i is the 0-based index of the // sequence element used to instantiate the test. class NamingTest : public TestWithParam {}; TEST_P(NamingTest, TestsReportCorrectNamesAndParameters) { const ::testing::TestInfo* const test_info = ::testing::UnitTest::GetInstance()->current_test_info(); EXPECT_STREQ("ZeroToFiveSequence/NamingTest", test_info->test_case_name()); Message index_stream; index_stream << "TestsReportCorrectNamesAndParameters/" << GetParam(); EXPECT_STREQ(index_stream.GetString().c_str(), test_info->name()); EXPECT_EQ(::testing::PrintToString(GetParam()), test_info->value_param()); } INSTANTIATE_TEST_CASE_P(ZeroToFiveSequence, NamingTest, Range(0, 5)); // Class that cannot be streamed into an ostream. It needs to be copyable // (and, in case of MSVC, also assignable) in order to be a test parameter // type. Its default copy constructor and assignment operator do exactly // what we need. class Unstreamable { public: explicit Unstreamable(int value) : value_(value) {} private: int value_; }; class CommentTest : public TestWithParam {}; TEST_P(CommentTest, TestsCorrectlyReportUnstreamableParams) { const ::testing::TestInfo* const test_info = ::testing::UnitTest::GetInstance()->current_test_info(); EXPECT_EQ(::testing::PrintToString(GetParam()), test_info->value_param()); } INSTANTIATE_TEST_CASE_P(InstantiationWithComments, CommentTest, Values(Unstreamable(1))); // Verify that we can create a hierarchy of test fixtures, where the base // class fixture is not parameterized and the derived class is. In this case // ParameterizedDerivedTest inherits from NonParameterizedBaseTest. We // perform simple tests on both. 
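// The derived fixture obtains GetParam() by additionally inheriting from
// ::testing::WithParamInterface<int>; the base class remains an ordinary
// TEST_F-style fixture.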
class NonParameterizedBaseTest : public ::testing::Test { public: NonParameterizedBaseTest() : n_(17) { } protected: int n_; }; class ParameterizedDerivedTest : public NonParameterizedBaseTest, public ::testing::WithParamInterface { protected: ParameterizedDerivedTest() : count_(0) { } int count_; static int global_count_; }; int ParameterizedDerivedTest::global_count_ = 0; TEST_F(NonParameterizedBaseTest, FixtureIsInitialized) { EXPECT_EQ(17, n_); } TEST_P(ParameterizedDerivedTest, SeesSequence) { EXPECT_EQ(17, n_); EXPECT_EQ(0, count_++); EXPECT_EQ(GetParam(), global_count_++); } class ParameterizedDeathTest : public ::testing::TestWithParam { }; TEST_F(ParameterizedDeathTest, GetParamDiesFromTestF) { EXPECT_DEATH_IF_SUPPORTED(GetParam(), ".* value-parameterized test .*"); } INSTANTIATE_TEST_CASE_P(RangeZeroToFive, ParameterizedDerivedTest, Range(0, 5)); #endif // GTEST_HAS_PARAM_TEST TEST(CompileTest, CombineIsDefinedOnlyWhenGtestHasParamTestIsDefined) { #if GTEST_HAS_COMBINE && !GTEST_HAS_PARAM_TEST FAIL() << "GTEST_HAS_COMBINE is defined while GTEST_HAS_PARAM_TEST is not\n" #endif } int main(int argc, char **argv) { #if GTEST_HAS_PARAM_TEST // Used in TestGenerationTest test case. AddGlobalTestEnvironment(TestGenerationTest::Environment::Instance()); // Used in GeneratorEvaluationTest test case. Tests that the updated value // will be picked up for instantiating tests in GeneratorEvaluationTest. GeneratorEvaluationTest::set_param_value(1); #endif // GTEST_HAS_PARAM_TEST ::testing::InitGoogleTest(&argc, argv); #if GTEST_HAS_PARAM_TEST // Used in GeneratorEvaluationTest test case. Tests that value updated // here will NOT be used for instantiating tests in // GeneratorEvaluationTest. GeneratorEvaluationTest::set_param_value(2); #endif // GTEST_HAS_PARAM_TEST return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-param-test_test.h000066400000000000000000000044471346026536000236740ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Authors: vladl@google.com (Vlad Losev) // // The Google C++ Testing Framework (Google Test) // // This header file provides classes and functions used internally // for testing Google Test itself. #ifndef GTEST_TEST_GTEST_PARAM_TEST_TEST_H_ #define GTEST_TEST_GTEST_PARAM_TEST_TEST_H_ #include "gtest/gtest.h" #if GTEST_HAS_PARAM_TEST // Test fixture for testing definition and instantiation of a test // in separate translation units. class ExternalInstantiationTest : public ::testing::TestWithParam { }; // Test fixture for testing instantiation of a test in multiple // translation units. class InstantiationInMultipleTranslaionUnitsTest : public ::testing::TestWithParam { }; #endif // GTEST_HAS_PARAM_TEST #endif // GTEST_TEST_GTEST_PARAM_TEST_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-port_test.cc000066400000000000000000001142661346026536000227420ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Authors: vladl@google.com (Vlad Losev), wan@google.com (Zhanyong Wan) // // This file tests the internal cross-platform support utilities. #include "gtest/internal/gtest-port.h" #include #if GTEST_OS_MAC # include #endif // GTEST_OS_MAC #include #include // For std::pair and std::make_pair. #include #include "gtest/gtest.h" #include "gtest/gtest-spi.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. 
#define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ using std::make_pair; using std::pair; namespace testing { namespace internal { TEST(IsXDigitTest, WorksForNarrowAscii) { EXPECT_TRUE(IsXDigit('0')); EXPECT_TRUE(IsXDigit('9')); EXPECT_TRUE(IsXDigit('A')); EXPECT_TRUE(IsXDigit('F')); EXPECT_TRUE(IsXDigit('a')); EXPECT_TRUE(IsXDigit('f')); EXPECT_FALSE(IsXDigit('-')); EXPECT_FALSE(IsXDigit('g')); EXPECT_FALSE(IsXDigit('G')); } TEST(IsXDigitTest, ReturnsFalseForNarrowNonAscii) { EXPECT_FALSE(IsXDigit(static_cast(0x80))); EXPECT_FALSE(IsXDigit(static_cast('0' | 0x80))); } TEST(IsXDigitTest, WorksForWideAscii) { EXPECT_TRUE(IsXDigit(L'0')); EXPECT_TRUE(IsXDigit(L'9')); EXPECT_TRUE(IsXDigit(L'A')); EXPECT_TRUE(IsXDigit(L'F')); EXPECT_TRUE(IsXDigit(L'a')); EXPECT_TRUE(IsXDigit(L'f')); EXPECT_FALSE(IsXDigit(L'-')); EXPECT_FALSE(IsXDigit(L'g')); EXPECT_FALSE(IsXDigit(L'G')); } TEST(IsXDigitTest, ReturnsFalseForWideNonAscii) { EXPECT_FALSE(IsXDigit(static_cast(0x80))); EXPECT_FALSE(IsXDigit(static_cast(L'0' | 0x80))); EXPECT_FALSE(IsXDigit(static_cast(L'0' | 0x100))); } class Base { public: // Copy constructor and assignment operator do exactly what we need, so we // use them. Base() : member_(0) {} explicit Base(int n) : member_(n) {} virtual ~Base() {} int member() { return member_; } private: int member_; }; class Derived : public Base { public: explicit Derived(int n) : Base(n) {} }; TEST(ImplicitCastTest, ConvertsPointers) { Derived derived(0); EXPECT_TRUE(&derived == ::testing::internal::ImplicitCast_(&derived)); } TEST(ImplicitCastTest, CanUseInheritance) { Derived derived(1); Base base = ::testing::internal::ImplicitCast_(derived); EXPECT_EQ(derived.member(), base.member()); } class Castable { public: explicit Castable(bool* converted) : converted_(converted) {} operator Base() { *converted_ = true; return Base(); } private: bool* converted_; }; TEST(ImplicitCastTest, CanUseNonConstCastOperator) { bool converted = false; Castable castable(&converted); Base base = ::testing::internal::ImplicitCast_(castable); EXPECT_TRUE(converted); } class ConstCastable { public: explicit ConstCastable(bool* converted) : converted_(converted) {} operator Base() const { *converted_ = true; return Base(); } private: bool* converted_; }; TEST(ImplicitCastTest, CanUseConstCastOperatorOnConstValues) { bool converted = false; const ConstCastable const_castable(&converted); Base base = ::testing::internal::ImplicitCast_(const_castable); EXPECT_TRUE(converted); } class ConstAndNonConstCastable { public: ConstAndNonConstCastable(bool* converted, bool* const_converted) : converted_(converted), const_converted_(const_converted) {} operator Base() { *converted_ = true; return Base(); } operator Base() const { *const_converted_ = true; return Base(); } private: bool* converted_; bool* const_converted_; }; TEST(ImplicitCastTest, CanSelectBetweenConstAndNonConstCasrAppropriately) { bool converted = false; bool const_converted = false; ConstAndNonConstCastable castable(&converted, &const_converted); Base base = ::testing::internal::ImplicitCast_(castable); EXPECT_TRUE(converted); EXPECT_FALSE(const_converted); converted = false; const_converted = false; const ConstAndNonConstCastable const_castable(&converted, &const_converted); base = ::testing::internal::ImplicitCast_(const_castable); EXPECT_FALSE(converted); EXPECT_TRUE(const_converted); } class To { public: To(bool* converted) { *converted = true; } // NOLINT }; TEST(ImplicitCastTest, CanUseImplicitConstructor) { bool 
converted = false; To to = ::testing::internal::ImplicitCast_(&converted); (void)to; EXPECT_TRUE(converted); } TEST(IteratorTraitsTest, WorksForSTLContainerIterators) { StaticAssertTypeEq::const_iterator>::value_type>(); StaticAssertTypeEq::iterator>::value_type>(); } TEST(IteratorTraitsTest, WorksForPointerToNonConst) { StaticAssertTypeEq::value_type>(); StaticAssertTypeEq::value_type>(); } TEST(IteratorTraitsTest, WorksForPointerToConst) { StaticAssertTypeEq::value_type>(); StaticAssertTypeEq::value_type>(); } // Tests that the element_type typedef is available in scoped_ptr and refers // to the parameter type. TEST(ScopedPtrTest, DefinesElementType) { StaticAssertTypeEq::element_type>(); } // TODO(vladl@google.com): Implement THE REST of scoped_ptr tests. TEST(GtestCheckSyntaxTest, BehavesLikeASingleStatement) { if (AlwaysFalse()) GTEST_CHECK_(false) << "This should never be executed; " "It's a compilation test only."; if (AlwaysTrue()) GTEST_CHECK_(true); else ; // NOLINT if (AlwaysFalse()) ; // NOLINT else GTEST_CHECK_(true) << ""; } TEST(GtestCheckSyntaxTest, WorksWithSwitch) { switch (0) { case 1: break; default: GTEST_CHECK_(true); } switch (0) case 0: GTEST_CHECK_(true) << "Check failed in switch case"; } // Verifies behavior of FormatFileLocation. TEST(FormatFileLocationTest, FormatsFileLocation) { EXPECT_PRED_FORMAT2(IsSubstring, "foo.cc", FormatFileLocation("foo.cc", 42)); EXPECT_PRED_FORMAT2(IsSubstring, "42", FormatFileLocation("foo.cc", 42)); } TEST(FormatFileLocationTest, FormatsUnknownFile) { EXPECT_PRED_FORMAT2( IsSubstring, "unknown file", FormatFileLocation(NULL, 42)); EXPECT_PRED_FORMAT2(IsSubstring, "42", FormatFileLocation(NULL, 42)); } TEST(FormatFileLocationTest, FormatsUknownLine) { EXPECT_EQ("foo.cc:", FormatFileLocation("foo.cc", -1)); } TEST(FormatFileLocationTest, FormatsUknownFileAndLine) { EXPECT_EQ("unknown file:", FormatFileLocation(NULL, -1)); } // Verifies behavior of FormatCompilerIndependentFileLocation. TEST(FormatCompilerIndependentFileLocationTest, FormatsFileLocation) { EXPECT_EQ("foo.cc:42", FormatCompilerIndependentFileLocation("foo.cc", 42)); } TEST(FormatCompilerIndependentFileLocationTest, FormatsUknownFile) { EXPECT_EQ("unknown file:42", FormatCompilerIndependentFileLocation(NULL, 42)); } TEST(FormatCompilerIndependentFileLocationTest, FormatsUknownLine) { EXPECT_EQ("foo.cc", FormatCompilerIndependentFileLocation("foo.cc", -1)); } TEST(FormatCompilerIndependentFileLocationTest, FormatsUknownFileAndLine) { EXPECT_EQ("unknown file", FormatCompilerIndependentFileLocation(NULL, -1)); } #if GTEST_OS_MAC || GTEST_OS_QNX void* ThreadFunc(void* data) { pthread_mutex_t* mutex = static_cast(data); pthread_mutex_lock(mutex); pthread_mutex_unlock(mutex); return NULL; } TEST(GetThreadCountTest, ReturnsCorrectValue) { EXPECT_EQ(1U, GetThreadCount()); pthread_mutex_t mutex; pthread_attr_t attr; pthread_t thread_id; // TODO(vladl@google.com): turn mutex into internal::Mutex for automatic // destruction. 
pthread_mutex_init(&mutex, NULL); pthread_mutex_lock(&mutex); ASSERT_EQ(0, pthread_attr_init(&attr)); ASSERT_EQ(0, pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE)); const int status = pthread_create(&thread_id, &attr, &ThreadFunc, &mutex); ASSERT_EQ(0, pthread_attr_destroy(&attr)); ASSERT_EQ(0, status); EXPECT_EQ(2U, GetThreadCount()); pthread_mutex_unlock(&mutex); void* dummy; ASSERT_EQ(0, pthread_join(thread_id, &dummy)); # if GTEST_OS_MAC // MacOS X may not immediately report the updated thread count after // joining a thread, causing flakiness in this test. To counter that, we // wait for up to .5 seconds for the OS to report the correct value. for (int i = 0; i < 5; ++i) { if (GetThreadCount() == 1) break; SleepMilliseconds(100); } # endif // GTEST_OS_MAC EXPECT_EQ(1U, GetThreadCount()); pthread_mutex_destroy(&mutex); } #else TEST(GetThreadCountTest, ReturnsZeroWhenUnableToCountThreads) { EXPECT_EQ(0U, GetThreadCount()); } #endif // GTEST_OS_MAC || GTEST_OS_QNX TEST(GtestCheckDeathTest, DiesWithCorrectOutputOnFailure) { const bool a_false_condition = false; const char regex[] = #ifdef _MSC_VER "gtest-port_test\\.cc\\(\\d+\\):" #elif GTEST_USES_POSIX_RE "gtest-port_test\\.cc:[0-9]+" #else "gtest-port_test\\.cc:\\d+" #endif // _MSC_VER ".*a_false_condition.*Extra info.*"; EXPECT_DEATH_IF_SUPPORTED(GTEST_CHECK_(a_false_condition) << "Extra info", regex); } #if GTEST_HAS_DEATH_TEST TEST(GtestCheckDeathTest, LivesSilentlyOnSuccess) { EXPECT_EXIT({ GTEST_CHECK_(true) << "Extra info"; ::std::cerr << "Success\n"; exit(0); }, ::testing::ExitedWithCode(0), "Success"); } #endif // GTEST_HAS_DEATH_TEST // Verifies that Google Test choose regular expression engine appropriate to // the platform. The test will produce compiler errors in case of failure. // For simplicity, we only cover the most important platforms here. TEST(RegexEngineSelectionTest, SelectsCorrectRegexEngine) { #if GTEST_HAS_POSIX_RE EXPECT_TRUE(GTEST_USES_POSIX_RE); #else EXPECT_TRUE(GTEST_USES_SIMPLE_RE); #endif } #if GTEST_USES_POSIX_RE # if GTEST_HAS_TYPED_TEST template class RETest : public ::testing::Test {}; // Defines StringTypes as the list of all string types that class RE // supports. typedef testing::Types< ::std::string, # if GTEST_HAS_GLOBAL_STRING ::string, # endif // GTEST_HAS_GLOBAL_STRING const char*> StringTypes; TYPED_TEST_CASE(RETest, StringTypes); // Tests RE's implicit constructors. TYPED_TEST(RETest, ImplicitConstructorWorks) { const RE empty(TypeParam("")); EXPECT_STREQ("", empty.pattern()); const RE simple(TypeParam("hello")); EXPECT_STREQ("hello", simple.pattern()); const RE normal(TypeParam(".*(\\w+)")); EXPECT_STREQ(".*(\\w+)", normal.pattern()); } // Tests that RE's constructors reject invalid regular expressions. TYPED_TEST(RETest, RejectsInvalidRegex) { EXPECT_NONFATAL_FAILURE({ const RE invalid(TypeParam("?")); }, "\"?\" is not a valid POSIX Extended regular expression."); } // Tests RE::FullMatch(). TYPED_TEST(RETest, FullMatchWorks) { const RE empty(TypeParam("")); EXPECT_TRUE(RE::FullMatch(TypeParam(""), empty)); EXPECT_FALSE(RE::FullMatch(TypeParam("a"), empty)); const RE re(TypeParam("a.*z")); EXPECT_TRUE(RE::FullMatch(TypeParam("az"), re)); EXPECT_TRUE(RE::FullMatch(TypeParam("axyz"), re)); EXPECT_FALSE(RE::FullMatch(TypeParam("baz"), re)); EXPECT_FALSE(RE::FullMatch(TypeParam("azy"), re)); } // Tests RE::PartialMatch(). 
TYPED_TEST(RETest, PartialMatchWorks) { const RE empty(TypeParam("")); EXPECT_TRUE(RE::PartialMatch(TypeParam(""), empty)); EXPECT_TRUE(RE::PartialMatch(TypeParam("a"), empty)); const RE re(TypeParam("a.*z")); EXPECT_TRUE(RE::PartialMatch(TypeParam("az"), re)); EXPECT_TRUE(RE::PartialMatch(TypeParam("axyz"), re)); EXPECT_TRUE(RE::PartialMatch(TypeParam("baz"), re)); EXPECT_TRUE(RE::PartialMatch(TypeParam("azy"), re)); EXPECT_FALSE(RE::PartialMatch(TypeParam("zza"), re)); } # endif // GTEST_HAS_TYPED_TEST #elif GTEST_USES_SIMPLE_RE TEST(IsInSetTest, NulCharIsNotInAnySet) { EXPECT_FALSE(IsInSet('\0', "")); EXPECT_FALSE(IsInSet('\0', "\0")); EXPECT_FALSE(IsInSet('\0', "a")); } TEST(IsInSetTest, WorksForNonNulChars) { EXPECT_FALSE(IsInSet('a', "Ab")); EXPECT_FALSE(IsInSet('c', "")); EXPECT_TRUE(IsInSet('b', "bcd")); EXPECT_TRUE(IsInSet('b', "ab")); } TEST(IsAsciiDigitTest, IsFalseForNonDigit) { EXPECT_FALSE(IsAsciiDigit('\0')); EXPECT_FALSE(IsAsciiDigit(' ')); EXPECT_FALSE(IsAsciiDigit('+')); EXPECT_FALSE(IsAsciiDigit('-')); EXPECT_FALSE(IsAsciiDigit('.')); EXPECT_FALSE(IsAsciiDigit('a')); } TEST(IsAsciiDigitTest, IsTrueForDigit) { EXPECT_TRUE(IsAsciiDigit('0')); EXPECT_TRUE(IsAsciiDigit('1')); EXPECT_TRUE(IsAsciiDigit('5')); EXPECT_TRUE(IsAsciiDigit('9')); } TEST(IsAsciiPunctTest, IsFalseForNonPunct) { EXPECT_FALSE(IsAsciiPunct('\0')); EXPECT_FALSE(IsAsciiPunct(' ')); EXPECT_FALSE(IsAsciiPunct('\n')); EXPECT_FALSE(IsAsciiPunct('a')); EXPECT_FALSE(IsAsciiPunct('0')); } TEST(IsAsciiPunctTest, IsTrueForPunct) { for (const char* p = "^-!\"#$%&'()*+,./:;<=>?@[\\]_`{|}~"; *p; p++) { EXPECT_PRED1(IsAsciiPunct, *p); } } TEST(IsRepeatTest, IsFalseForNonRepeatChar) { EXPECT_FALSE(IsRepeat('\0')); EXPECT_FALSE(IsRepeat(' ')); EXPECT_FALSE(IsRepeat('a')); EXPECT_FALSE(IsRepeat('1')); EXPECT_FALSE(IsRepeat('-')); } TEST(IsRepeatTest, IsTrueForRepeatChar) { EXPECT_TRUE(IsRepeat('?')); EXPECT_TRUE(IsRepeat('*')); EXPECT_TRUE(IsRepeat('+')); } TEST(IsAsciiWhiteSpaceTest, IsFalseForNonWhiteSpace) { EXPECT_FALSE(IsAsciiWhiteSpace('\0')); EXPECT_FALSE(IsAsciiWhiteSpace('a')); EXPECT_FALSE(IsAsciiWhiteSpace('1')); EXPECT_FALSE(IsAsciiWhiteSpace('+')); EXPECT_FALSE(IsAsciiWhiteSpace('_')); } TEST(IsAsciiWhiteSpaceTest, IsTrueForWhiteSpace) { EXPECT_TRUE(IsAsciiWhiteSpace(' ')); EXPECT_TRUE(IsAsciiWhiteSpace('\n')); EXPECT_TRUE(IsAsciiWhiteSpace('\r')); EXPECT_TRUE(IsAsciiWhiteSpace('\t')); EXPECT_TRUE(IsAsciiWhiteSpace('\v')); EXPECT_TRUE(IsAsciiWhiteSpace('\f')); } TEST(IsAsciiWordCharTest, IsFalseForNonWordChar) { EXPECT_FALSE(IsAsciiWordChar('\0')); EXPECT_FALSE(IsAsciiWordChar('+')); EXPECT_FALSE(IsAsciiWordChar('.')); EXPECT_FALSE(IsAsciiWordChar(' ')); EXPECT_FALSE(IsAsciiWordChar('\n')); } TEST(IsAsciiWordCharTest, IsTrueForLetter) { EXPECT_TRUE(IsAsciiWordChar('a')); EXPECT_TRUE(IsAsciiWordChar('b')); EXPECT_TRUE(IsAsciiWordChar('A')); EXPECT_TRUE(IsAsciiWordChar('Z')); } TEST(IsAsciiWordCharTest, IsTrueForDigit) { EXPECT_TRUE(IsAsciiWordChar('0')); EXPECT_TRUE(IsAsciiWordChar('1')); EXPECT_TRUE(IsAsciiWordChar('7')); EXPECT_TRUE(IsAsciiWordChar('9')); } TEST(IsAsciiWordCharTest, IsTrueForUnderscore) { EXPECT_TRUE(IsAsciiWordChar('_')); } TEST(IsValidEscapeTest, IsFalseForNonPrintable) { EXPECT_FALSE(IsValidEscape('\0')); EXPECT_FALSE(IsValidEscape('\007')); } TEST(IsValidEscapeTest, IsFalseForDigit) { EXPECT_FALSE(IsValidEscape('0')); EXPECT_FALSE(IsValidEscape('9')); } TEST(IsValidEscapeTest, IsFalseForWhiteSpace) { EXPECT_FALSE(IsValidEscape(' ')); EXPECT_FALSE(IsValidEscape('\n')); } 
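// A brief aside, illustrative only and not an upstream Google Test case: the
// character-class and escape helpers exercised in this section are the
// building blocks of the "simple" regular expression engine, which death-test
// matchers fall back to on platforms without POSIX regex support.  A portable
// death-test pattern therefore sticks to the syntax these helpers accept
// (escapes such as \d, \w and \s, plus the repeat tokens ?, * and +).  In the
// sketch below DieLoudly() is a made-up function, shown only to indicate how
// such a pattern is typically consumed from user code:
//
//   void DieLoudly() {
//     std::fprintf(stderr, "fatal error 42\n");  // needs <cstdio>
//     std::abort();                              // needs <cstdlib>
//   }
//
//   TEST(SimpleRegexUsageDeathTest, PortablePattern) {
//     // "fatal \w+ \d+" uses only syntax both regex engines understand.
//     EXPECT_DEATH_IF_SUPPORTED(DieLoudly(), "fatal \\w+ \\d+");
//   }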
TEST(IsValidEscapeTest, IsFalseForSomeLetter) { EXPECT_FALSE(IsValidEscape('a')); EXPECT_FALSE(IsValidEscape('Z')); } TEST(IsValidEscapeTest, IsTrueForPunct) { EXPECT_TRUE(IsValidEscape('.')); EXPECT_TRUE(IsValidEscape('-')); EXPECT_TRUE(IsValidEscape('^')); EXPECT_TRUE(IsValidEscape('$')); EXPECT_TRUE(IsValidEscape('(')); EXPECT_TRUE(IsValidEscape(']')); EXPECT_TRUE(IsValidEscape('{')); EXPECT_TRUE(IsValidEscape('|')); } TEST(IsValidEscapeTest, IsTrueForSomeLetter) { EXPECT_TRUE(IsValidEscape('d')); EXPECT_TRUE(IsValidEscape('D')); EXPECT_TRUE(IsValidEscape('s')); EXPECT_TRUE(IsValidEscape('S')); EXPECT_TRUE(IsValidEscape('w')); EXPECT_TRUE(IsValidEscape('W')); } TEST(AtomMatchesCharTest, EscapedPunct) { EXPECT_FALSE(AtomMatchesChar(true, '\\', '\0')); EXPECT_FALSE(AtomMatchesChar(true, '\\', ' ')); EXPECT_FALSE(AtomMatchesChar(true, '_', '.')); EXPECT_FALSE(AtomMatchesChar(true, '.', 'a')); EXPECT_TRUE(AtomMatchesChar(true, '\\', '\\')); EXPECT_TRUE(AtomMatchesChar(true, '_', '_')); EXPECT_TRUE(AtomMatchesChar(true, '+', '+')); EXPECT_TRUE(AtomMatchesChar(true, '.', '.')); } TEST(AtomMatchesCharTest, Escaped_d) { EXPECT_FALSE(AtomMatchesChar(true, 'd', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 'd', 'a')); EXPECT_FALSE(AtomMatchesChar(true, 'd', '.')); EXPECT_TRUE(AtomMatchesChar(true, 'd', '0')); EXPECT_TRUE(AtomMatchesChar(true, 'd', '9')); } TEST(AtomMatchesCharTest, Escaped_D) { EXPECT_FALSE(AtomMatchesChar(true, 'D', '0')); EXPECT_FALSE(AtomMatchesChar(true, 'D', '9')); EXPECT_TRUE(AtomMatchesChar(true, 'D', '\0')); EXPECT_TRUE(AtomMatchesChar(true, 'D', 'a')); EXPECT_TRUE(AtomMatchesChar(true, 'D', '-')); } TEST(AtomMatchesCharTest, Escaped_s) { EXPECT_FALSE(AtomMatchesChar(true, 's', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 's', 'a')); EXPECT_FALSE(AtomMatchesChar(true, 's', '.')); EXPECT_FALSE(AtomMatchesChar(true, 's', '9')); EXPECT_TRUE(AtomMatchesChar(true, 's', ' ')); EXPECT_TRUE(AtomMatchesChar(true, 's', '\n')); EXPECT_TRUE(AtomMatchesChar(true, 's', '\t')); } TEST(AtomMatchesCharTest, Escaped_S) { EXPECT_FALSE(AtomMatchesChar(true, 'S', ' ')); EXPECT_FALSE(AtomMatchesChar(true, 'S', '\r')); EXPECT_TRUE(AtomMatchesChar(true, 'S', '\0')); EXPECT_TRUE(AtomMatchesChar(true, 'S', 'a')); EXPECT_TRUE(AtomMatchesChar(true, 'S', '9')); } TEST(AtomMatchesCharTest, Escaped_w) { EXPECT_FALSE(AtomMatchesChar(true, 'w', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 'w', '+')); EXPECT_FALSE(AtomMatchesChar(true, 'w', ' ')); EXPECT_FALSE(AtomMatchesChar(true, 'w', '\n')); EXPECT_TRUE(AtomMatchesChar(true, 'w', '0')); EXPECT_TRUE(AtomMatchesChar(true, 'w', 'b')); EXPECT_TRUE(AtomMatchesChar(true, 'w', 'C')); EXPECT_TRUE(AtomMatchesChar(true, 'w', '_')); } TEST(AtomMatchesCharTest, Escaped_W) { EXPECT_FALSE(AtomMatchesChar(true, 'W', 'A')); EXPECT_FALSE(AtomMatchesChar(true, 'W', 'b')); EXPECT_FALSE(AtomMatchesChar(true, 'W', '9')); EXPECT_FALSE(AtomMatchesChar(true, 'W', '_')); EXPECT_TRUE(AtomMatchesChar(true, 'W', '\0')); EXPECT_TRUE(AtomMatchesChar(true, 'W', '*')); EXPECT_TRUE(AtomMatchesChar(true, 'W', '\n')); } TEST(AtomMatchesCharTest, EscapedWhiteSpace) { EXPECT_FALSE(AtomMatchesChar(true, 'f', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 'f', '\n')); EXPECT_FALSE(AtomMatchesChar(true, 'n', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 'n', '\r')); EXPECT_FALSE(AtomMatchesChar(true, 'r', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 'r', 'a')); EXPECT_FALSE(AtomMatchesChar(true, 't', '\0')); EXPECT_FALSE(AtomMatchesChar(true, 't', 't')); EXPECT_FALSE(AtomMatchesChar(true, 'v', '\0')); 
EXPECT_FALSE(AtomMatchesChar(true, 'v', '\f')); EXPECT_TRUE(AtomMatchesChar(true, 'f', '\f')); EXPECT_TRUE(AtomMatchesChar(true, 'n', '\n')); EXPECT_TRUE(AtomMatchesChar(true, 'r', '\r')); EXPECT_TRUE(AtomMatchesChar(true, 't', '\t')); EXPECT_TRUE(AtomMatchesChar(true, 'v', '\v')); } TEST(AtomMatchesCharTest, UnescapedDot) { EXPECT_FALSE(AtomMatchesChar(false, '.', '\n')); EXPECT_TRUE(AtomMatchesChar(false, '.', '\0')); EXPECT_TRUE(AtomMatchesChar(false, '.', '.')); EXPECT_TRUE(AtomMatchesChar(false, '.', 'a')); EXPECT_TRUE(AtomMatchesChar(false, '.', ' ')); } TEST(AtomMatchesCharTest, UnescapedChar) { EXPECT_FALSE(AtomMatchesChar(false, 'a', '\0')); EXPECT_FALSE(AtomMatchesChar(false, 'a', 'b')); EXPECT_FALSE(AtomMatchesChar(false, '$', 'a')); EXPECT_TRUE(AtomMatchesChar(false, '$', '$')); EXPECT_TRUE(AtomMatchesChar(false, '5', '5')); EXPECT_TRUE(AtomMatchesChar(false, 'Z', 'Z')); } TEST(ValidateRegexTest, GeneratesFailureAndReturnsFalseForInvalid) { EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex(NULL)), "NULL is not a valid simple regular expression"); EXPECT_NONFATAL_FAILURE( ASSERT_FALSE(ValidateRegex("a\\")), "Syntax error at index 1 in simple regular expression \"a\\\": "); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("a\\")), "'\\' cannot appear at the end"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("\\n\\")), "'\\' cannot appear at the end"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("\\s\\hb")), "invalid escape sequence \"\\h\""); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("^^")), "'^' can only appear at the beginning"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex(".*^b")), "'^' can only appear at the beginning"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("$$")), "'$' can only appear at the end"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("^$a")), "'$' can only appear at the end"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("a(b")), "'(' is unsupported"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("ab)")), "')' is unsupported"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("[ab")), "'[' is unsupported"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("a{2")), "'{' is unsupported"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("?")), "'?' can only follow a repeatable token"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("^*")), "'*' can only follow a repeatable token"); EXPECT_NONFATAL_FAILURE(ASSERT_FALSE(ValidateRegex("5*+")), "'+' can only follow a repeatable token"); } TEST(ValidateRegexTest, ReturnsTrueForValid) { EXPECT_TRUE(ValidateRegex("")); EXPECT_TRUE(ValidateRegex("a")); EXPECT_TRUE(ValidateRegex(".*")); EXPECT_TRUE(ValidateRegex("^a_+")); EXPECT_TRUE(ValidateRegex("^a\\t\\&?")); EXPECT_TRUE(ValidateRegex("09*$")); EXPECT_TRUE(ValidateRegex("^Z$")); EXPECT_TRUE(ValidateRegex("a\\^Z\\$\\(\\)\\|\\[\\]\\{\\}")); } TEST(MatchRepetitionAndRegexAtHeadTest, WorksForZeroOrOne) { EXPECT_FALSE(MatchRepetitionAndRegexAtHead(false, 'a', '?', "a", "ba")); // Repeating more than once. EXPECT_FALSE(MatchRepetitionAndRegexAtHead(false, 'a', '?', "b", "aab")); // Repeating zero times. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, 'a', '?', "b", "ba")); // Repeating once. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, 'a', '?', "b", "ab")); EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, '#', '?', ".", "##")); } TEST(MatchRepetitionAndRegexAtHeadTest, WorksForZeroOrMany) { EXPECT_FALSE(MatchRepetitionAndRegexAtHead(false, '.', '*', "a$", "baab")); // Repeating zero times. 
EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, '.', '*', "b", "bc")); // Repeating once. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, '.', '*', "b", "abc")); // Repeating more than once. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(true, 'w', '*', "-", "ab_1-g")); } TEST(MatchRepetitionAndRegexAtHeadTest, WorksForOneOrMany) { EXPECT_FALSE(MatchRepetitionAndRegexAtHead(false, '.', '+', "a$", "baab")); // Repeating zero times. EXPECT_FALSE(MatchRepetitionAndRegexAtHead(false, '.', '+', "b", "bc")); // Repeating once. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(false, '.', '+', "b", "abc")); // Repeating more than once. EXPECT_TRUE(MatchRepetitionAndRegexAtHead(true, 'w', '+', "-", "ab_1-g")); } TEST(MatchRegexAtHeadTest, ReturnsTrueForEmptyRegex) { EXPECT_TRUE(MatchRegexAtHead("", "")); EXPECT_TRUE(MatchRegexAtHead("", "ab")); } TEST(MatchRegexAtHeadTest, WorksWhenDollarIsInRegex) { EXPECT_FALSE(MatchRegexAtHead("$", "a")); EXPECT_TRUE(MatchRegexAtHead("$", "")); EXPECT_TRUE(MatchRegexAtHead("a$", "a")); } TEST(MatchRegexAtHeadTest, WorksWhenRegexStartsWithEscapeSequence) { EXPECT_FALSE(MatchRegexAtHead("\\w", "+")); EXPECT_FALSE(MatchRegexAtHead("\\W", "ab")); EXPECT_TRUE(MatchRegexAtHead("\\sa", "\nab")); EXPECT_TRUE(MatchRegexAtHead("\\d", "1a")); } TEST(MatchRegexAtHeadTest, WorksWhenRegexStartsWithRepetition) { EXPECT_FALSE(MatchRegexAtHead(".+a", "abc")); EXPECT_FALSE(MatchRegexAtHead("a?b", "aab")); EXPECT_TRUE(MatchRegexAtHead(".*a", "bc12-ab")); EXPECT_TRUE(MatchRegexAtHead("a?b", "b")); EXPECT_TRUE(MatchRegexAtHead("a?b", "ab")); } TEST(MatchRegexAtHeadTest, WorksWhenRegexStartsWithRepetionOfEscapeSequence) { EXPECT_FALSE(MatchRegexAtHead("\\.+a", "abc")); EXPECT_FALSE(MatchRegexAtHead("\\s?b", " b")); EXPECT_TRUE(MatchRegexAtHead("\\(*a", "((((ab")); EXPECT_TRUE(MatchRegexAtHead("\\^?b", "^b")); EXPECT_TRUE(MatchRegexAtHead("\\\\?b", "b")); EXPECT_TRUE(MatchRegexAtHead("\\\\?b", "\\b")); } TEST(MatchRegexAtHeadTest, MatchesSequentially) { EXPECT_FALSE(MatchRegexAtHead("ab.*c", "acabc")); EXPECT_TRUE(MatchRegexAtHead("ab.*c", "ab-fsc")); } TEST(MatchRegexAnywhereTest, ReturnsFalseWhenStringIsNull) { EXPECT_FALSE(MatchRegexAnywhere("", NULL)); } TEST(MatchRegexAnywhereTest, WorksWhenRegexStartsWithCaret) { EXPECT_FALSE(MatchRegexAnywhere("^a", "ba")); EXPECT_FALSE(MatchRegexAnywhere("^$", "a")); EXPECT_TRUE(MatchRegexAnywhere("^a", "ab")); EXPECT_TRUE(MatchRegexAnywhere("^", "ab")); EXPECT_TRUE(MatchRegexAnywhere("^$", "")); } TEST(MatchRegexAnywhereTest, ReturnsFalseWhenNoMatch) { EXPECT_FALSE(MatchRegexAnywhere("a", "bcde123")); EXPECT_FALSE(MatchRegexAnywhere("a.+a", "--aa88888888")); } TEST(MatchRegexAnywhereTest, ReturnsTrueWhenMatchingPrefix) { EXPECT_TRUE(MatchRegexAnywhere("\\w+", "ab1_ - 5")); EXPECT_TRUE(MatchRegexAnywhere(".*=", "=")); EXPECT_TRUE(MatchRegexAnywhere("x.*ab?.*bc", "xaaabc")); } TEST(MatchRegexAnywhereTest, ReturnsTrueWhenMatchingNonPrefix) { EXPECT_TRUE(MatchRegexAnywhere("\\w+", "$$$ ab1_ - 5")); EXPECT_TRUE(MatchRegexAnywhere("\\.+=", "= ...=")); } // Tests RE's implicit constructors. TEST(RETest, ImplicitConstructorWorks) { const RE empty(""); EXPECT_STREQ("", empty.pattern()); const RE simple("hello"); EXPECT_STREQ("hello", simple.pattern()); } // Tests that RE's constructors reject invalid regular expressions. 
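// Before the rejection cases below, a rough usage sketch of the accepted
// subset (illustrative only, not an upstream test case).  The ValidateRegex
// checks earlier in this section pin down the simple engine's limits: '(',
// ')', '[' and '{' are unsupported, '^' may appear only at the beginning,
// '$' only at the end, and '?', '*' and '+' must follow a repeatable token.
// A pattern that stays inside that subset constructs and matches cleanly
// through the public RE interface:
//
//   const RE portable("^\\w+\\d*$");
//   EXPECT_TRUE(RE::FullMatch("abc123", portable));
//   EXPECT_FALSE(RE::FullMatch("abc 123", portable));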
TEST(RETest, RejectsInvalidRegex) { EXPECT_NONFATAL_FAILURE({ const RE normal(NULL); }, "NULL is not a valid simple regular expression"); EXPECT_NONFATAL_FAILURE({ const RE normal(".*(\\w+"); }, "'(' is unsupported"); EXPECT_NONFATAL_FAILURE({ const RE invalid("^?"); }, "'?' can only follow a repeatable token"); } // Tests RE::FullMatch(). TEST(RETest, FullMatchWorks) { const RE empty(""); EXPECT_TRUE(RE::FullMatch("", empty)); EXPECT_FALSE(RE::FullMatch("a", empty)); const RE re1("a"); EXPECT_TRUE(RE::FullMatch("a", re1)); const RE re("a.*z"); EXPECT_TRUE(RE::FullMatch("az", re)); EXPECT_TRUE(RE::FullMatch("axyz", re)); EXPECT_FALSE(RE::FullMatch("baz", re)); EXPECT_FALSE(RE::FullMatch("azy", re)); } // Tests RE::PartialMatch(). TEST(RETest, PartialMatchWorks) { const RE empty(""); EXPECT_TRUE(RE::PartialMatch("", empty)); EXPECT_TRUE(RE::PartialMatch("a", empty)); const RE re("a.*z"); EXPECT_TRUE(RE::PartialMatch("az", re)); EXPECT_TRUE(RE::PartialMatch("axyz", re)); EXPECT_TRUE(RE::PartialMatch("baz", re)); EXPECT_TRUE(RE::PartialMatch("azy", re)); EXPECT_FALSE(RE::PartialMatch("zza", re)); } #endif // GTEST_USES_POSIX_RE #if !GTEST_OS_WINDOWS_MOBILE TEST(CaptureTest, CapturesStdout) { CaptureStdout(); fprintf(stdout, "abc"); EXPECT_STREQ("abc", GetCapturedStdout().c_str()); CaptureStdout(); fprintf(stdout, "def%cghi", '\0'); EXPECT_EQ(::std::string("def\0ghi", 7), ::std::string(GetCapturedStdout())); } TEST(CaptureTest, CapturesStderr) { CaptureStderr(); fprintf(stderr, "jkl"); EXPECT_STREQ("jkl", GetCapturedStderr().c_str()); CaptureStderr(); fprintf(stderr, "jkl%cmno", '\0'); EXPECT_EQ(::std::string("jkl\0mno", 7), ::std::string(GetCapturedStderr())); } // Tests that stdout and stderr capture don't interfere with each other. TEST(CaptureTest, CapturesStdoutAndStderr) { CaptureStdout(); CaptureStderr(); fprintf(stdout, "pqr"); fprintf(stderr, "stu"); EXPECT_STREQ("pqr", GetCapturedStdout().c_str()); EXPECT_STREQ("stu", GetCapturedStderr().c_str()); } TEST(CaptureDeathTest, CannotReenterStdoutCapture) { CaptureStdout(); EXPECT_DEATH_IF_SUPPORTED(CaptureStdout(), "Only one stdout capturer can exist at a time"); GetCapturedStdout(); // We cannot test stderr capturing using death tests as they use it // themselves. } #endif // !GTEST_OS_WINDOWS_MOBILE TEST(ThreadLocalTest, DefaultConstructorInitializesToDefaultValues) { ThreadLocal t1; EXPECT_EQ(0, t1.get()); ThreadLocal t2; EXPECT_TRUE(t2.get() == NULL); } TEST(ThreadLocalTest, SingleParamConstructorInitializesToParam) { ThreadLocal t1(123); EXPECT_EQ(123, t1.get()); int i = 0; ThreadLocal t2(&i); EXPECT_EQ(&i, t2.get()); } class NoDefaultContructor { public: explicit NoDefaultContructor(const char*) {} NoDefaultContructor(const NoDefaultContructor&) {} }; TEST(ThreadLocalTest, ValueDefaultContructorIsNotRequiredForParamVersion) { ThreadLocal bar(NoDefaultContructor("foo")); bar.pointer(); } TEST(ThreadLocalTest, GetAndPointerReturnSameValue) { ThreadLocal thread_local_string; EXPECT_EQ(thread_local_string.pointer(), &(thread_local_string.get())); // Verifies the condition still holds after calling set. 
thread_local_string.set("foo"); EXPECT_EQ(thread_local_string.pointer(), &(thread_local_string.get())); } TEST(ThreadLocalTest, PointerAndConstPointerReturnSameValue) { ThreadLocal thread_local_string; const ThreadLocal& const_thread_local_string = thread_local_string; EXPECT_EQ(thread_local_string.pointer(), const_thread_local_string.pointer()); thread_local_string.set("foo"); EXPECT_EQ(thread_local_string.pointer(), const_thread_local_string.pointer()); } #if GTEST_IS_THREADSAFE void AddTwo(int* param) { *param += 2; } TEST(ThreadWithParamTest, ConstructorExecutesThreadFunc) { int i = 40; ThreadWithParam thread(&AddTwo, &i, NULL); thread.Join(); EXPECT_EQ(42, i); } TEST(MutexDeathTest, AssertHeldShouldAssertWhenNotLocked) { // AssertHeld() is flaky only in the presence of multiple threads accessing // the lock. In this case, the test is robust. EXPECT_DEATH_IF_SUPPORTED({ Mutex m; { MutexLock lock(&m); } m.AssertHeld(); }, "thread .*hold"); } TEST(MutexTest, AssertHeldShouldNotAssertWhenLocked) { Mutex m; MutexLock lock(&m); m.AssertHeld(); } class AtomicCounterWithMutex { public: explicit AtomicCounterWithMutex(Mutex* mutex) : value_(0), mutex_(mutex), random_(42) {} void Increment() { MutexLock lock(mutex_); int temp = value_; { // Locking a mutex puts up a memory barrier, preventing reads and // writes to value_ rearranged when observed from other threads. // // We cannot use Mutex and MutexLock here or rely on their memory // barrier functionality as we are testing them here. pthread_mutex_t memory_barrier_mutex; GTEST_CHECK_POSIX_SUCCESS_( pthread_mutex_init(&memory_barrier_mutex, NULL)); GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_lock(&memory_barrier_mutex)); SleepMilliseconds(random_.Generate(30)); GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_unlock(&memory_barrier_mutex)); GTEST_CHECK_POSIX_SUCCESS_(pthread_mutex_destroy(&memory_barrier_mutex)); } value_ = temp + 1; } int value() const { return value_; } private: volatile int value_; Mutex* const mutex_; // Protects value_. Random random_; }; void CountingThreadFunc(pair param) { for (int i = 0; i < param.second; ++i) param.first->Increment(); } // Tests that the mutex only lets one thread at a time to lock it. TEST(MutexTest, OnlyOneThreadCanLockAtATime) { Mutex mutex; AtomicCounterWithMutex locked_counter(&mutex); typedef ThreadWithParam > ThreadType; const int kCycleCount = 20; const int kThreadCount = 7; scoped_ptr counting_threads[kThreadCount]; Notification threads_can_start; // Creates and runs kThreadCount threads that increment locked_counter // kCycleCount times each. for (int i = 0; i < kThreadCount; ++i) { counting_threads[i].reset(new ThreadType(&CountingThreadFunc, make_pair(&locked_counter, kCycleCount), &threads_can_start)); } threads_can_start.Notify(); for (int i = 0; i < kThreadCount; ++i) counting_threads[i]->Join(); // If the mutex lets more than one thread to increment the counter at a // time, they are likely to encounter a race condition and have some // increments overwritten, resulting in the lower then expected counter // value. 
EXPECT_EQ(kCycleCount * kThreadCount, locked_counter.value()); } template void RunFromThread(void (func)(T), T param) { ThreadWithParam thread(func, param, NULL); thread.Join(); } void RetrieveThreadLocalValue( pair*, std::string*> param) { *param.second = param.first->get(); } TEST(ThreadLocalTest, ParameterizedConstructorSetsDefault) { ThreadLocal thread_local_string("foo"); EXPECT_STREQ("foo", thread_local_string.get().c_str()); thread_local_string.set("bar"); EXPECT_STREQ("bar", thread_local_string.get().c_str()); std::string result; RunFromThread(&RetrieveThreadLocalValue, make_pair(&thread_local_string, &result)); EXPECT_STREQ("foo", result.c_str()); } // DestructorTracker keeps track of whether its instances have been // destroyed. static std::vector g_destroyed; class DestructorTracker { public: DestructorTracker() : index_(GetNewIndex()) {} DestructorTracker(const DestructorTracker& /* rhs */) : index_(GetNewIndex()) {} ~DestructorTracker() { // We never access g_destroyed concurrently, so we don't need to // protect the write operation under a mutex. g_destroyed[index_] = true; } private: static int GetNewIndex() { g_destroyed.push_back(false); return g_destroyed.size() - 1; } const int index_; }; typedef ThreadLocal* ThreadParam; void CallThreadLocalGet(ThreadParam thread_local_param) { thread_local_param->get(); } // Tests that when a ThreadLocal object dies in a thread, it destroys // the managed object for that thread. TEST(ThreadLocalTest, DestroysManagedObjectForOwnThreadWhenDying) { g_destroyed.clear(); { // The next line default constructs a DestructorTracker object as // the default value of objects managed by thread_local_tracker. ThreadLocal thread_local_tracker; ASSERT_EQ(1U, g_destroyed.size()); ASSERT_FALSE(g_destroyed[0]); // This creates another DestructorTracker object for the main thread. thread_local_tracker.get(); ASSERT_EQ(2U, g_destroyed.size()); ASSERT_FALSE(g_destroyed[0]); ASSERT_FALSE(g_destroyed[1]); } // Now thread_local_tracker has died. It should have destroyed both the // default value shared by all threads and the value for the main // thread. ASSERT_EQ(2U, g_destroyed.size()); EXPECT_TRUE(g_destroyed[0]); EXPECT_TRUE(g_destroyed[1]); g_destroyed.clear(); } // Tests that when a thread exits, the thread-local object for that // thread is destroyed. TEST(ThreadLocalTest, DestroysManagedObjectAtThreadExit) { g_destroyed.clear(); { // The next line default constructs a DestructorTracker object as // the default value of objects managed by thread_local_tracker. ThreadLocal thread_local_tracker; ASSERT_EQ(1U, g_destroyed.size()); ASSERT_FALSE(g_destroyed[0]); // This creates another DestructorTracker object in the new thread. ThreadWithParam thread( &CallThreadLocalGet, &thread_local_tracker, NULL); thread.Join(); // Now the new thread has exited. The per-thread object for it // should have been destroyed. ASSERT_EQ(2U, g_destroyed.size()); ASSERT_FALSE(g_destroyed[0]); ASSERT_TRUE(g_destroyed[1]); } // Now thread_local_tracker has died. The default value should have been // destroyed too. 
ASSERT_EQ(2U, g_destroyed.size()); EXPECT_TRUE(g_destroyed[0]); EXPECT_TRUE(g_destroyed[1]); g_destroyed.clear(); } TEST(ThreadLocalTest, ThreadLocalMutationsAffectOnlyCurrentThread) { ThreadLocal thread_local_string; thread_local_string.set("Foo"); EXPECT_STREQ("Foo", thread_local_string.get().c_str()); std::string result; RunFromThread(&RetrieveThreadLocalValue, make_pair(&thread_local_string, &result)); EXPECT_TRUE(result.empty()); } #endif // GTEST_IS_THREADSAFE } // namespace internal } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-printers_test.cc000066400000000000000000001406641346026536000236250ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Google Test - The Google C++ Testing Framework // // This file tests the universal value printer. #include "gtest/gtest-printers.h" #include #include #include #include #include #include #include #include #include #include #include #include #include "gtest/gtest.h" // hash_map and hash_set are available under Visual C++. #if _MSC_VER # define GTEST_HAS_HASH_MAP_ 1 // Indicates that hash_map is available. # include // NOLINT # define GTEST_HAS_HASH_SET_ 1 // Indicates that hash_set is available. # include // NOLINT #endif // GTEST_OS_WINDOWS // Some user-defined types for testing the universal value printer. // An anonymous enum type. enum AnonymousEnum { kAE1 = -1, kAE2 = 1 }; // An enum without a user-defined printer. enum EnumWithoutPrinter { kEWP1 = -2, kEWP2 = 42 }; // An enum with a << operator. enum EnumWithStreaming { kEWS1 = 10 }; std::ostream& operator<<(std::ostream& os, EnumWithStreaming e) { return os << (e == kEWS1 ? "kEWS1" : "invalid"); } // An enum with a PrintTo() function. enum EnumWithPrintTo { kEWPT1 = 1 }; void PrintTo(EnumWithPrintTo e, std::ostream* os) { *os << (e == kEWPT1 ? "kEWPT1" : "invalid"); } // A class implicitly convertible to BiggestInt. 
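// The enums above illustrate the customization points the universal printer
// honors for user-defined types: a PrintTo() overload found through
// argument-dependent lookup, an operator<< into std::ostream, or, for enums
// with neither, a fall-back to the underlying integral value.  A small
// user-side sketch (MyWidget is an illustrative name, not a type used in
// this file):
//
//   struct MyWidget { int id; };
//   void PrintTo(const MyWidget& w, ::std::ostream* os) {  // found via ADL
//     *os << "MyWidget(" << w.id << ")";
//   }
//
//   MyWidget w = {7};
//   EXPECT_EQ("MyWidget(7)", ::testing::PrintToString(w));
//
// The BiggestIntConvertible class declared next exercises one more
// fall-back: implicit conversion to testing::internal::BiggestInt.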
class BiggestIntConvertible { public: operator ::testing::internal::BiggestInt() const { return 42; } }; // A user-defined unprintable class template in the global namespace. template class UnprintableTemplateInGlobal { public: UnprintableTemplateInGlobal() : value_() {} private: T value_; }; // A user-defined streamable type in the global namespace. class StreamableInGlobal { public: virtual ~StreamableInGlobal() {} }; inline void operator<<(::std::ostream& os, const StreamableInGlobal& /* x */) { os << "StreamableInGlobal"; } void operator<<(::std::ostream& os, const StreamableInGlobal* /* x */) { os << "StreamableInGlobal*"; } namespace foo { // A user-defined unprintable type in a user namespace. class UnprintableInFoo { public: UnprintableInFoo() : z_(0) { memcpy(xy_, "\xEF\x12\x0\x0\x34\xAB\x0\x0", 8); } private: char xy_[8]; double z_; }; // A user-defined printable type in a user-chosen namespace. struct PrintableViaPrintTo { PrintableViaPrintTo() : value() {} int value; }; void PrintTo(const PrintableViaPrintTo& x, ::std::ostream* os) { *os << "PrintableViaPrintTo: " << x.value; } // A type with a user-defined << for printing its pointer. struct PointerPrintable { }; ::std::ostream& operator<<(::std::ostream& os, const PointerPrintable* /* x */) { return os << "PointerPrintable*"; } // A user-defined printable class template in a user-chosen namespace. template class PrintableViaPrintToTemplate { public: explicit PrintableViaPrintToTemplate(const T& a_value) : value_(a_value) {} const T& value() const { return value_; } private: T value_; }; template void PrintTo(const PrintableViaPrintToTemplate& x, ::std::ostream* os) { *os << "PrintableViaPrintToTemplate: " << x.value(); } // A user-defined streamable class template in a user namespace. template class StreamableTemplateInFoo { public: StreamableTemplateInFoo() : value_() {} const T& value() const { return value_; } private: T value_; }; template inline ::std::ostream& operator<<(::std::ostream& os, const StreamableTemplateInFoo& x) { return os << "StreamableTemplateInFoo: " << x.value(); } } // namespace foo namespace testing { namespace gtest_printers_test { using ::std::deque; using ::std::list; using ::std::make_pair; using ::std::map; using ::std::multimap; using ::std::multiset; using ::std::pair; using ::std::set; using ::std::vector; using ::testing::PrintToString; using ::testing::internal::FormatForComparisonFailureMessage; using ::testing::internal::ImplicitCast_; using ::testing::internal::NativeArray; using ::testing::internal::RE; using ::testing::internal::Strings; using ::testing::internal::UniversalPrint; using ::testing::internal::UniversalPrinter; using ::testing::internal::UniversalTersePrint; using ::testing::internal::UniversalTersePrintTupleFieldsToStrings; using ::testing::internal::kReference; using ::testing::internal::string; #if GTEST_HAS_TR1_TUPLE using ::std::tr1::make_tuple; using ::std::tr1::tuple; #endif // The hash_* classes are not part of the C++ standard. STLport // defines them in namespace std. MSVC defines them in ::stdext. GCC // defines them in ::. #ifdef _STLP_HASH_MAP // We got from STLport. using ::std::hash_map; using ::std::hash_set; using ::std::hash_multimap; using ::std::hash_multiset; #elif _MSC_VER using ::stdext::hash_map; using ::stdext::hash_set; using ::stdext::hash_multimap; using ::stdext::hash_multiset; #endif // Prints a value to a string using the universal value printer. This // is a helper for testing UniversalPrinter::Print() for various types. 
template string Print(const T& value) { ::std::stringstream ss; UniversalPrinter::Print(value, &ss); return ss.str(); } // Prints a value passed by reference to a string, using the universal // value printer. This is a helper for testing // UniversalPrinter::Print() for various types. template string PrintByRef(const T& value) { ::std::stringstream ss; UniversalPrinter::Print(value, &ss); return ss.str(); } // Tests printing various enum types. TEST(PrintEnumTest, AnonymousEnum) { EXPECT_EQ("-1", Print(kAE1)); EXPECT_EQ("1", Print(kAE2)); } TEST(PrintEnumTest, EnumWithoutPrinter) { EXPECT_EQ("-2", Print(kEWP1)); EXPECT_EQ("42", Print(kEWP2)); } TEST(PrintEnumTest, EnumWithStreaming) { EXPECT_EQ("kEWS1", Print(kEWS1)); EXPECT_EQ("invalid", Print(static_cast(0))); } TEST(PrintEnumTest, EnumWithPrintTo) { EXPECT_EQ("kEWPT1", Print(kEWPT1)); EXPECT_EQ("invalid", Print(static_cast(0))); } // Tests printing a class implicitly convertible to BiggestInt. TEST(PrintClassTest, BiggestIntConvertible) { EXPECT_EQ("42", Print(BiggestIntConvertible())); } // Tests printing various char types. // char. TEST(PrintCharTest, PlainChar) { EXPECT_EQ("'\\0'", Print('\0')); EXPECT_EQ("'\\'' (39, 0x27)", Print('\'')); EXPECT_EQ("'\"' (34, 0x22)", Print('"')); EXPECT_EQ("'?' (63, 0x3F)", Print('?')); EXPECT_EQ("'\\\\' (92, 0x5C)", Print('\\')); EXPECT_EQ("'\\a' (7)", Print('\a')); EXPECT_EQ("'\\b' (8)", Print('\b')); EXPECT_EQ("'\\f' (12, 0xC)", Print('\f')); EXPECT_EQ("'\\n' (10, 0xA)", Print('\n')); EXPECT_EQ("'\\r' (13, 0xD)", Print('\r')); EXPECT_EQ("'\\t' (9)", Print('\t')); EXPECT_EQ("'\\v' (11, 0xB)", Print('\v')); EXPECT_EQ("'\\x7F' (127)", Print('\x7F')); EXPECT_EQ("'\\xFF' (255)", Print('\xFF')); EXPECT_EQ("' ' (32, 0x20)", Print(' ')); EXPECT_EQ("'a' (97, 0x61)", Print('a')); } // signed char. TEST(PrintCharTest, SignedChar) { EXPECT_EQ("'\\0'", Print(static_cast('\0'))); EXPECT_EQ("'\\xCE' (-50)", Print(static_cast(-50))); } // unsigned char. TEST(PrintCharTest, UnsignedChar) { EXPECT_EQ("'\\0'", Print(static_cast('\0'))); EXPECT_EQ("'b' (98, 0x62)", Print(static_cast('b'))); } // Tests printing other simple, built-in types. // bool. TEST(PrintBuiltInTypeTest, Bool) { EXPECT_EQ("false", Print(false)); EXPECT_EQ("true", Print(true)); } // wchar_t. TEST(PrintBuiltInTypeTest, Wchar_t) { EXPECT_EQ("L'\\0'", Print(L'\0')); EXPECT_EQ("L'\\'' (39, 0x27)", Print(L'\'')); EXPECT_EQ("L'\"' (34, 0x22)", Print(L'"')); EXPECT_EQ("L'?' (63, 0x3F)", Print(L'?')); EXPECT_EQ("L'\\\\' (92, 0x5C)", Print(L'\\')); EXPECT_EQ("L'\\a' (7)", Print(L'\a')); EXPECT_EQ("L'\\b' (8)", Print(L'\b')); EXPECT_EQ("L'\\f' (12, 0xC)", Print(L'\f')); EXPECT_EQ("L'\\n' (10, 0xA)", Print(L'\n')); EXPECT_EQ("L'\\r' (13, 0xD)", Print(L'\r')); EXPECT_EQ("L'\\t' (9)", Print(L'\t')); EXPECT_EQ("L'\\v' (11, 0xB)", Print(L'\v')); EXPECT_EQ("L'\\x7F' (127)", Print(L'\x7F')); EXPECT_EQ("L'\\xFF' (255)", Print(L'\xFF')); EXPECT_EQ("L' ' (32, 0x20)", Print(L' ')); EXPECT_EQ("L'a' (97, 0x61)", Print(L'a')); EXPECT_EQ("L'\\x576' (1398)", Print(static_cast(0x576))); EXPECT_EQ("L'\\xC74D' (51021)", Print(static_cast(0xC74D))); } // Test that Int64 provides more storage than wchar_t. TEST(PrintTypeSizeTest, Wchar_t) { EXPECT_LT(sizeof(wchar_t), sizeof(testing::internal::Int64)); } // Various integer types. 
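// A note on scope, illustrative rather than an upstream test: Print() and
// PrintByRef() above are local helpers that route values through the
// internal UniversalPrinter directly.  From user code the same formatting is
// reachable through the public ::testing::PrintToString helper, for example:
//
//   EXPECT_EQ("42", ::testing::PrintToString(42));
//   EXPECT_EQ("true", ::testing::PrintToString(true));
//
// The tests below exercise each built-in integer width through the local
// Print() helper.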
TEST(PrintBuiltInTypeTest, Integer) { EXPECT_EQ("'\\xFF' (255)", Print(static_cast(255))); // uint8 EXPECT_EQ("'\\x80' (-128)", Print(static_cast(-128))); // int8 EXPECT_EQ("65535", Print(USHRT_MAX)); // uint16 EXPECT_EQ("-32768", Print(SHRT_MIN)); // int16 EXPECT_EQ("4294967295", Print(UINT_MAX)); // uint32 EXPECT_EQ("-2147483648", Print(INT_MIN)); // int32 EXPECT_EQ("18446744073709551615", Print(static_cast(-1))); // uint64 EXPECT_EQ("-9223372036854775808", Print(static_cast(1) << 63)); // int64 } // Size types. TEST(PrintBuiltInTypeTest, Size_t) { EXPECT_EQ("1", Print(sizeof('a'))); // size_t. #if !GTEST_OS_WINDOWS // Windows has no ssize_t type. EXPECT_EQ("-2", Print(static_cast(-2))); // ssize_t. #endif // !GTEST_OS_WINDOWS } // Floating-points. TEST(PrintBuiltInTypeTest, FloatingPoints) { EXPECT_EQ("1.5", Print(1.5f)); // float EXPECT_EQ("-2.5", Print(-2.5)); // double } // Since ::std::stringstream::operator<<(const void *) formats the pointer // output differently with different compilers, we have to create the expected // output first and use it as our expectation. static string PrintPointer(const void *p) { ::std::stringstream expected_result_stream; expected_result_stream << p; return expected_result_stream.str(); } // Tests printing C strings. // const char*. TEST(PrintCStringTest, Const) { const char* p = "World"; EXPECT_EQ(PrintPointer(p) + " pointing to \"World\"", Print(p)); } // char*. TEST(PrintCStringTest, NonConst) { char p[] = "Hi"; EXPECT_EQ(PrintPointer(p) + " pointing to \"Hi\"", Print(static_cast(p))); } // NULL C string. TEST(PrintCStringTest, Null) { const char* p = NULL; EXPECT_EQ("NULL", Print(p)); } // Tests that C strings are escaped properly. TEST(PrintCStringTest, EscapesProperly) { const char* p = "'\"?\\\a\b\f\n\r\t\v\x7F\xFF a"; EXPECT_EQ(PrintPointer(p) + " pointing to \"'\\\"?\\\\\\a\\b\\f" "\\n\\r\\t\\v\\x7F\\xFF a\"", Print(p)); } // MSVC compiler can be configured to define whar_t as a typedef // of unsigned short. Defining an overload for const wchar_t* in that case // would cause pointers to unsigned shorts be printed as wide strings, // possibly accessing more memory than intended and causing invalid // memory accesses. MSVC defines _NATIVE_WCHAR_T_DEFINED symbol when // wchar_t is implemented as a native type. #if !defined(_MSC_VER) || defined(_NATIVE_WCHAR_T_DEFINED) // const wchar_t*. TEST(PrintWideCStringTest, Const) { const wchar_t* p = L"World"; EXPECT_EQ(PrintPointer(p) + " pointing to L\"World\"", Print(p)); } // wchar_t*. TEST(PrintWideCStringTest, NonConst) { wchar_t p[] = L"Hi"; EXPECT_EQ(PrintPointer(p) + " pointing to L\"Hi\"", Print(static_cast(p))); } // NULL wide C string. TEST(PrintWideCStringTest, Null) { const wchar_t* p = NULL; EXPECT_EQ("NULL", Print(p)); } // Tests that wide C strings are escaped properly. TEST(PrintWideCStringTest, EscapesProperly) { const wchar_t s[] = {'\'', '"', '?', '\\', '\a', '\b', '\f', '\n', '\r', '\t', '\v', 0xD3, 0x576, 0x8D3, 0xC74D, ' ', 'a', '\0'}; EXPECT_EQ(PrintPointer(s) + " pointing to L\"'\\\"?\\\\\\a\\b\\f" "\\n\\r\\t\\v\\xD3\\x576\\x8D3\\xC74D a\"", Print(static_cast(s))); } #endif // native wchar_t // Tests printing pointers to other char types. // signed char*. TEST(PrintCharPointerTest, SignedChar) { signed char* p = reinterpret_cast(0x1234); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // const signed char*. 
TEST(PrintCharPointerTest, ConstSignedChar) { signed char* p = reinterpret_cast(0x1234); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // unsigned char*. TEST(PrintCharPointerTest, UnsignedChar) { unsigned char* p = reinterpret_cast(0x1234); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // const unsigned char*. TEST(PrintCharPointerTest, ConstUnsignedChar) { const unsigned char* p = reinterpret_cast(0x1234); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // Tests printing pointers to simple, built-in types. // bool*. TEST(PrintPointerToBuiltInTypeTest, Bool) { bool* p = reinterpret_cast(0xABCD); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // void*. TEST(PrintPointerToBuiltInTypeTest, Void) { void* p = reinterpret_cast(0xABCD); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // const void*. TEST(PrintPointerToBuiltInTypeTest, ConstVoid) { const void* p = reinterpret_cast(0xABCD); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // Tests printing pointers to pointers. TEST(PrintPointerToPointerTest, IntPointerPointer) { int** p = reinterpret_cast(0xABCD); EXPECT_EQ(PrintPointer(p), Print(p)); p = NULL; EXPECT_EQ("NULL", Print(p)); } // Tests printing (non-member) function pointers. void MyFunction(int /* n */) {} TEST(PrintPointerTest, NonMemberFunctionPointer) { // We cannot directly cast &MyFunction to const void* because the // standard disallows casting between pointers to functions and // pointers to objects, and some compilers (e.g. GCC 3.4) enforce // this limitation. EXPECT_EQ( PrintPointer(reinterpret_cast( reinterpret_cast(&MyFunction))), Print(&MyFunction)); int (*p)(bool) = NULL; // NOLINT EXPECT_EQ("NULL", Print(p)); } // An assertion predicate determining whether a one string is a prefix for // another. template AssertionResult HasPrefix(const StringType& str, const StringType& prefix) { if (str.find(prefix, 0) == 0) return AssertionSuccess(); const bool is_wide_string = sizeof(prefix[0]) > 1; const char* const begin_string_quote = is_wide_string ? "L\"" : "\""; return AssertionFailure() << begin_string_quote << prefix << "\" is not a prefix of " << begin_string_quote << str << "\"\n"; } // Tests printing member variable pointers. Although they are called // pointers, they don't point to a location in the address space. // Their representation is implementation-defined. Thus they will be // printed as raw bytes. struct Foo { public: virtual ~Foo() {} int MyMethod(char x) { return x + 1; } virtual char MyVirtualMethod(int /* n */) { return 'a'; } int value; }; TEST(PrintPointerTest, MemberVariablePointer) { EXPECT_TRUE(HasPrefix(Print(&Foo::value), Print(sizeof(&Foo::value)) + "-byte object ")); int (Foo::*p) = NULL; // NOLINT EXPECT_TRUE(HasPrefix(Print(p), Print(sizeof(p)) + "-byte object ")); } // Tests printing member function pointers. Although they are called // pointers, they don't point to a location in the address space. // Their representation is implementation-defined. Thus they will be // printed as raw bytes. 
TEST(PrintPointerTest, MemberFunctionPointer) { EXPECT_TRUE(HasPrefix(Print(&Foo::MyMethod), Print(sizeof(&Foo::MyMethod)) + "-byte object ")); EXPECT_TRUE( HasPrefix(Print(&Foo::MyVirtualMethod), Print(sizeof((&Foo::MyVirtualMethod))) + "-byte object ")); int (Foo::*p)(char) = NULL; // NOLINT EXPECT_TRUE(HasPrefix(Print(p), Print(sizeof(p)) + "-byte object ")); } // Tests printing C arrays. // The difference between this and Print() is that it ensures that the // argument is a reference to an array. template string PrintArrayHelper(T (&a)[N]) { return Print(a); } // One-dimensional array. TEST(PrintArrayTest, OneDimensionalArray) { int a[5] = { 1, 2, 3, 4, 5 }; EXPECT_EQ("{ 1, 2, 3, 4, 5 }", PrintArrayHelper(a)); } // Two-dimensional array. TEST(PrintArrayTest, TwoDimensionalArray) { int a[2][5] = { { 1, 2, 3, 4, 5 }, { 6, 7, 8, 9, 0 } }; EXPECT_EQ("{ { 1, 2, 3, 4, 5 }, { 6, 7, 8, 9, 0 } }", PrintArrayHelper(a)); } // Array of const elements. TEST(PrintArrayTest, ConstArray) { const bool a[1] = { false }; EXPECT_EQ("{ false }", PrintArrayHelper(a)); } // char array without terminating NUL. TEST(PrintArrayTest, CharArrayWithNoTerminatingNul) { // Array a contains '\0' in the middle and doesn't end with '\0'. char a[] = { 'H', '\0', 'i' }; EXPECT_EQ("\"H\\0i\" (no terminating NUL)", PrintArrayHelper(a)); } // const char array with terminating NUL. TEST(PrintArrayTest, ConstCharArrayWithTerminatingNul) { const char a[] = "\0Hi"; EXPECT_EQ("\"\\0Hi\"", PrintArrayHelper(a)); } // const wchar_t array without terminating NUL. TEST(PrintArrayTest, WCharArrayWithNoTerminatingNul) { // Array a contains '\0' in the middle and doesn't end with '\0'. const wchar_t a[] = { L'H', L'\0', L'i' }; EXPECT_EQ("L\"H\\0i\" (no terminating NUL)", PrintArrayHelper(a)); } // wchar_t array with terminating NUL. TEST(PrintArrayTest, WConstCharArrayWithTerminatingNul) { const wchar_t a[] = L"\0Hi"; EXPECT_EQ("L\"\\0Hi\"", PrintArrayHelper(a)); } // Array of objects. TEST(PrintArrayTest, ObjectArray) { string a[3] = { "Hi", "Hello", "Ni hao" }; EXPECT_EQ("{ \"Hi\", \"Hello\", \"Ni hao\" }", PrintArrayHelper(a)); } // Array with many elements. TEST(PrintArrayTest, BigArray) { int a[100] = { 1, 2, 3 }; EXPECT_EQ("{ 1, 2, 3, 0, 0, 0, 0, 0, ..., 0, 0, 0, 0, 0, 0, 0, 0 }", PrintArrayHelper(a)); } // Tests printing ::string and ::std::string. #if GTEST_HAS_GLOBAL_STRING // ::string. TEST(PrintStringTest, StringInGlobalNamespace) { const char s[] = "'\"?\\\a\b\f\n\0\r\t\v\x7F\xFF a"; const ::string str(s, sizeof(s)); EXPECT_EQ("\"'\\\"?\\\\\\a\\b\\f\\n\\0\\r\\t\\v\\x7F\\xFF a\\0\"", Print(str)); } #endif // GTEST_HAS_GLOBAL_STRING // ::std::string. TEST(PrintStringTest, StringInStdNamespace) { const char s[] = "'\"?\\\a\b\f\n\0\r\t\v\x7F\xFF a"; const ::std::string str(s, sizeof(s)); EXPECT_EQ("\"'\\\"?\\\\\\a\\b\\f\\n\\0\\r\\t\\v\\x7F\\xFF a\\0\"", Print(str)); } TEST(PrintStringTest, StringAmbiguousHex) { // "\x6BANANA" is ambiguous, it can be interpreted as starting with either of: // '\x6', '\x6B', or '\x6BA'. 
// a hex escaping sequence following by a decimal digit EXPECT_EQ("\"0\\x12\" \"3\"", Print(::std::string("0\x12" "3"))); // a hex escaping sequence following by a hex digit (lower-case) EXPECT_EQ("\"mm\\x6\" \"bananas\"", Print(::std::string("mm\x6" "bananas"))); // a hex escaping sequence following by a hex digit (upper-case) EXPECT_EQ("\"NOM\\x6\" \"BANANA\"", Print(::std::string("NOM\x6" "BANANA"))); // a hex escaping sequence following by a non-xdigit EXPECT_EQ("\"!\\x5-!\"", Print(::std::string("!\x5-!"))); } // Tests printing ::wstring and ::std::wstring. #if GTEST_HAS_GLOBAL_WSTRING // ::wstring. TEST(PrintWideStringTest, StringInGlobalNamespace) { const wchar_t s[] = L"'\"?\\\a\b\f\n\0\r\t\v\xD3\x576\x8D3\xC74D a"; const ::wstring str(s, sizeof(s)/sizeof(wchar_t)); EXPECT_EQ("L\"'\\\"?\\\\\\a\\b\\f\\n\\0\\r\\t\\v" "\\xD3\\x576\\x8D3\\xC74D a\\0\"", Print(str)); } #endif // GTEST_HAS_GLOBAL_WSTRING #if GTEST_HAS_STD_WSTRING // ::std::wstring. TEST(PrintWideStringTest, StringInStdNamespace) { const wchar_t s[] = L"'\"?\\\a\b\f\n\0\r\t\v\xD3\x576\x8D3\xC74D a"; const ::std::wstring str(s, sizeof(s)/sizeof(wchar_t)); EXPECT_EQ("L\"'\\\"?\\\\\\a\\b\\f\\n\\0\\r\\t\\v" "\\xD3\\x576\\x8D3\\xC74D a\\0\"", Print(str)); } TEST(PrintWideStringTest, StringAmbiguousHex) { // same for wide strings. EXPECT_EQ("L\"0\\x12\" L\"3\"", Print(::std::wstring(L"0\x12" L"3"))); EXPECT_EQ("L\"mm\\x6\" L\"bananas\"", Print(::std::wstring(L"mm\x6" L"bananas"))); EXPECT_EQ("L\"NOM\\x6\" L\"BANANA\"", Print(::std::wstring(L"NOM\x6" L"BANANA"))); EXPECT_EQ("L\"!\\x5-!\"", Print(::std::wstring(L"!\x5-!"))); } #endif // GTEST_HAS_STD_WSTRING // Tests printing types that support generic streaming (i.e. streaming // to std::basic_ostream for any valid Char and // CharTraits types). // Tests printing a non-template type that supports generic streaming. class AllowsGenericStreaming {}; template std::basic_ostream& operator<<( std::basic_ostream& os, const AllowsGenericStreaming& /* a */) { return os << "AllowsGenericStreaming"; } TEST(PrintTypeWithGenericStreamingTest, NonTemplateType) { AllowsGenericStreaming a; EXPECT_EQ("AllowsGenericStreaming", Print(a)); } // Tests printing a template type that supports generic streaming. template class AllowsGenericStreamingTemplate {}; template std::basic_ostream& operator<<( std::basic_ostream& os, const AllowsGenericStreamingTemplate& /* a */) { return os << "AllowsGenericStreamingTemplate"; } TEST(PrintTypeWithGenericStreamingTest, TemplateType) { AllowsGenericStreamingTemplate a; EXPECT_EQ("AllowsGenericStreamingTemplate", Print(a)); } // Tests printing a type that supports generic streaming and can be // implicitly converted to another printable type. template class AllowsGenericStreamingAndImplicitConversionTemplate { public: operator bool() const { return false; } }; template std::basic_ostream& operator<<( std::basic_ostream& os, const AllowsGenericStreamingAndImplicitConversionTemplate& /* a */) { return os << "AllowsGenericStreamingAndImplicitConversionTemplate"; } TEST(PrintTypeWithGenericStreamingTest, TypeImplicitlyConvertible) { AllowsGenericStreamingAndImplicitConversionTemplate a; EXPECT_EQ("AllowsGenericStreamingAndImplicitConversionTemplate", Print(a)); } #if GTEST_HAS_STRING_PIECE_ // Tests printing StringPiece. 
TEST(PrintStringPieceTest, SimpleStringPiece) { const StringPiece sp = "Hello"; EXPECT_EQ("\"Hello\"", Print(sp)); } TEST(PrintStringPieceTest, UnprintableCharacters) { const char str[] = "NUL (\0) and \r\t"; const StringPiece sp(str, sizeof(str) - 1); EXPECT_EQ("\"NUL (\\0) and \\r\\t\"", Print(sp)); } #endif // GTEST_HAS_STRING_PIECE_ // Tests printing STL containers. TEST(PrintStlContainerTest, EmptyDeque) { deque empty; EXPECT_EQ("{}", Print(empty)); } TEST(PrintStlContainerTest, NonEmptyDeque) { deque non_empty; non_empty.push_back(1); non_empty.push_back(3); EXPECT_EQ("{ 1, 3 }", Print(non_empty)); } #if GTEST_HAS_HASH_MAP_ TEST(PrintStlContainerTest, OneElementHashMap) { hash_map map1; map1[1] = 'a'; EXPECT_EQ("{ (1, 'a' (97, 0x61)) }", Print(map1)); } TEST(PrintStlContainerTest, HashMultiMap) { hash_multimap map1; map1.insert(make_pair(5, true)); map1.insert(make_pair(5, false)); // Elements of hash_multimap can be printed in any order. const string result = Print(map1); EXPECT_TRUE(result == "{ (5, true), (5, false) }" || result == "{ (5, false), (5, true) }") << " where Print(map1) returns \"" << result << "\"."; } #endif // GTEST_HAS_HASH_MAP_ #if GTEST_HAS_HASH_SET_ TEST(PrintStlContainerTest, HashSet) { hash_set set1; set1.insert("hello"); EXPECT_EQ("{ \"hello\" }", Print(set1)); } TEST(PrintStlContainerTest, HashMultiSet) { const int kSize = 5; int a[kSize] = { 1, 1, 2, 5, 1 }; hash_multiset set1(a, a + kSize); // Elements of hash_multiset can be printed in any order. const string result = Print(set1); const string expected_pattern = "{ d, d, d, d, d }"; // d means a digit. // Verifies the result matches the expected pattern; also extracts // the numbers in the result. ASSERT_EQ(expected_pattern.length(), result.length()); std::vector numbers; for (size_t i = 0; i != result.length(); i++) { if (expected_pattern[i] == 'd') { ASSERT_NE(isdigit(static_cast(result[i])), 0); numbers.push_back(result[i] - '0'); } else { EXPECT_EQ(expected_pattern[i], result[i]) << " where result is " << result; } } // Makes sure the result contains the right numbers. std::sort(numbers.begin(), numbers.end()); std::sort(a, a + kSize); EXPECT_TRUE(std::equal(a, a + kSize, numbers.begin())); } #endif // GTEST_HAS_HASH_SET_ TEST(PrintStlContainerTest, List) { const string a[] = { "hello", "world" }; const list strings(a, a + 2); EXPECT_EQ("{ \"hello\", \"world\" }", Print(strings)); } TEST(PrintStlContainerTest, Map) { map map1; map1[1] = true; map1[5] = false; map1[3] = true; EXPECT_EQ("{ (1, true), (3, true), (5, false) }", Print(map1)); } TEST(PrintStlContainerTest, MultiMap) { multimap map1; // The make_pair template function would deduce the type as // pair here, and since the key part in a multimap has to // be constant, without a templated ctor in the pair class (as in // libCstd on Solaris), make_pair call would fail to compile as no // implicit conversion is found. Thus explicit typename is used // here instead. 
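  // Illustrative contrast of the two spellings (the make_pair form is the
  // one that may fail to compile on such libraries; the explicit form is
  // what is actually used below):
  //
  //   map1.insert(make_pair(true, 0));              // deduces pair<bool, int>
  //   map1.insert(pair<const bool, int>(true, 0));  // key type spelled out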
map1.insert(pair(true, 0)); map1.insert(pair(true, 1)); map1.insert(pair(false, 2)); EXPECT_EQ("{ (false, 2), (true, 0), (true, 1) }", Print(map1)); } TEST(PrintStlContainerTest, Set) { const unsigned int a[] = { 3, 0, 5 }; set set1(a, a + 3); EXPECT_EQ("{ 0, 3, 5 }", Print(set1)); } TEST(PrintStlContainerTest, MultiSet) { const int a[] = { 1, 1, 2, 5, 1 }; multiset set1(a, a + 5); EXPECT_EQ("{ 1, 1, 1, 2, 5 }", Print(set1)); } TEST(PrintStlContainerTest, Pair) { pair p(true, 5); EXPECT_EQ("(true, 5)", Print(p)); } TEST(PrintStlContainerTest, Vector) { vector v; v.push_back(1); v.push_back(2); EXPECT_EQ("{ 1, 2 }", Print(v)); } TEST(PrintStlContainerTest, LongSequence) { const int a[100] = { 1, 2, 3 }; const vector v(a, a + 100); EXPECT_EQ("{ 1, 2, 3, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, " "0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... }", Print(v)); } TEST(PrintStlContainerTest, NestedContainer) { const int a1[] = { 1, 2 }; const int a2[] = { 3, 4, 5 }; const list l1(a1, a1 + 2); const list l2(a2, a2 + 3); vector > v; v.push_back(l1); v.push_back(l2); EXPECT_EQ("{ { 1, 2 }, { 3, 4, 5 } }", Print(v)); } TEST(PrintStlContainerTest, OneDimensionalNativeArray) { const int a[3] = { 1, 2, 3 }; NativeArray b(a, 3, kReference); EXPECT_EQ("{ 1, 2, 3 }", Print(b)); } TEST(PrintStlContainerTest, TwoDimensionalNativeArray) { const int a[2][3] = { { 1, 2, 3 }, { 4, 5, 6 } }; NativeArray b(a, 2, kReference); EXPECT_EQ("{ { 1, 2, 3 }, { 4, 5, 6 } }", Print(b)); } // Tests that a class named iterator isn't treated as a container. struct iterator { char x; }; TEST(PrintStlContainerTest, Iterator) { iterator it = {}; EXPECT_EQ("1-byte object <00>", Print(it)); } // Tests that a class named const_iterator isn't treated as a container. struct const_iterator { char x; }; TEST(PrintStlContainerTest, ConstIterator) { const_iterator it = {}; EXPECT_EQ("1-byte object <00>", Print(it)); } #if GTEST_HAS_TR1_TUPLE // Tests printing tuples. // Tuples of various arities. TEST(PrintTupleTest, VariousSizes) { tuple<> t0; EXPECT_EQ("()", Print(t0)); tuple t1(5); EXPECT_EQ("(5)", Print(t1)); tuple t2('a', true); EXPECT_EQ("('a' (97, 0x61), true)", Print(t2)); tuple t3(false, 2, 3); EXPECT_EQ("(false, 2, 3)", Print(t3)); tuple t4(false, 2, 3, 4); EXPECT_EQ("(false, 2, 3, 4)", Print(t4)); tuple t5(false, 2, 3, 4, true); EXPECT_EQ("(false, 2, 3, 4, true)", Print(t5)); tuple t6(false, 2, 3, 4, true, 6); EXPECT_EQ("(false, 2, 3, 4, true, 6)", Print(t6)); tuple t7(false, 2, 3, 4, true, 6, 7); EXPECT_EQ("(false, 2, 3, 4, true, 6, 7)", Print(t7)); tuple t8( false, 2, 3, 4, true, 6, 7, true); EXPECT_EQ("(false, 2, 3, 4, true, 6, 7, true)", Print(t8)); tuple t9( false, 2, 3, 4, true, 6, 7, true, 9); EXPECT_EQ("(false, 2, 3, 4, true, 6, 7, true, 9)", Print(t9)); const char* const str = "8"; // VC++ 2010's implementation of tuple of C++0x is deficient, requiring // an explicit type cast of NULL to be used. tuple t10(false, 'a', 3, 4, 5, 1.5F, -2.5, str, ImplicitCast_(NULL), "10"); EXPECT_EQ("(false, 'a' (97, 0x61), 3, 4, 5, 1.5, -2.5, " + PrintPointer(str) + " pointing to \"8\", NULL, \"10\")", Print(t10)); } // Nested tuples. TEST(PrintTupleTest, NestedTuple) { tuple, char> nested(make_tuple(5, true), 'a'); EXPECT_EQ("((5, true), 'a' (97, 0x61))", Print(nested)); } #endif // GTEST_HAS_TR1_TUPLE // Tests printing user-defined unprintable types. // Unprintable types in the global namespace. 
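// Illustrative sketch of the fallback exercised below (SketchPoint and the
// test name are hypothetical): a type with neither an operator<< nor a
// PrintTo() overload is dumped as raw bytes, so only the "N-byte object "
// prefix is asserted here.  Supplying operator<< or PrintTo() - as the
// streamable and PrintTo tests further down do - replaces the byte dump
// with a user-provided representation.
struct SketchPoint {
  int x;
  int y;
};
TEST(PrintUnprintableTypeSketchTest, FallsBackToByteDump) {
  const SketchPoint p = SketchPoint();
  EXPECT_TRUE(HasPrefix(Print(p), Print(sizeof(p)) + "-byte object "));
}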
TEST(PrintUnprintableTypeTest, InGlobalNamespace) { EXPECT_EQ("1-byte object <00>", Print(UnprintableTemplateInGlobal())); } // Unprintable types in a user namespace. TEST(PrintUnprintableTypeTest, InUserNamespace) { EXPECT_EQ("16-byte object ", Print(::foo::UnprintableInFoo())); } // Unprintable types are that too big to be printed completely. struct Big { Big() { memset(array, 0, sizeof(array)); } char array[257]; }; TEST(PrintUnpritableTypeTest, BigObject) { EXPECT_EQ("257-byte object <00-00 00-00 00-00 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 ... 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 " "00-00 00-00 00-00 00-00 00-00 00-00 00-00 00-00 00>", Print(Big())); } // Tests printing user-defined streamable types. // Streamable types in the global namespace. TEST(PrintStreamableTypeTest, InGlobalNamespace) { StreamableInGlobal x; EXPECT_EQ("StreamableInGlobal", Print(x)); EXPECT_EQ("StreamableInGlobal*", Print(&x)); } // Printable template types in a user namespace. TEST(PrintStreamableTypeTest, TemplateTypeInUserNamespace) { EXPECT_EQ("StreamableTemplateInFoo: 0", Print(::foo::StreamableTemplateInFoo())); } // Tests printing user-defined types that have a PrintTo() function. TEST(PrintPrintableTypeTest, InUserNamespace) { EXPECT_EQ("PrintableViaPrintTo: 0", Print(::foo::PrintableViaPrintTo())); } // Tests printing a pointer to a user-defined type that has a << // operator for its pointer. TEST(PrintPrintableTypeTest, PointerInUserNamespace) { ::foo::PointerPrintable x; EXPECT_EQ("PointerPrintable*", Print(&x)); } // Tests printing user-defined class template that have a PrintTo() function. TEST(PrintPrintableTypeTest, TemplateInUserNamespace) { EXPECT_EQ("PrintableViaPrintToTemplate: 5", Print(::foo::PrintableViaPrintToTemplate(5))); } #if GTEST_HAS_PROTOBUF_ // Tests printing a protocol message. TEST(PrintProtocolMessageTest, PrintsShortDebugString) { testing::internal::TestMessage msg; msg.set_member("yes"); EXPECT_EQ("", Print(msg)); } // Tests printing a short proto2 message. TEST(PrintProto2MessageTest, PrintsShortDebugStringWhenItIsShort) { testing::internal::FooMessage msg; msg.set_int_field(2); msg.set_string_field("hello"); EXPECT_PRED2(RE::FullMatch, Print(msg), ""); } // Tests printing a long proto2 message. TEST(PrintProto2MessageTest, PrintsDebugStringWhenItIsLong) { testing::internal::FooMessage msg; msg.set_int_field(2); msg.set_string_field("hello"); msg.add_names("peter"); msg.add_names("paul"); msg.add_names("mary"); EXPECT_PRED2(RE::FullMatch, Print(msg), "<\n" "int_field:\\s*2\n" "string_field:\\s*\"hello\"\n" "names:\\s*\"peter\"\n" "names:\\s*\"paul\"\n" "names:\\s*\"mary\"\n" ">"); } #endif // GTEST_HAS_PROTOBUF_ // Tests that the universal printer prints both the address and the // value of a reference. TEST(PrintReferenceTest, PrintsAddressAndValue) { int n = 5; EXPECT_EQ("@" + PrintPointer(&n) + " 5", PrintByRef(n)); int a[2][3] = { { 0, 1, 2 }, { 3, 4, 5 } }; EXPECT_EQ("@" + PrintPointer(a) + " { { 0, 1, 2 }, { 3, 4, 5 } }", PrintByRef(a)); const ::foo::UnprintableInFoo x; EXPECT_EQ("@" + PrintPointer(&x) + " 16-byte object " "", PrintByRef(x)); } // Tests that the universal printer prints a function pointer passed by // reference. 
TEST(PrintReferenceTest, HandlesFunctionPointer) { void (*fp)(int n) = &MyFunction; const string fp_pointer_string = PrintPointer(reinterpret_cast(&fp)); // We cannot directly cast &MyFunction to const void* because the // standard disallows casting between pointers to functions and // pointers to objects, and some compilers (e.g. GCC 3.4) enforce // this limitation. const string fp_string = PrintPointer(reinterpret_cast( reinterpret_cast(fp))); EXPECT_EQ("@" + fp_pointer_string + " " + fp_string, PrintByRef(fp)); } // Tests that the universal printer prints a member function pointer // passed by reference. TEST(PrintReferenceTest, HandlesMemberFunctionPointer) { int (Foo::*p)(char ch) = &Foo::MyMethod; EXPECT_TRUE(HasPrefix( PrintByRef(p), "@" + PrintPointer(reinterpret_cast(&p)) + " " + Print(sizeof(p)) + "-byte object ")); char (Foo::*p2)(int n) = &Foo::MyVirtualMethod; EXPECT_TRUE(HasPrefix( PrintByRef(p2), "@" + PrintPointer(reinterpret_cast(&p2)) + " " + Print(sizeof(p2)) + "-byte object ")); } // Tests that the universal printer prints a member variable pointer // passed by reference. TEST(PrintReferenceTest, HandlesMemberVariablePointer) { int (Foo::*p) = &Foo::value; // NOLINT EXPECT_TRUE(HasPrefix( PrintByRef(p), "@" + PrintPointer(&p) + " " + Print(sizeof(p)) + "-byte object ")); } // Tests that FormatForComparisonFailureMessage(), which is used to print // an operand in a comparison assertion (e.g. ASSERT_EQ) when the assertion // fails, formats the operand in the desired way. // scalar TEST(FormatForComparisonFailureMessageTest, WorksForScalar) { EXPECT_STREQ("123", FormatForComparisonFailureMessage(123, 124).c_str()); } // non-char pointer TEST(FormatForComparisonFailureMessageTest, WorksForNonCharPointer) { int n = 0; EXPECT_EQ(PrintPointer(&n), FormatForComparisonFailureMessage(&n, &n).c_str()); } // non-char array TEST(FormatForComparisonFailureMessageTest, FormatsNonCharArrayAsPointer) { // In expression 'array == x', 'array' is compared by pointer. // Therefore we want to print an array operand as a pointer. int n[] = { 1, 2, 3 }; EXPECT_EQ(PrintPointer(n), FormatForComparisonFailureMessage(n, n).c_str()); } // Tests formatting a char pointer when it's compared with another pointer. // In this case we want to print it as a raw pointer, as the comparision is by // pointer. // char pointer vs pointer TEST(FormatForComparisonFailureMessageTest, WorksForCharPointerVsPointer) { // In expression 'p == x', where 'p' and 'x' are (const or not) char // pointers, the operands are compared by pointer. Therefore we // want to print 'p' as a pointer instead of a C string (we don't // even know if it's supposed to point to a valid C string). // const char* const char* s = "hello"; EXPECT_EQ(PrintPointer(s), FormatForComparisonFailureMessage(s, s).c_str()); // char* char ch = 'a'; EXPECT_EQ(PrintPointer(&ch), FormatForComparisonFailureMessage(&ch, &ch).c_str()); } // wchar_t pointer vs pointer TEST(FormatForComparisonFailureMessageTest, WorksForWCharPointerVsPointer) { // In expression 'p == x', where 'p' and 'x' are (const or not) char // pointers, the operands are compared by pointer. Therefore we // want to print 'p' as a pointer instead of a wide C string (we don't // even know if it's supposed to point to a valid wide C string). 
// const wchar_t* const wchar_t* s = L"hello"; EXPECT_EQ(PrintPointer(s), FormatForComparisonFailureMessage(s, s).c_str()); // wchar_t* wchar_t ch = L'a'; EXPECT_EQ(PrintPointer(&ch), FormatForComparisonFailureMessage(&ch, &ch).c_str()); } // Tests formatting a char pointer when it's compared to a string object. // In this case we want to print the char pointer as a C string. #if GTEST_HAS_GLOBAL_STRING // char pointer vs ::string TEST(FormatForComparisonFailureMessageTest, WorksForCharPointerVsString) { const char* s = "hello \"world"; EXPECT_STREQ("\"hello \\\"world\"", // The string content should be escaped. FormatForComparisonFailureMessage(s, ::string()).c_str()); // char* char str[] = "hi\1"; char* p = str; EXPECT_STREQ("\"hi\\x1\"", // The string content should be escaped. FormatForComparisonFailureMessage(p, ::string()).c_str()); } #endif // char pointer vs std::string TEST(FormatForComparisonFailureMessageTest, WorksForCharPointerVsStdString) { const char* s = "hello \"world"; EXPECT_STREQ("\"hello \\\"world\"", // The string content should be escaped. FormatForComparisonFailureMessage(s, ::std::string()).c_str()); // char* char str[] = "hi\1"; char* p = str; EXPECT_STREQ("\"hi\\x1\"", // The string content should be escaped. FormatForComparisonFailureMessage(p, ::std::string()).c_str()); } #if GTEST_HAS_GLOBAL_WSTRING // wchar_t pointer vs ::wstring TEST(FormatForComparisonFailureMessageTest, WorksForWCharPointerVsWString) { const wchar_t* s = L"hi \"world"; EXPECT_STREQ("L\"hi \\\"world\"", // The string content should be escaped. FormatForComparisonFailureMessage(s, ::wstring()).c_str()); // wchar_t* wchar_t str[] = L"hi\1"; wchar_t* p = str; EXPECT_STREQ("L\"hi\\x1\"", // The string content should be escaped. FormatForComparisonFailureMessage(p, ::wstring()).c_str()); } #endif #if GTEST_HAS_STD_WSTRING // wchar_t pointer vs std::wstring TEST(FormatForComparisonFailureMessageTest, WorksForWCharPointerVsStdWString) { const wchar_t* s = L"hi \"world"; EXPECT_STREQ("L\"hi \\\"world\"", // The string content should be escaped. FormatForComparisonFailureMessage(s, ::std::wstring()).c_str()); // wchar_t* wchar_t str[] = L"hi\1"; wchar_t* p = str; EXPECT_STREQ("L\"hi\\x1\"", // The string content should be escaped. FormatForComparisonFailureMessage(p, ::std::wstring()).c_str()); } #endif // Tests formatting a char array when it's compared with a pointer or array. // In this case we want to print the array as a row pointer, as the comparison // is by pointer. // char array vs pointer TEST(FormatForComparisonFailureMessageTest, WorksForCharArrayVsPointer) { char str[] = "hi \"world\""; char* p = NULL; EXPECT_EQ(PrintPointer(str), FormatForComparisonFailureMessage(str, p).c_str()); } // char array vs char array TEST(FormatForComparisonFailureMessageTest, WorksForCharArrayVsCharArray) { const char str[] = "hi \"world\""; EXPECT_EQ(PrintPointer(str), FormatForComparisonFailureMessage(str, str).c_str()); } // wchar_t array vs pointer TEST(FormatForComparisonFailureMessageTest, WorksForWCharArrayVsPointer) { wchar_t str[] = L"hi \"world\""; wchar_t* p = NULL; EXPECT_EQ(PrintPointer(str), FormatForComparisonFailureMessage(str, p).c_str()); } // wchar_t array vs wchar_t array TEST(FormatForComparisonFailureMessageTest, WorksForWCharArrayVsWCharArray) { const wchar_t str[] = L"hi \"world\""; EXPECT_EQ(PrintPointer(str), FormatForComparisonFailureMessage(str, str).c_str()); } // Tests formatting a char array when it's compared with a string object. 
// In this case we want to print the array as a C string. #if GTEST_HAS_GLOBAL_STRING // char array vs string TEST(FormatForComparisonFailureMessageTest, WorksForCharArrayVsString) { const char str[] = "hi \"w\0rld\""; EXPECT_STREQ("\"hi \\\"w\"", // The content should be escaped. // Embedded NUL terminates the string. FormatForComparisonFailureMessage(str, ::string()).c_str()); } #endif // char array vs std::string TEST(FormatForComparisonFailureMessageTest, WorksForCharArrayVsStdString) { const char str[] = "hi \"world\""; EXPECT_STREQ("\"hi \\\"world\\\"\"", // The content should be escaped. FormatForComparisonFailureMessage(str, ::std::string()).c_str()); } #if GTEST_HAS_GLOBAL_WSTRING // wchar_t array vs wstring TEST(FormatForComparisonFailureMessageTest, WorksForWCharArrayVsWString) { const wchar_t str[] = L"hi \"world\""; EXPECT_STREQ("L\"hi \\\"world\\\"\"", // The content should be escaped. FormatForComparisonFailureMessage(str, ::wstring()).c_str()); } #endif #if GTEST_HAS_STD_WSTRING // wchar_t array vs std::wstring TEST(FormatForComparisonFailureMessageTest, WorksForWCharArrayVsStdWString) { const wchar_t str[] = L"hi \"w\0rld\""; EXPECT_STREQ( "L\"hi \\\"w\"", // The content should be escaped. // Embedded NUL terminates the string. FormatForComparisonFailureMessage(str, ::std::wstring()).c_str()); } #endif // Useful for testing PrintToString(). We cannot use EXPECT_EQ() // there as its implementation uses PrintToString(). The caller must // ensure that 'value' has no side effect. #define EXPECT_PRINT_TO_STRING_(value, expected_string) \ EXPECT_TRUE(PrintToString(value) == (expected_string)) \ << " where " #value " prints as " << (PrintToString(value)) TEST(PrintToStringTest, WorksForScalar) { EXPECT_PRINT_TO_STRING_(123, "123"); } TEST(PrintToStringTest, WorksForPointerToConstChar) { const char* p = "hello"; EXPECT_PRINT_TO_STRING_(p, "\"hello\""); } TEST(PrintToStringTest, WorksForPointerToNonConstChar) { char s[] = "hello"; char* p = s; EXPECT_PRINT_TO_STRING_(p, "\"hello\""); } TEST(PrintToStringTest, EscapesForPointerToConstChar) { const char* p = "hello\n"; EXPECT_PRINT_TO_STRING_(p, "\"hello\\n\""); } TEST(PrintToStringTest, EscapesForPointerToNonConstChar) { char s[] = "hello\1"; char* p = s; EXPECT_PRINT_TO_STRING_(p, "\"hello\\x1\""); } TEST(PrintToStringTest, WorksForArray) { int n[3] = { 1, 2, 3 }; EXPECT_PRINT_TO_STRING_(n, "{ 1, 2, 3 }"); } TEST(PrintToStringTest, WorksForCharArray) { char s[] = "hello"; EXPECT_PRINT_TO_STRING_(s, "\"hello\""); } TEST(PrintToStringTest, WorksForCharArrayWithEmbeddedNul) { const char str_with_nul[] = "hello\0 world"; EXPECT_PRINT_TO_STRING_(str_with_nul, "\"hello\\0 world\""); char mutable_str_with_nul[] = "hello\0 world"; EXPECT_PRINT_TO_STRING_(mutable_str_with_nul, "\"hello\\0 world\""); } #undef EXPECT_PRINT_TO_STRING_ TEST(UniversalTersePrintTest, WorksForNonReference) { ::std::stringstream ss; UniversalTersePrint(123, &ss); EXPECT_EQ("123", ss.str()); } TEST(UniversalTersePrintTest, WorksForReference) { const int& n = 123; ::std::stringstream ss; UniversalTersePrint(n, &ss); EXPECT_EQ("123", ss.str()); } TEST(UniversalTersePrintTest, WorksForCString) { const char* s1 = "abc"; ::std::stringstream ss1; UniversalTersePrint(s1, &ss1); EXPECT_EQ("\"abc\"", ss1.str()); char* s2 = const_cast(s1); ::std::stringstream ss2; UniversalTersePrint(s2, &ss2); EXPECT_EQ("\"abc\"", ss2.str()); const char* s3 = NULL; ::std::stringstream ss3; UniversalTersePrint(s3, &ss3); EXPECT_EQ("NULL", ss3.str()); } TEST(UniversalPrintTest, 
WorksForNonReference) { ::std::stringstream ss; UniversalPrint(123, &ss); EXPECT_EQ("123", ss.str()); } TEST(UniversalPrintTest, WorksForReference) { const int& n = 123; ::std::stringstream ss; UniversalPrint(n, &ss); EXPECT_EQ("123", ss.str()); } TEST(UniversalPrintTest, WorksForCString) { const char* s1 = "abc"; ::std::stringstream ss1; UniversalPrint(s1, &ss1); EXPECT_EQ(PrintPointer(s1) + " pointing to \"abc\"", string(ss1.str())); char* s2 = const_cast(s1); ::std::stringstream ss2; UniversalPrint(s2, &ss2); EXPECT_EQ(PrintPointer(s2) + " pointing to \"abc\"", string(ss2.str())); const char* s3 = NULL; ::std::stringstream ss3; UniversalPrint(s3, &ss3); EXPECT_EQ("NULL", ss3.str()); } TEST(UniversalPrintTest, WorksForCharArray) { const char str[] = "\"Line\0 1\"\nLine 2"; ::std::stringstream ss1; UniversalPrint(str, &ss1); EXPECT_EQ("\"\\\"Line\\0 1\\\"\\nLine 2\"", ss1.str()); const char mutable_str[] = "\"Line\0 1\"\nLine 2"; ::std::stringstream ss2; UniversalPrint(mutable_str, &ss2); EXPECT_EQ("\"\\\"Line\\0 1\\\"\\nLine 2\"", ss2.str()); } #if GTEST_HAS_TR1_TUPLE TEST(UniversalTersePrintTupleFieldsToStringsTest, PrintsEmptyTuple) { Strings result = UniversalTersePrintTupleFieldsToStrings(make_tuple()); EXPECT_EQ(0u, result.size()); } TEST(UniversalTersePrintTupleFieldsToStringsTest, PrintsOneTuple) { Strings result = UniversalTersePrintTupleFieldsToStrings(make_tuple(1)); ASSERT_EQ(1u, result.size()); EXPECT_EQ("1", result[0]); } TEST(UniversalTersePrintTupleFieldsToStringsTest, PrintsTwoTuple) { Strings result = UniversalTersePrintTupleFieldsToStrings(make_tuple(1, 'a')); ASSERT_EQ(2u, result.size()); EXPECT_EQ("1", result[0]); EXPECT_EQ("'a' (97, 0x61)", result[1]); } TEST(UniversalTersePrintTupleFieldsToStringsTest, PrintsTersely) { const int n = 1; Strings result = UniversalTersePrintTupleFieldsToStrings( tuple(n, "a")); ASSERT_EQ(2u, result.size()); EXPECT_EQ("1", result[0]); EXPECT_EQ("\"a\"", result[1]); } #endif // GTEST_HAS_TR1_TUPLE } // namespace gtest_printers_test } // namespace testing ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-test-part_test.cc000066400000000000000000000161621346026536000236750ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // #include "gtest/gtest-test-part.h" #include "gtest/gtest.h" using testing::Message; using testing::Test; using testing::TestPartResult; using testing::TestPartResultArray; namespace { // Tests the TestPartResult class. // The test fixture for testing TestPartResult. class TestPartResultTest : public Test { protected: TestPartResultTest() : r1_(TestPartResult::kSuccess, "foo/bar.cc", 10, "Success!"), r2_(TestPartResult::kNonFatalFailure, "foo/bar.cc", -1, "Failure!"), r3_(TestPartResult::kFatalFailure, NULL, -1, "Failure!") {} TestPartResult r1_, r2_, r3_; }; TEST_F(TestPartResultTest, ConstructorWorks) { Message message; message << "something is terribly wrong"; message << static_cast(testing::internal::kStackTraceMarker); message << "some unimportant stack trace"; const TestPartResult result(TestPartResult::kNonFatalFailure, "some_file.cc", 42, message.GetString().c_str()); EXPECT_EQ(TestPartResult::kNonFatalFailure, result.type()); EXPECT_STREQ("some_file.cc", result.file_name()); EXPECT_EQ(42, result.line_number()); EXPECT_STREQ(message.GetString().c_str(), result.message()); EXPECT_STREQ("something is terribly wrong", result.summary()); } TEST_F(TestPartResultTest, ResultAccessorsWork) { const TestPartResult success(TestPartResult::kSuccess, "file.cc", 42, "message"); EXPECT_TRUE(success.passed()); EXPECT_FALSE(success.failed()); EXPECT_FALSE(success.nonfatally_failed()); EXPECT_FALSE(success.fatally_failed()); const TestPartResult nonfatal_failure(TestPartResult::kNonFatalFailure, "file.cc", 42, "message"); EXPECT_FALSE(nonfatal_failure.passed()); EXPECT_TRUE(nonfatal_failure.failed()); EXPECT_TRUE(nonfatal_failure.nonfatally_failed()); EXPECT_FALSE(nonfatal_failure.fatally_failed()); const TestPartResult fatal_failure(TestPartResult::kFatalFailure, "file.cc", 42, "message"); EXPECT_FALSE(fatal_failure.passed()); EXPECT_TRUE(fatal_failure.failed()); EXPECT_FALSE(fatal_failure.nonfatally_failed()); EXPECT_TRUE(fatal_failure.fatally_failed()); } // Tests TestPartResult::type(). TEST_F(TestPartResultTest, type) { EXPECT_EQ(TestPartResult::kSuccess, r1_.type()); EXPECT_EQ(TestPartResult::kNonFatalFailure, r2_.type()); EXPECT_EQ(TestPartResult::kFatalFailure, r3_.type()); } // Tests TestPartResult::file_name(). TEST_F(TestPartResultTest, file_name) { EXPECT_STREQ("foo/bar.cc", r1_.file_name()); EXPECT_STREQ(NULL, r3_.file_name()); } // Tests TestPartResult::line_number(). TEST_F(TestPartResultTest, line_number) { EXPECT_EQ(10, r1_.line_number()); EXPECT_EQ(-1, r2_.line_number()); } // Tests TestPartResult::message(). TEST_F(TestPartResultTest, message) { EXPECT_STREQ("Success!", r1_.message()); } // Tests TestPartResult::passed(). TEST_F(TestPartResultTest, Passed) { EXPECT_TRUE(r1_.passed()); EXPECT_FALSE(r2_.passed()); EXPECT_FALSE(r3_.passed()); } // Tests TestPartResult::failed(). 
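// Illustrative sketch of where TestPartResult objects come from in practice
// (EmptyTestEventListener and OnTestPartResult belong to the public listener
// API; the listener name below is hypothetical): every assertion produces a
// TestPartResult, and a custom listener can inspect it through the same
// accessors tested in this file.
//
//   class FailureLogger : public EmptyTestEventListener {
//     virtual void OnTestPartResult(const TestPartResult& result) {
//       if (result.failed()) {
//         printf("%s(%d): %s\n",
//                result.file_name() == NULL ? "unknown" : result.file_name(),
//                result.line_number(), result.summary());
//       }
//     }
//   };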
TEST_F(TestPartResultTest, Failed) { EXPECT_FALSE(r1_.failed()); EXPECT_TRUE(r2_.failed()); EXPECT_TRUE(r3_.failed()); } // Tests TestPartResult::fatally_failed(). TEST_F(TestPartResultTest, FatallyFailed) { EXPECT_FALSE(r1_.fatally_failed()); EXPECT_FALSE(r2_.fatally_failed()); EXPECT_TRUE(r3_.fatally_failed()); } // Tests TestPartResult::nonfatally_failed(). TEST_F(TestPartResultTest, NonfatallyFailed) { EXPECT_FALSE(r1_.nonfatally_failed()); EXPECT_TRUE(r2_.nonfatally_failed()); EXPECT_FALSE(r3_.nonfatally_failed()); } // Tests the TestPartResultArray class. class TestPartResultArrayTest : public Test { protected: TestPartResultArrayTest() : r1_(TestPartResult::kNonFatalFailure, "foo/bar.cc", -1, "Failure 1"), r2_(TestPartResult::kFatalFailure, "foo/bar.cc", -1, "Failure 2") {} const TestPartResult r1_, r2_; }; // Tests that TestPartResultArray initially has size 0. TEST_F(TestPartResultArrayTest, InitialSizeIsZero) { TestPartResultArray results; EXPECT_EQ(0, results.size()); } // Tests that TestPartResultArray contains the given TestPartResult // after one Append() operation. TEST_F(TestPartResultArrayTest, ContainsGivenResultAfterAppend) { TestPartResultArray results; results.Append(r1_); EXPECT_EQ(1, results.size()); EXPECT_STREQ("Failure 1", results.GetTestPartResult(0).message()); } // Tests that TestPartResultArray contains the given TestPartResults // after two Append() operations. TEST_F(TestPartResultArrayTest, ContainsGivenResultsAfterTwoAppends) { TestPartResultArray results; results.Append(r1_); results.Append(r2_); EXPECT_EQ(2, results.size()); EXPECT_STREQ("Failure 1", results.GetTestPartResult(0).message()); EXPECT_STREQ("Failure 2", results.GetTestPartResult(1).message()); } typedef TestPartResultArrayTest TestPartResultArrayDeathTest; // Tests that the program dies when GetTestPartResult() is called with // an invalid index. TEST_F(TestPartResultArrayDeathTest, DiesWhenIndexIsOutOfBound) { TestPartResultArray results; results.Append(r1_); EXPECT_DEATH_IF_SUPPORTED(results.GetTestPartResult(-1), ""); EXPECT_DEATH_IF_SUPPORTED(results.GetTestPartResult(1), ""); } // TODO(mheule@google.com): Add a test for the class HasNewFatalFailureHelper. } // namespace ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-tuple_test.cc000066400000000000000000000220421346026536000230750ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/internal/gtest-tuple.h" #include #include "gtest/gtest.h" namespace { using ::std::tr1::get; using ::std::tr1::make_tuple; using ::std::tr1::tuple; using ::std::tr1::tuple_element; using ::std::tr1::tuple_size; using ::testing::StaticAssertTypeEq; // Tests that tuple_element >::type returns TK. TEST(tuple_element_Test, ReturnsElementType) { StaticAssertTypeEq >::type>(); StaticAssertTypeEq >::type>(); StaticAssertTypeEq >::type>(); } // Tests that tuple_size::value gives the number of fields in tuple // type T. TEST(tuple_size_Test, ReturnsNumberOfFields) { EXPECT_EQ(0, +tuple_size >::value); EXPECT_EQ(1, +tuple_size >::value); EXPECT_EQ(1, +tuple_size >::value); EXPECT_EQ(1, +(tuple_size > >::value)); EXPECT_EQ(2, +(tuple_size >::value)); EXPECT_EQ(3, +(tuple_size >::value)); } // Tests comparing a tuple with itself. TEST(ComparisonTest, ComparesWithSelf) { const tuple a(5, 'a', false); EXPECT_TRUE(a == a); EXPECT_FALSE(a != a); } // Tests comparing two tuples with the same value. TEST(ComparisonTest, ComparesEqualTuples) { const tuple a(5, true), b(5, true); EXPECT_TRUE(a == b); EXPECT_FALSE(a != b); } // Tests comparing two different tuples that have no reference fields. TEST(ComparisonTest, ComparesUnequalTuplesWithoutReferenceFields) { typedef tuple FooTuple; const FooTuple a(0, 'x'); const FooTuple b(1, 'a'); EXPECT_TRUE(a != b); EXPECT_FALSE(a == b); const FooTuple c(1, 'b'); EXPECT_TRUE(b != c); EXPECT_FALSE(b == c); } // Tests comparing two different tuples that have reference fields. TEST(ComparisonTest, ComparesUnequalTuplesWithReferenceFields) { typedef tuple FooTuple; int i = 5; const char ch = 'a'; const FooTuple a(i, ch); int j = 6; const FooTuple b(j, ch); EXPECT_TRUE(a != b); EXPECT_FALSE(a == b); j = 5; const char ch2 = 'b'; const FooTuple c(j, ch2); EXPECT_TRUE(b != c); EXPECT_FALSE(b == c); } // Tests that a tuple field with a reference type is an alias of the // variable it's supposed to reference. TEST(ReferenceFieldTest, IsAliasOfReferencedVariable) { int n = 0; tuple t(true, n); n = 1; EXPECT_EQ(n, get<1>(t)) << "Changing a underlying variable should update the reference field."; // Makes sure that the implementation doesn't do anything funny with // the & operator for the return type of get<>(). EXPECT_EQ(&n, &(get<1>(t))) << "The address of a reference field should equal the address of " << "the underlying variable."; get<1>(t) = 2; EXPECT_EQ(2, n) << "Changing a reference field should update the underlying variable."; } // Tests that tuple's default constructor default initializes each field. // This test needs to compile without generating warnings. TEST(TupleConstructorTest, DefaultConstructorDefaultInitializesEachField) { // The TR1 report requires that tuple's default constructor default // initializes each field, even if it's a primitive type. 
If the // implementation forgets to do this, this test will catch it by // generating warnings about using uninitialized variables (assuming // a decent compiler). tuple<> empty; tuple a1, b1; b1 = a1; EXPECT_EQ(0, get<0>(b1)); tuple a2, b2; b2 = a2; EXPECT_EQ(0, get<0>(b2)); EXPECT_EQ(0.0, get<1>(b2)); tuple a3, b3; b3 = a3; EXPECT_EQ(0.0, get<0>(b3)); EXPECT_EQ('\0', get<1>(b3)); EXPECT_TRUE(get<2>(b3) == NULL); tuple a10, b10; b10 = a10; EXPECT_EQ(0, get<0>(b10)); EXPECT_EQ(0, get<1>(b10)); EXPECT_EQ(0, get<2>(b10)); EXPECT_EQ(0, get<3>(b10)); EXPECT_EQ(0, get<4>(b10)); EXPECT_EQ(0, get<5>(b10)); EXPECT_EQ(0, get<6>(b10)); EXPECT_EQ(0, get<7>(b10)); EXPECT_EQ(0, get<8>(b10)); EXPECT_EQ(0, get<9>(b10)); } // Tests constructing a tuple from its fields. TEST(TupleConstructorTest, ConstructsFromFields) { int n = 1; // Reference field. tuple a(n); EXPECT_EQ(&n, &(get<0>(a))); // Non-reference fields. tuple b(5, 'a'); EXPECT_EQ(5, get<0>(b)); EXPECT_EQ('a', get<1>(b)); // Const reference field. const int m = 2; tuple c(true, m); EXPECT_TRUE(get<0>(c)); EXPECT_EQ(&m, &(get<1>(c))); } // Tests tuple's copy constructor. TEST(TupleConstructorTest, CopyConstructor) { tuple a(0.0, true); tuple b(a); EXPECT_DOUBLE_EQ(0.0, get<0>(b)); EXPECT_TRUE(get<1>(b)); } // Tests constructing a tuple from another tuple that has a compatible // but different type. TEST(TupleConstructorTest, ConstructsFromDifferentTupleType) { tuple a(0, 1, 'a'); tuple b(a); EXPECT_DOUBLE_EQ(0.0, get<0>(b)); EXPECT_EQ(1, get<1>(b)); EXPECT_EQ('a', get<2>(b)); } // Tests constructing a 2-tuple from an std::pair. TEST(TupleConstructorTest, ConstructsFromPair) { ::std::pair a(1, 'a'); tuple b(a); tuple c(a); } // Tests assigning a tuple to another tuple with the same type. TEST(TupleAssignmentTest, AssignsToSameTupleType) { const tuple a(5, 7L); tuple b; b = a; EXPECT_EQ(5, get<0>(b)); EXPECT_EQ(7L, get<1>(b)); } // Tests assigning a tuple to another tuple with a different but // compatible type. TEST(TupleAssignmentTest, AssignsToDifferentTupleType) { const tuple a(1, 7L, true); tuple b; b = a; EXPECT_EQ(1L, get<0>(b)); EXPECT_EQ(7, get<1>(b)); EXPECT_TRUE(get<2>(b)); } // Tests assigning an std::pair to a 2-tuple. TEST(TupleAssignmentTest, AssignsFromPair) { const ::std::pair a(5, true); tuple b; b = a; EXPECT_EQ(5, get<0>(b)); EXPECT_TRUE(get<1>(b)); tuple c; c = a; EXPECT_EQ(5L, get<0>(c)); EXPECT_TRUE(get<1>(c)); } // A fixture for testing big tuples. class BigTupleTest : public testing::Test { protected: typedef tuple BigTuple; BigTupleTest() : a_(1, 0, 0, 0, 0, 0, 0, 0, 0, 2), b_(1, 0, 0, 0, 0, 0, 0, 0, 0, 3) {} BigTuple a_, b_; }; // Tests constructing big tuples. TEST_F(BigTupleTest, Construction) { BigTuple a; BigTuple b(b_); } // Tests that get(t) returns the N-th (0-based) field of tuple t. TEST_F(BigTupleTest, get) { EXPECT_EQ(1, get<0>(a_)); EXPECT_EQ(2, get<9>(a_)); // Tests that get() works on a const tuple too. const BigTuple a(a_); EXPECT_EQ(1, get<0>(a)); EXPECT_EQ(2, get<9>(a)); } // Tests comparing big tuples. 
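// Illustrative sketch (hypothetical test name, arbitrary values): tuple
// comparison is element-wise, so tuples that differ in any single field
// compare unequal regardless of how many fields match.
TEST(ComparisonSketchTest, SingleDifferingFieldMakesTuplesUnequal) {
  const tuple<int, char, bool> a(1, 'x', true);
  const tuple<int, char, bool> b(1, 'x', false);
  EXPECT_FALSE(a == b);
  EXPECT_TRUE(a != b);
}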
TEST_F(BigTupleTest, Comparisons) { EXPECT_TRUE(a_ == a_); EXPECT_FALSE(a_ != a_); EXPECT_TRUE(a_ != b_); EXPECT_FALSE(a_ == b_); } TEST(MakeTupleTest, WorksForScalarTypes) { tuple a; a = make_tuple(true, 5); EXPECT_TRUE(get<0>(a)); EXPECT_EQ(5, get<1>(a)); tuple b; b = make_tuple('a', 'b', 5); EXPECT_EQ('a', get<0>(b)); EXPECT_EQ('b', get<1>(b)); EXPECT_EQ(5, get<2>(b)); } TEST(MakeTupleTest, WorksForPointers) { int a[] = { 1, 2, 3, 4 }; const char* const str = "hi"; int* const p = a; tuple t; t = make_tuple(str, p); EXPECT_EQ(str, get<0>(t)); EXPECT_EQ(p, get<1>(t)); } } // namespace ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-typed-test2_test.cc000066400000000000000000000040131346026536000241260ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include #include "test/gtest-typed-test_test.h" #include "gtest/gtest.h" #if GTEST_HAS_TYPED_TEST_P // Tests that the same type-parameterized test case can be // instantiated in different translation units linked together. // (ContainerTest is also instantiated in gtest-typed-test_test.cc.) INSTANTIATE_TYPED_TEST_CASE_P(Vector, ContainerTest, testing::Types >); #endif // GTEST_HAS_TYPED_TEST_P ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-typed-test_test.cc000066400000000000000000000261511346026536000240530ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include #include #include "test/gtest-typed-test_test.h" #include "gtest/gtest.h" using testing::Test; // Used for testing that SetUpTestCase()/TearDownTestCase(), fixture // ctor/dtor, and SetUp()/TearDown() work correctly in typed tests and // type-parameterized test. template class CommonTest : public Test { // For some technical reason, SetUpTestCase() and TearDownTestCase() // must be public. public: static void SetUpTestCase() { shared_ = new T(5); } static void TearDownTestCase() { delete shared_; shared_ = NULL; } // This 'protected:' is optional. There's no harm in making all // members of this fixture class template public. protected: // We used to use std::list here, but switched to std::vector since // MSVC's doesn't compile cleanly with /W4. typedef std::vector Vector; typedef std::set IntSet; CommonTest() : value_(1) {} virtual ~CommonTest() { EXPECT_EQ(3, value_); } virtual void SetUp() { EXPECT_EQ(1, value_); value_++; } virtual void TearDown() { EXPECT_EQ(2, value_); value_++; } T value_; static T* shared_; }; template T* CommonTest::shared_ = NULL; // This #ifdef block tests typed tests. #if GTEST_HAS_TYPED_TEST using testing::Types; // Tests that SetUpTestCase()/TearDownTestCase(), fixture ctor/dtor, // and SetUp()/TearDown() work correctly in typed tests typedef Types TwoTypes; TYPED_TEST_CASE(CommonTest, TwoTypes); TYPED_TEST(CommonTest, ValuesAreCorrect) { // Static members of the fixture class template can be visited via // the TestFixture:: prefix. EXPECT_EQ(5, *TestFixture::shared_); // Typedefs in the fixture class template can be visited via the // "typename TestFixture::" prefix. typename TestFixture::Vector empty; EXPECT_EQ(0U, empty.size()); typename TestFixture::IntSet empty2; EXPECT_EQ(0U, empty2.size()); // Non-static members of the fixture class must be visited via // 'this', as required by C++ for class templates. EXPECT_EQ(2, this->value_); } // The second test makes sure shared_ is not deleted after the first // test. TYPED_TEST(CommonTest, ValuesAreStillCorrect) { // Static members of the fixture class template can also be visited // via 'this'. ASSERT_TRUE(this->shared_ != NULL); EXPECT_EQ(5, *this->shared_); // TypeParam can be used to refer to the type parameter. EXPECT_EQ(static_cast(2), this->value_); } // Tests that multiple TYPED_TEST_CASE's can be defined in the same // translation unit. template class TypedTest1 : public Test { }; // Verifies that the second argument of TYPED_TEST_CASE can be a // single type. 
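// Illustrative sketch of the complete typed-test workflow relied on above
// (SketchFixture, SketchTypes and the test name are hypothetical): a
// templated fixture is bound to a Types<...> list with TYPED_TEST_CASE, and
// each TYPED_TEST body sees the current type as TypeParam.
template <typename T>
class SketchFixture : public Test {
 protected:
  SketchFixture() : initial_(T()) {}
  T initial_;
};
typedef Types<char, long> SketchTypes;
TYPED_TEST_CASE(SketchFixture, SketchTypes);
TYPED_TEST(SketchFixture, StartsValueInitialized) {
  EXPECT_EQ(TypeParam(), this->initial_);
}
// As noted above, the second argument may also be a single bare type
// instead of a Types<...> list: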
TYPED_TEST_CASE(TypedTest1, int); TYPED_TEST(TypedTest1, A) {} template class TypedTest2 : public Test { }; // Verifies that the second argument of TYPED_TEST_CASE can be a // Types<...> type list. TYPED_TEST_CASE(TypedTest2, Types); // This also verifies that tests from different typed test cases can // share the same name. TYPED_TEST(TypedTest2, A) {} // Tests that a typed test case can be defined in a namespace. namespace library1 { template class NumericTest : public Test { }; typedef Types NumericTypes; TYPED_TEST_CASE(NumericTest, NumericTypes); TYPED_TEST(NumericTest, DefaultIsZero) { EXPECT_EQ(0, TypeParam()); } } // namespace library1 #endif // GTEST_HAS_TYPED_TEST // This #ifdef block tests type-parameterized tests. #if GTEST_HAS_TYPED_TEST_P using testing::Types; using testing::internal::TypedTestCasePState; // Tests TypedTestCasePState. class TypedTestCasePStateTest : public Test { protected: virtual void SetUp() { state_.AddTestName("foo.cc", 0, "FooTest", "A"); state_.AddTestName("foo.cc", 0, "FooTest", "B"); state_.AddTestName("foo.cc", 0, "FooTest", "C"); } TypedTestCasePState state_; }; TEST_F(TypedTestCasePStateTest, SucceedsForMatchingList) { const char* tests = "A, B, C"; EXPECT_EQ(tests, state_.VerifyRegisteredTestNames("foo.cc", 1, tests)); } // Makes sure that the order of the tests and spaces around the names // don't matter. TEST_F(TypedTestCasePStateTest, IgnoresOrderAndSpaces) { const char* tests = "A,C, B"; EXPECT_EQ(tests, state_.VerifyRegisteredTestNames("foo.cc", 1, tests)); } typedef TypedTestCasePStateTest TypedTestCasePStateDeathTest; TEST_F(TypedTestCasePStateDeathTest, DetectsDuplicates) { EXPECT_DEATH_IF_SUPPORTED( state_.VerifyRegisteredTestNames("foo.cc", 1, "A, B, A, C"), "foo\\.cc.1.?: Test A is listed more than once\\."); } TEST_F(TypedTestCasePStateDeathTest, DetectsExtraTest) { EXPECT_DEATH_IF_SUPPORTED( state_.VerifyRegisteredTestNames("foo.cc", 1, "A, B, C, D"), "foo\\.cc.1.?: No test named D can be found in this test case\\."); } TEST_F(TypedTestCasePStateDeathTest, DetectsMissedTest) { EXPECT_DEATH_IF_SUPPORTED( state_.VerifyRegisteredTestNames("foo.cc", 1, "A, C"), "foo\\.cc.1.?: You forgot to list test B\\."); } // Tests that defining a test for a parameterized test case generates // a run-time error if the test case has been registered. TEST_F(TypedTestCasePStateDeathTest, DetectsTestAfterRegistration) { state_.VerifyRegisteredTestNames("foo.cc", 1, "A, B, C"); EXPECT_DEATH_IF_SUPPORTED( state_.AddTestName("foo.cc", 2, "FooTest", "D"), "foo\\.cc.2.?: Test D must be defined before REGISTER_TYPED_TEST_CASE_P" "\\(FooTest, \\.\\.\\.\\)\\."); } // Tests that SetUpTestCase()/TearDownTestCase(), fixture ctor/dtor, // and SetUp()/TearDown() work correctly in type-parameterized tests. template class DerivedTest : public CommonTest { }; TYPED_TEST_CASE_P(DerivedTest); TYPED_TEST_P(DerivedTest, ValuesAreCorrect) { // Static members of the fixture class template can be visited via // the TestFixture:: prefix. EXPECT_EQ(5, *TestFixture::shared_); // Non-static members of the fixture class must be visited via // 'this', as required by C++ for class templates. EXPECT_EQ(2, this->value_); } // The second test makes sure shared_ is not deleted after the first // test. TYPED_TEST_P(DerivedTest, ValuesAreStillCorrect) { // Static members of the fixture class template can also be visited // via 'this'. 
ASSERT_TRUE(this->shared_ != NULL); EXPECT_EQ(5, *this->shared_); EXPECT_EQ(2, this->value_); } REGISTER_TYPED_TEST_CASE_P(DerivedTest, ValuesAreCorrect, ValuesAreStillCorrect); typedef Types MyTwoTypes; INSTANTIATE_TYPED_TEST_CASE_P(My, DerivedTest, MyTwoTypes); // Tests that multiple TYPED_TEST_CASE_P's can be defined in the same // translation unit. template class TypedTestP1 : public Test { }; TYPED_TEST_CASE_P(TypedTestP1); // For testing that the code between TYPED_TEST_CASE_P() and // TYPED_TEST_P() is not enclosed in a namespace. typedef int IntAfterTypedTestCaseP; TYPED_TEST_P(TypedTestP1, A) {} TYPED_TEST_P(TypedTestP1, B) {} // For testing that the code between TYPED_TEST_P() and // REGISTER_TYPED_TEST_CASE_P() is not enclosed in a namespace. typedef int IntBeforeRegisterTypedTestCaseP; REGISTER_TYPED_TEST_CASE_P(TypedTestP1, A, B); template class TypedTestP2 : public Test { }; TYPED_TEST_CASE_P(TypedTestP2); // This also verifies that tests from different type-parameterized // test cases can share the same name. TYPED_TEST_P(TypedTestP2, A) {} REGISTER_TYPED_TEST_CASE_P(TypedTestP2, A); // Verifies that the code between TYPED_TEST_CASE_P() and // REGISTER_TYPED_TEST_CASE_P() is not enclosed in a namespace. IntAfterTypedTestCaseP after = 0; IntBeforeRegisterTypedTestCaseP before = 0; // Verifies that the last argument of INSTANTIATE_TYPED_TEST_CASE_P() // can be either a single type or a Types<...> type list. INSTANTIATE_TYPED_TEST_CASE_P(Int, TypedTestP1, int); INSTANTIATE_TYPED_TEST_CASE_P(Int, TypedTestP2, Types); // Tests that the same type-parameterized test case can be // instantiated more than once in the same translation unit. INSTANTIATE_TYPED_TEST_CASE_P(Double, TypedTestP2, Types); // Tests that the same type-parameterized test case can be // instantiated in different translation units linked together. // (ContainerTest is also instantiated in gtest-typed-test_test.cc.) typedef Types, std::set > MyContainers; INSTANTIATE_TYPED_TEST_CASE_P(My, ContainerTest, MyContainers); // Tests that a type-parameterized test case can be defined and // instantiated in a namespace. namespace library2 { template class NumericTest : public Test { }; TYPED_TEST_CASE_P(NumericTest); TYPED_TEST_P(NumericTest, DefaultIsZero) { EXPECT_EQ(0, TypeParam()); } TYPED_TEST_P(NumericTest, ZeroIsLessThanOne) { EXPECT_LT(TypeParam(0), TypeParam(1)); } REGISTER_TYPED_TEST_CASE_P(NumericTest, DefaultIsZero, ZeroIsLessThanOne); typedef Types NumericTypes; INSTANTIATE_TYPED_TEST_CASE_P(My, NumericTest, NumericTypes); } // namespace library2 #endif // GTEST_HAS_TYPED_TEST_P #if !defined(GTEST_HAS_TYPED_TEST) && !defined(GTEST_HAS_TYPED_TEST_P) // Google Test may not support type-parameterized tests with some // compilers. If we use conditional compilation to compile out all // code referring to the gtest_main library, MSVC linker will not link // that library at all and consequently complain about missing entry // point defined in that library (fatal error LNK1561: entry point // must be defined). This dummy test keeps gtest_main linked in. TEST(DummyTest, TypedTestsAreNotSupportedOnThisPlatform) {} #endif // #if !defined(GTEST_HAS_TYPED_TEST) && !defined(GTEST_HAS_TYPED_TEST_P) ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-typed-test_test.h000066400000000000000000000046651346026536000237230ustar00rootroot00000000000000// Copyright 2008 Google Inc. // All Rights Reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #ifndef GTEST_TEST_GTEST_TYPED_TEST_TEST_H_ #define GTEST_TEST_GTEST_TYPED_TEST_TEST_H_ #include "gtest/gtest.h" #if GTEST_HAS_TYPED_TEST_P using testing::Test; // For testing that the same type-parameterized test case can be // instantiated in different translation units linked together. // ContainerTest will be instantiated in both gtest-typed-test_test.cc // and gtest-typed-test2_test.cc. template class ContainerTest : public Test { }; TYPED_TEST_CASE_P(ContainerTest); TYPED_TEST_P(ContainerTest, CanBeDefaultConstructed) { TypeParam container; } TYPED_TEST_P(ContainerTest, InitialSizeIsZero) { TypeParam container; EXPECT_EQ(0U, container.size()); } REGISTER_TYPED_TEST_CASE_P(ContainerTest, CanBeDefaultConstructed, InitialSizeIsZero); #endif // GTEST_HAS_TYPED_TEST_P #endif // GTEST_TEST_GTEST_TYPED_TEST_TEST_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest-unittest-api_test.cc000066400000000000000000000316271346026536000244030ustar00rootroot00000000000000// Copyright 2009 Google Inc. All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: vladl@google.com (Vlad Losev) // // The Google C++ Testing Framework (Google Test) // // This file contains tests verifying correctness of data provided via // UnitTest's public methods. #include "gtest/gtest.h" #include // For strcmp. #include using ::testing::InitGoogleTest; namespace testing { namespace internal { template struct LessByName { bool operator()(const T* a, const T* b) { return strcmp(a->name(), b->name()) < 0; } }; class UnitTestHelper { public: // Returns the array of pointers to all test cases sorted by the test case // name. The caller is responsible for deleting the array. static TestCase const** const GetSortedTestCases() { UnitTest& unit_test = *UnitTest::GetInstance(); TestCase const** const test_cases = new const TestCase*[unit_test.total_test_case_count()]; for (int i = 0; i < unit_test.total_test_case_count(); ++i) test_cases[i] = unit_test.GetTestCase(i); std::sort(test_cases, test_cases + unit_test.total_test_case_count(), LessByName()); return test_cases; } // Returns the test case by its name. The caller doesn't own the returned // pointer. static const TestCase* FindTestCase(const char* name) { UnitTest& unit_test = *UnitTest::GetInstance(); for (int i = 0; i < unit_test.total_test_case_count(); ++i) { const TestCase* test_case = unit_test.GetTestCase(i); if (0 == strcmp(test_case->name(), name)) return test_case; } return NULL; } // Returns the array of pointers to all tests in a particular test case // sorted by the test name. The caller is responsible for deleting the // array. static TestInfo const** const GetSortedTests(const TestCase* test_case) { TestInfo const** const tests = new const TestInfo*[test_case->total_test_count()]; for (int i = 0; i < test_case->total_test_count(); ++i) tests[i] = test_case->GetTestInfo(i); std::sort(tests, tests + test_case->total_test_count(), LessByName()); return tests; } }; #if GTEST_HAS_TYPED_TEST template class TestCaseWithCommentTest : public Test {}; TYPED_TEST_CASE(TestCaseWithCommentTest, Types); TYPED_TEST(TestCaseWithCommentTest, Dummy) {} const int kTypedTestCases = 1; const int kTypedTests = 1; #else const int kTypedTestCases = 0; const int kTypedTests = 0; #endif // GTEST_HAS_TYPED_TEST // We can only test the accessors that do not change value while tests run. // Since tests can be run in any order, the values the accessors that track // test execution (such as failed_test_count) can not be predicted. 
TEST(ApiTest, UnitTestImmutableAccessorsWork) { UnitTest* unit_test = UnitTest::GetInstance(); ASSERT_EQ(2 + kTypedTestCases, unit_test->total_test_case_count()); EXPECT_EQ(1 + kTypedTestCases, unit_test->test_case_to_run_count()); EXPECT_EQ(2, unit_test->disabled_test_count()); EXPECT_EQ(5 + kTypedTests, unit_test->total_test_count()); EXPECT_EQ(3 + kTypedTests, unit_test->test_to_run_count()); const TestCase** const test_cases = UnitTestHelper::GetSortedTestCases(); EXPECT_STREQ("ApiTest", test_cases[0]->name()); EXPECT_STREQ("DISABLED_Test", test_cases[1]->name()); #if GTEST_HAS_TYPED_TEST EXPECT_STREQ("TestCaseWithCommentTest/0", test_cases[2]->name()); #endif // GTEST_HAS_TYPED_TEST delete[] test_cases; // The following lines initiate actions to verify certain methods in // FinalSuccessChecker::TearDown. // Records a test property to verify TestResult::GetTestProperty(). RecordProperty("key", "value"); } AssertionResult IsNull(const char* str) { if (str != NULL) { return testing::AssertionFailure() << "argument is " << str; } return AssertionSuccess(); } TEST(ApiTest, TestCaseImmutableAccessorsWork) { const TestCase* test_case = UnitTestHelper::FindTestCase("ApiTest"); ASSERT_TRUE(test_case != NULL); EXPECT_STREQ("ApiTest", test_case->name()); EXPECT_TRUE(IsNull(test_case->type_param())); EXPECT_TRUE(test_case->should_run()); EXPECT_EQ(1, test_case->disabled_test_count()); EXPECT_EQ(3, test_case->test_to_run_count()); ASSERT_EQ(4, test_case->total_test_count()); const TestInfo** tests = UnitTestHelper::GetSortedTests(test_case); EXPECT_STREQ("DISABLED_Dummy1", tests[0]->name()); EXPECT_STREQ("ApiTest", tests[0]->test_case_name()); EXPECT_TRUE(IsNull(tests[0]->value_param())); EXPECT_TRUE(IsNull(tests[0]->type_param())); EXPECT_FALSE(tests[0]->should_run()); EXPECT_STREQ("TestCaseDisabledAccessorsWork", tests[1]->name()); EXPECT_STREQ("ApiTest", tests[1]->test_case_name()); EXPECT_TRUE(IsNull(tests[1]->value_param())); EXPECT_TRUE(IsNull(tests[1]->type_param())); EXPECT_TRUE(tests[1]->should_run()); EXPECT_STREQ("TestCaseImmutableAccessorsWork", tests[2]->name()); EXPECT_STREQ("ApiTest", tests[2]->test_case_name()); EXPECT_TRUE(IsNull(tests[2]->value_param())); EXPECT_TRUE(IsNull(tests[2]->type_param())); EXPECT_TRUE(tests[2]->should_run()); EXPECT_STREQ("UnitTestImmutableAccessorsWork", tests[3]->name()); EXPECT_STREQ("ApiTest", tests[3]->test_case_name()); EXPECT_TRUE(IsNull(tests[3]->value_param())); EXPECT_TRUE(IsNull(tests[3]->type_param())); EXPECT_TRUE(tests[3]->should_run()); delete[] tests; tests = NULL; #if GTEST_HAS_TYPED_TEST test_case = UnitTestHelper::FindTestCase("TestCaseWithCommentTest/0"); ASSERT_TRUE(test_case != NULL); EXPECT_STREQ("TestCaseWithCommentTest/0", test_case->name()); EXPECT_STREQ(GetTypeName().c_str(), test_case->type_param()); EXPECT_TRUE(test_case->should_run()); EXPECT_EQ(0, test_case->disabled_test_count()); EXPECT_EQ(1, test_case->test_to_run_count()); ASSERT_EQ(1, test_case->total_test_count()); tests = UnitTestHelper::GetSortedTests(test_case); EXPECT_STREQ("Dummy", tests[0]->name()); EXPECT_STREQ("TestCaseWithCommentTest/0", tests[0]->test_case_name()); EXPECT_TRUE(IsNull(tests[0]->value_param())); EXPECT_STREQ(GetTypeName().c_str(), tests[0]->type_param()); EXPECT_TRUE(tests[0]->should_run()); delete[] tests; #endif // GTEST_HAS_TYPED_TEST } TEST(ApiTest, TestCaseDisabledAccessorsWork) { const TestCase* test_case = UnitTestHelper::FindTestCase("DISABLED_Test"); ASSERT_TRUE(test_case != NULL); EXPECT_STREQ("DISABLED_Test", test_case->name()); 
EXPECT_TRUE(IsNull(test_case->type_param())); EXPECT_FALSE(test_case->should_run()); EXPECT_EQ(1, test_case->disabled_test_count()); EXPECT_EQ(0, test_case->test_to_run_count()); ASSERT_EQ(1, test_case->total_test_count()); const TestInfo* const test_info = test_case->GetTestInfo(0); EXPECT_STREQ("Dummy2", test_info->name()); EXPECT_STREQ("DISABLED_Test", test_info->test_case_name()); EXPECT_TRUE(IsNull(test_info->value_param())); EXPECT_TRUE(IsNull(test_info->type_param())); EXPECT_FALSE(test_info->should_run()); } // These two tests are here to provide support for testing // test_case_to_run_count, disabled_test_count, and test_to_run_count. TEST(ApiTest, DISABLED_Dummy1) {} TEST(DISABLED_Test, Dummy2) {} class FinalSuccessChecker : public Environment { protected: virtual void TearDown() { UnitTest* unit_test = UnitTest::GetInstance(); EXPECT_EQ(1 + kTypedTestCases, unit_test->successful_test_case_count()); EXPECT_EQ(3 + kTypedTests, unit_test->successful_test_count()); EXPECT_EQ(0, unit_test->failed_test_case_count()); EXPECT_EQ(0, unit_test->failed_test_count()); EXPECT_TRUE(unit_test->Passed()); EXPECT_FALSE(unit_test->Failed()); ASSERT_EQ(2 + kTypedTestCases, unit_test->total_test_case_count()); const TestCase** const test_cases = UnitTestHelper::GetSortedTestCases(); EXPECT_STREQ("ApiTest", test_cases[0]->name()); EXPECT_TRUE(IsNull(test_cases[0]->type_param())); EXPECT_TRUE(test_cases[0]->should_run()); EXPECT_EQ(1, test_cases[0]->disabled_test_count()); ASSERT_EQ(4, test_cases[0]->total_test_count()); EXPECT_EQ(3, test_cases[0]->successful_test_count()); EXPECT_EQ(0, test_cases[0]->failed_test_count()); EXPECT_TRUE(test_cases[0]->Passed()); EXPECT_FALSE(test_cases[0]->Failed()); EXPECT_STREQ("DISABLED_Test", test_cases[1]->name()); EXPECT_TRUE(IsNull(test_cases[1]->type_param())); EXPECT_FALSE(test_cases[1]->should_run()); EXPECT_EQ(1, test_cases[1]->disabled_test_count()); ASSERT_EQ(1, test_cases[1]->total_test_count()); EXPECT_EQ(0, test_cases[1]->successful_test_count()); EXPECT_EQ(0, test_cases[1]->failed_test_count()); #if GTEST_HAS_TYPED_TEST EXPECT_STREQ("TestCaseWithCommentTest/0", test_cases[2]->name()); EXPECT_STREQ(GetTypeName().c_str(), test_cases[2]->type_param()); EXPECT_TRUE(test_cases[2]->should_run()); EXPECT_EQ(0, test_cases[2]->disabled_test_count()); ASSERT_EQ(1, test_cases[2]->total_test_count()); EXPECT_EQ(1, test_cases[2]->successful_test_count()); EXPECT_EQ(0, test_cases[2]->failed_test_count()); EXPECT_TRUE(test_cases[2]->Passed()); EXPECT_FALSE(test_cases[2]->Failed()); #endif // GTEST_HAS_TYPED_TEST const TestCase* test_case = UnitTestHelper::FindTestCase("ApiTest"); const TestInfo** tests = UnitTestHelper::GetSortedTests(test_case); EXPECT_STREQ("DISABLED_Dummy1", tests[0]->name()); EXPECT_STREQ("ApiTest", tests[0]->test_case_name()); EXPECT_FALSE(tests[0]->should_run()); EXPECT_STREQ("TestCaseDisabledAccessorsWork", tests[1]->name()); EXPECT_STREQ("ApiTest", tests[1]->test_case_name()); EXPECT_TRUE(IsNull(tests[1]->value_param())); EXPECT_TRUE(IsNull(tests[1]->type_param())); EXPECT_TRUE(tests[1]->should_run()); EXPECT_TRUE(tests[1]->result()->Passed()); EXPECT_EQ(0, tests[1]->result()->test_property_count()); EXPECT_STREQ("TestCaseImmutableAccessorsWork", tests[2]->name()); EXPECT_STREQ("ApiTest", tests[2]->test_case_name()); EXPECT_TRUE(IsNull(tests[2]->value_param())); EXPECT_TRUE(IsNull(tests[2]->type_param())); EXPECT_TRUE(tests[2]->should_run()); EXPECT_TRUE(tests[2]->result()->Passed()); EXPECT_EQ(0, tests[2]->result()->test_property_count()); 
EXPECT_STREQ("UnitTestImmutableAccessorsWork", tests[3]->name()); EXPECT_STREQ("ApiTest", tests[3]->test_case_name()); EXPECT_TRUE(IsNull(tests[3]->value_param())); EXPECT_TRUE(IsNull(tests[3]->type_param())); EXPECT_TRUE(tests[3]->should_run()); EXPECT_TRUE(tests[3]->result()->Passed()); EXPECT_EQ(1, tests[3]->result()->test_property_count()); const TestProperty& property = tests[3]->result()->GetTestProperty(0); EXPECT_STREQ("key", property.key()); EXPECT_STREQ("value", property.value()); delete[] tests; #if GTEST_HAS_TYPED_TEST test_case = UnitTestHelper::FindTestCase("TestCaseWithCommentTest/0"); tests = UnitTestHelper::GetSortedTests(test_case); EXPECT_STREQ("Dummy", tests[0]->name()); EXPECT_STREQ("TestCaseWithCommentTest/0", tests[0]->test_case_name()); EXPECT_TRUE(IsNull(tests[0]->value_param())); EXPECT_STREQ(GetTypeName().c_str(), tests[0]->type_param()); EXPECT_TRUE(tests[0]->should_run()); EXPECT_TRUE(tests[0]->result()->Passed()); EXPECT_EQ(0, tests[0]->result()->test_property_count()); delete[] tests; #endif // GTEST_HAS_TYPED_TEST delete[] test_cases; } }; } // namespace internal } // namespace testing int main(int argc, char **argv) { InitGoogleTest(&argc, argv); AddGlobalTestEnvironment(new testing::internal::FinalSuccessChecker()); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_all_test.cc000066400000000000000000000043131346026536000225770ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Tests for Google C++ Testing Framework (Google Test) // // Sometimes it's desirable to build most of Google Test's own tests // by compiling a single file. This file serves this purpose. 
#include "test/gtest-filepath_test.cc" #include "test/gtest-linked_ptr_test.cc" #include "test/gtest-message_test.cc" #include "test/gtest-options_test.cc" #include "test/gtest-port_test.cc" #include "test/gtest_pred_impl_unittest.cc" #include "test/gtest_prod_test.cc" #include "test/gtest-test-part_test.cc" #include "test/gtest-typed-test_test.cc" #include "test/gtest-typed-test2_test.cc" #include "test/gtest_unittest.cc" #include "test/production.cc" ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_break_on_failure_unittest.py000077500000000000000000000162531346026536000262720ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test for Google Test's break-on-failure mode. A user can ask Google Test to seg-fault when an assertion fails, using either the GTEST_BREAK_ON_FAILURE environment variable or the --gtest_break_on_failure flag. This script tests such functionality by invoking gtest_break_on_failure_unittest_ (a program written with Google Test) with different environments and command line flags. """ __author__ = 'wan@google.com (Zhanyong Wan)' import gtest_test_utils import os import sys # Constants. IS_WINDOWS = os.name == 'nt' # The environment variable for enabling/disabling the break-on-failure mode. BREAK_ON_FAILURE_ENV_VAR = 'GTEST_BREAK_ON_FAILURE' # The command line flag for enabling/disabling the break-on-failure mode. BREAK_ON_FAILURE_FLAG = 'gtest_break_on_failure' # The environment variable for enabling/disabling the throw-on-failure mode. THROW_ON_FAILURE_ENV_VAR = 'GTEST_THROW_ON_FAILURE' # The environment variable for enabling/disabling the catch-exceptions mode. CATCH_EXCEPTIONS_ENV_VAR = 'GTEST_CATCH_EXCEPTIONS' # Path to the gtest_break_on_failure_unittest_ program. EXE_PATH = gtest_test_utils.GetTestExecutablePath( 'gtest_break_on_failure_unittest_') environ = gtest_test_utils.environ SetEnvVar = gtest_test_utils.SetEnvVar # Tests in this file run a Google-Test-based test program and expect it # to terminate prematurely. 
Therefore they are incompatible with # the premature-exit-file protocol by design. Unset the # premature-exit filepath to prevent Google Test from creating # the file. SetEnvVar(gtest_test_utils.PREMATURE_EXIT_FILE_ENV_VAR, None) def Run(command): """Runs a command; returns 1 if it was killed by a signal, or 0 otherwise.""" p = gtest_test_utils.Subprocess(command, env=environ) if p.terminated_by_signal: return 1 else: return 0 # The tests. class GTestBreakOnFailureUnitTest(gtest_test_utils.TestCase): """Tests using the GTEST_BREAK_ON_FAILURE environment variable or the --gtest_break_on_failure flag to turn assertion failures into segmentation faults. """ def RunAndVerify(self, env_var_value, flag_value, expect_seg_fault): """Runs gtest_break_on_failure_unittest_ and verifies that it does (or does not) have a seg-fault. Args: env_var_value: value of the GTEST_BREAK_ON_FAILURE environment variable; None if the variable should be unset. flag_value: value of the --gtest_break_on_failure flag; None if the flag should not be present. expect_seg_fault: 1 if the program is expected to generate a seg-fault; 0 otherwise. """ SetEnvVar(BREAK_ON_FAILURE_ENV_VAR, env_var_value) if env_var_value is None: env_var_value_msg = ' is not set' else: env_var_value_msg = '=' + env_var_value if flag_value is None: flag = '' elif flag_value == '0': flag = '--%s=0' % BREAK_ON_FAILURE_FLAG else: flag = '--%s' % BREAK_ON_FAILURE_FLAG command = [EXE_PATH] if flag: command.append(flag) if expect_seg_fault: should_or_not = 'should' else: should_or_not = 'should not' has_seg_fault = Run(command) SetEnvVar(BREAK_ON_FAILURE_ENV_VAR, None) msg = ('when %s%s, an assertion failure in "%s" %s cause a seg-fault.' % (BREAK_ON_FAILURE_ENV_VAR, env_var_value_msg, ' '.join(command), should_or_not)) self.assert_(has_seg_fault == expect_seg_fault, msg) def testDefaultBehavior(self): """Tests the behavior of the default mode.""" self.RunAndVerify(env_var_value=None, flag_value=None, expect_seg_fault=0) def testEnvVar(self): """Tests using the GTEST_BREAK_ON_FAILURE environment variable.""" self.RunAndVerify(env_var_value='0', flag_value=None, expect_seg_fault=0) self.RunAndVerify(env_var_value='1', flag_value=None, expect_seg_fault=1) def testFlag(self): """Tests using the --gtest_break_on_failure flag.""" self.RunAndVerify(env_var_value=None, flag_value='0', expect_seg_fault=0) self.RunAndVerify(env_var_value=None, flag_value='1', expect_seg_fault=1) def testFlagOverridesEnvVar(self): """Tests that the flag overrides the environment variable.""" self.RunAndVerify(env_var_value='0', flag_value='0', expect_seg_fault=0) self.RunAndVerify(env_var_value='0', flag_value='1', expect_seg_fault=1) self.RunAndVerify(env_var_value='1', flag_value='0', expect_seg_fault=0) self.RunAndVerify(env_var_value='1', flag_value='1', expect_seg_fault=1) def testBreakOnFailureOverridesThrowOnFailure(self): """Tests that gtest_break_on_failure overrides gtest_throw_on_failure.""" SetEnvVar(THROW_ON_FAILURE_ENV_VAR, '1') try: self.RunAndVerify(env_var_value=None, flag_value='1', expect_seg_fault=1) finally: SetEnvVar(THROW_ON_FAILURE_ENV_VAR, None) if IS_WINDOWS: def testCatchExceptionsDoesNotInterfere(self): """Tests that gtest_catch_exceptions doesn't interfere.""" SetEnvVar(CATCH_EXCEPTIONS_ENV_VAR, '1') try: self.RunAndVerify(env_var_value='1', flag_value='1', expect_seg_fault=1) finally: SetEnvVar(CATCH_EXCEPTIONS_ENV_VAR, None) if __name__ == '__main__': gtest_test_utils.Main() 
ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_break_on_failure_unittest_.cc000066400000000000000000000062771346026536000263700ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Unit test for Google Test's break-on-failure mode. // // A user can ask Google Test to seg-fault when an assertion fails, using // either the GTEST_BREAK_ON_FAILURE environment variable or the // --gtest_break_on_failure flag. This file is used for testing such // functionality. // // This program will be invoked from a Python unit test. It is // expected to fail. Don't run it directly. #include "gtest/gtest.h" #if GTEST_OS_WINDOWS # include # include #endif namespace { // A test that's expected to fail. TEST(Foo, Bar) { EXPECT_EQ(2, 3); } #if GTEST_HAS_SEH && !GTEST_OS_WINDOWS_MOBILE // On Windows Mobile global exception handlers are not supported. LONG WINAPI ExitWithExceptionCode( struct _EXCEPTION_POINTERS* exception_pointers) { exit(exception_pointers->ExceptionRecord->ExceptionCode); } #endif } // namespace int main(int argc, char **argv) { #if GTEST_OS_WINDOWS // Suppresses display of the Windows error dialog upon encountering // a general protection fault (segment violation). SetErrorMode(SEM_NOGPFAULTERRORBOX | SEM_FAILCRITICALERRORS); # if GTEST_HAS_SEH && !GTEST_OS_WINDOWS_MOBILE // The default unhandled exception filter does not always exit // with the exception code as exit code - for example it exits with // 0 for EXCEPTION_ACCESS_VIOLATION and 1 for EXCEPTION_BREAKPOINT // if the application is compiled in debug mode. Thus we use our own // filter which always exits with the exception code for unhandled // exceptions. SetUnhandledExceptionFilter(ExitWithExceptionCode); # endif #endif testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_catch_exceptions_test.py000077500000000000000000000232551346026536000254260ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2010 Google Inc. 
All Rights Reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Tests Google Test's exception catching behavior. This script invokes gtest_catch_exceptions_test_ and gtest_catch_exceptions_ex_test_ (programs written with Google Test) and verifies their output. """ __author__ = 'vladl@google.com (Vlad Losev)' import os import gtest_test_utils # Constants. FLAG_PREFIX = '--gtest_' LIST_TESTS_FLAG = FLAG_PREFIX + 'list_tests' NO_CATCH_EXCEPTIONS_FLAG = FLAG_PREFIX + 'catch_exceptions=0' FILTER_FLAG = FLAG_PREFIX + 'filter' # Path to the gtest_catch_exceptions_ex_test_ binary, compiled with # exceptions enabled. EX_EXE_PATH = gtest_test_utils.GetTestExecutablePath( 'gtest_catch_exceptions_ex_test_') # Path to the gtest_catch_exceptions_test_ binary, compiled with # exceptions disabled. EXE_PATH = gtest_test_utils.GetTestExecutablePath( 'gtest_catch_exceptions_no_ex_test_') environ = gtest_test_utils.environ SetEnvVar = gtest_test_utils.SetEnvVar # Tests in this file run a Google-Test-based test program and expect it # to terminate prematurely. Therefore they are incompatible with # the premature-exit-file protocol by design. Unset the # premature-exit filepath to prevent Google Test from creating # the file. SetEnvVar(gtest_test_utils.PREMATURE_EXIT_FILE_ENV_VAR, None) TEST_LIST = gtest_test_utils.Subprocess( [EXE_PATH, LIST_TESTS_FLAG], env=environ).output SUPPORTS_SEH_EXCEPTIONS = 'ThrowsSehException' in TEST_LIST if SUPPORTS_SEH_EXCEPTIONS: BINARY_OUTPUT = gtest_test_utils.Subprocess([EXE_PATH], env=environ).output EX_BINARY_OUTPUT = gtest_test_utils.Subprocess( [EX_EXE_PATH], env=environ).output # The tests. 
if SUPPORTS_SEH_EXCEPTIONS: # pylint:disable-msg=C6302 class CatchSehExceptionsTest(gtest_test_utils.TestCase): """Tests exception-catching behavior.""" def TestSehExceptions(self, test_output): self.assert_('SEH exception with code 0x2a thrown ' 'in the test fixture\'s constructor' in test_output) self.assert_('SEH exception with code 0x2a thrown ' 'in the test fixture\'s destructor' in test_output) self.assert_('SEH exception with code 0x2a thrown in SetUpTestCase()' in test_output) self.assert_('SEH exception with code 0x2a thrown in TearDownTestCase()' in test_output) self.assert_('SEH exception with code 0x2a thrown in SetUp()' in test_output) self.assert_('SEH exception with code 0x2a thrown in TearDown()' in test_output) self.assert_('SEH exception with code 0x2a thrown in the test body' in test_output) def testCatchesSehExceptionsWithCxxExceptionsEnabled(self): self.TestSehExceptions(EX_BINARY_OUTPUT) def testCatchesSehExceptionsWithCxxExceptionsDisabled(self): self.TestSehExceptions(BINARY_OUTPUT) class CatchCxxExceptionsTest(gtest_test_utils.TestCase): """Tests C++ exception-catching behavior. Tests in this test case verify that: * C++ exceptions are caught and logged as C++ (not SEH) exceptions * Exception thrown affect the remainder of the test work flow in the expected manner. """ def testCatchesCxxExceptionsInFixtureConstructor(self): self.assert_('C++ exception with description ' '"Standard C++ exception" thrown ' 'in the test fixture\'s constructor' in EX_BINARY_OUTPUT) self.assert_('unexpected' not in EX_BINARY_OUTPUT, 'This failure belongs in this test only if ' '"CxxExceptionInConstructorTest" (no quotes) ' 'appears on the same line as words "called unexpectedly"') if ('CxxExceptionInDestructorTest.ThrowsExceptionInDestructor' in EX_BINARY_OUTPUT): def testCatchesCxxExceptionsInFixtureDestructor(self): self.assert_('C++ exception with description ' '"Standard C++ exception" thrown ' 'in the test fixture\'s destructor' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInDestructorTest::TearDownTestCase() ' 'called as expected.' in EX_BINARY_OUTPUT) def testCatchesCxxExceptionsInSetUpTestCase(self): self.assert_('C++ exception with description "Standard C++ exception"' ' thrown in SetUpTestCase()' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInConstructorTest::TearDownTestCase() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTestCaseTest constructor ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTestCaseTest destructor ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTestCaseTest::SetUp() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTestCaseTest::TearDown() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTestCaseTest test body ' 'called as expected.' in EX_BINARY_OUTPUT) def testCatchesCxxExceptionsInTearDownTestCase(self): self.assert_('C++ exception with description "Standard C++ exception"' ' thrown in TearDownTestCase()' in EX_BINARY_OUTPUT) def testCatchesCxxExceptionsInSetUp(self): self.assert_('C++ exception with description "Standard C++ exception"' ' thrown in SetUp()' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTest::TearDownTestCase() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTest destructor ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInSetUpTest::TearDown() ' 'called as expected.' 
in EX_BINARY_OUTPUT) self.assert_('unexpected' not in EX_BINARY_OUTPUT, 'This failure belongs in this test only if ' '"CxxExceptionInSetUpTest" (no quotes) ' 'appears on the same line as words "called unexpectedly"') def testCatchesCxxExceptionsInTearDown(self): self.assert_('C++ exception with description "Standard C++ exception"' ' thrown in TearDown()' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInTearDownTest::TearDownTestCase() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInTearDownTest destructor ' 'called as expected.' in EX_BINARY_OUTPUT) def testCatchesCxxExceptionsInTestBody(self): self.assert_('C++ exception with description "Standard C++ exception"' ' thrown in the test body' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInTestBodyTest::TearDownTestCase() ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInTestBodyTest destructor ' 'called as expected.' in EX_BINARY_OUTPUT) self.assert_('CxxExceptionInTestBodyTest::TearDown() ' 'called as expected.' in EX_BINARY_OUTPUT) def testCatchesNonStdCxxExceptions(self): self.assert_('Unknown C++ exception thrown in the test body' in EX_BINARY_OUTPUT) def testUnhandledCxxExceptionsAbortTheProgram(self): # Filters out SEH exception tests on Windows. Unhandled SEH exceptions # cause tests to show pop-up windows there. FITLER_OUT_SEH_TESTS_FLAG = FILTER_FLAG + '=-*Seh*' # By default, Google Test doesn't catch the exceptions. uncaught_exceptions_ex_binary_output = gtest_test_utils.Subprocess( [EX_EXE_PATH, NO_CATCH_EXCEPTIONS_FLAG, FITLER_OUT_SEH_TESTS_FLAG], env=environ).output self.assert_('Unhandled C++ exception terminating the program' in uncaught_exceptions_ex_binary_output) self.assert_('unexpected' not in uncaught_exceptions_ex_binary_output) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_catch_exceptions_test_.cc000066400000000000000000000213271346026536000255150ustar00rootroot00000000000000// Copyright 2010, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Author: vladl@google.com (Vlad Losev) // // Tests for Google Test itself. Tests in this file throw C++ or SEH // exceptions, and the output is verified by gtest_catch_exceptions_test.py. #include "gtest/gtest.h" #include // NOLINT #include // For exit(). #if GTEST_HAS_SEH # include #endif #if GTEST_HAS_EXCEPTIONS # include // For set_terminate(). # include #endif using testing::Test; #if GTEST_HAS_SEH class SehExceptionInConstructorTest : public Test { public: SehExceptionInConstructorTest() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInConstructorTest, ThrowsExceptionInConstructor) {} class SehExceptionInDestructorTest : public Test { public: ~SehExceptionInDestructorTest() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInDestructorTest, ThrowsExceptionInDestructor) {} class SehExceptionInSetUpTestCaseTest : public Test { public: static void SetUpTestCase() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInSetUpTestCaseTest, ThrowsExceptionInSetUpTestCase) {} class SehExceptionInTearDownTestCaseTest : public Test { public: static void TearDownTestCase() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInTearDownTestCaseTest, ThrowsExceptionInTearDownTestCase) {} class SehExceptionInSetUpTest : public Test { protected: virtual void SetUp() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInSetUpTest, ThrowsExceptionInSetUp) {} class SehExceptionInTearDownTest : public Test { protected: virtual void TearDown() { RaiseException(42, 0, 0, NULL); } }; TEST_F(SehExceptionInTearDownTest, ThrowsExceptionInTearDown) {} TEST(SehExceptionTest, ThrowsSehException) { RaiseException(42, 0, 0, NULL); } #endif // GTEST_HAS_SEH #if GTEST_HAS_EXCEPTIONS class CxxExceptionInConstructorTest : public Test { public: CxxExceptionInConstructorTest() { // Without this macro VC++ complains about unreachable code at the end of // the constructor. GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_( throw std::runtime_error("Standard C++ exception")); } static void TearDownTestCase() { printf("%s", "CxxExceptionInConstructorTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInConstructorTest() { ADD_FAILURE() << "CxxExceptionInConstructorTest destructor " << "called unexpectedly."; } virtual void SetUp() { ADD_FAILURE() << "CxxExceptionInConstructorTest::SetUp() " << "called unexpectedly."; } virtual void TearDown() { ADD_FAILURE() << "CxxExceptionInConstructorTest::TearDown() " << "called unexpectedly."; } }; TEST_F(CxxExceptionInConstructorTest, ThrowsExceptionInConstructor) { ADD_FAILURE() << "CxxExceptionInConstructorTest test body " << "called unexpectedly."; } // Exceptions in destructors are not supported in C++11. 
#if !defined(__GXX_EXPERIMENTAL_CXX0X__) && __cplusplus < 201103L class CxxExceptionInDestructorTest : public Test { public: static void TearDownTestCase() { printf("%s", "CxxExceptionInDestructorTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInDestructorTest() { GTEST_SUPPRESS_UNREACHABLE_CODE_WARNING_BELOW_( throw std::runtime_error("Standard C++ exception")); } }; TEST_F(CxxExceptionInDestructorTest, ThrowsExceptionInDestructor) {} #endif // C++11 mode class CxxExceptionInSetUpTestCaseTest : public Test { public: CxxExceptionInSetUpTestCaseTest() { printf("%s", "CxxExceptionInSetUpTestCaseTest constructor " "called as expected.\n"); } static void SetUpTestCase() { throw std::runtime_error("Standard C++ exception"); } static void TearDownTestCase() { printf("%s", "CxxExceptionInSetUpTestCaseTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInSetUpTestCaseTest() { printf("%s", "CxxExceptionInSetUpTestCaseTest destructor " "called as expected.\n"); } virtual void SetUp() { printf("%s", "CxxExceptionInSetUpTestCaseTest::SetUp() " "called as expected.\n"); } virtual void TearDown() { printf("%s", "CxxExceptionInSetUpTestCaseTest::TearDown() " "called as expected.\n"); } }; TEST_F(CxxExceptionInSetUpTestCaseTest, ThrowsExceptionInSetUpTestCase) { printf("%s", "CxxExceptionInSetUpTestCaseTest test body " "called as expected.\n"); } class CxxExceptionInTearDownTestCaseTest : public Test { public: static void TearDownTestCase() { throw std::runtime_error("Standard C++ exception"); } }; TEST_F(CxxExceptionInTearDownTestCaseTest, ThrowsExceptionInTearDownTestCase) {} class CxxExceptionInSetUpTest : public Test { public: static void TearDownTestCase() { printf("%s", "CxxExceptionInSetUpTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInSetUpTest() { printf("%s", "CxxExceptionInSetUpTest destructor " "called as expected.\n"); } virtual void SetUp() { throw std::runtime_error("Standard C++ exception"); } virtual void TearDown() { printf("%s", "CxxExceptionInSetUpTest::TearDown() " "called as expected.\n"); } }; TEST_F(CxxExceptionInSetUpTest, ThrowsExceptionInSetUp) { ADD_FAILURE() << "CxxExceptionInSetUpTest test body " << "called unexpectedly."; } class CxxExceptionInTearDownTest : public Test { public: static void TearDownTestCase() { printf("%s", "CxxExceptionInTearDownTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInTearDownTest() { printf("%s", "CxxExceptionInTearDownTest destructor " "called as expected.\n"); } virtual void TearDown() { throw std::runtime_error("Standard C++ exception"); } }; TEST_F(CxxExceptionInTearDownTest, ThrowsExceptionInTearDown) {} class CxxExceptionInTestBodyTest : public Test { public: static void TearDownTestCase() { printf("%s", "CxxExceptionInTestBodyTest::TearDownTestCase() " "called as expected.\n"); } protected: ~CxxExceptionInTestBodyTest() { printf("%s", "CxxExceptionInTestBodyTest destructor " "called as expected.\n"); } virtual void TearDown() { printf("%s", "CxxExceptionInTestBodyTest::TearDown() " "called as expected.\n"); } }; TEST_F(CxxExceptionInTestBodyTest, ThrowsStdCxxException) { throw std::runtime_error("Standard C++ exception"); } TEST(CxxExceptionTest, ThrowsNonStdCxxException) { throw "C-string"; } // This terminate handler aborts the program using exit() rather than abort(). // This avoids showing pop-ups on Windows systems and core dumps on Unix-like // ones. 
void TerminateHandler() { fprintf(stderr, "%s\n", "Unhandled C++ exception terminating the program."); fflush(NULL); exit(3); } #endif // GTEST_HAS_EXCEPTIONS int main(int argc, char** argv) { #if GTEST_HAS_EXCEPTIONS std::set_terminate(&TerminateHandler); #endif testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_color_test.py000077500000000000000000000114571346026536000232220ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
"""Verifies that Google Test correctly determines whether to use colors.""" __author__ = 'wan@google.com (Zhanyong Wan)' import os import gtest_test_utils IS_WINDOWS = os.name = 'nt' COLOR_ENV_VAR = 'GTEST_COLOR' COLOR_FLAG = 'gtest_color' COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_color_test_') def SetEnvVar(env_var, value): """Sets the env variable to 'value'; unsets it when 'value' is None.""" if value is not None: os.environ[env_var] = value elif env_var in os.environ: del os.environ[env_var] def UsesColor(term, color_env_var, color_flag): """Runs gtest_color_test_ and returns its exit code.""" SetEnvVar('TERM', term) SetEnvVar(COLOR_ENV_VAR, color_env_var) if color_flag is None: args = [] else: args = ['--%s=%s' % (COLOR_FLAG, color_flag)] p = gtest_test_utils.Subprocess([COMMAND] + args) return not p.exited or p.exit_code class GTestColorTest(gtest_test_utils.TestCase): def testNoEnvVarNoFlag(self): """Tests the case when there's neither GTEST_COLOR nor --gtest_color.""" if not IS_WINDOWS: self.assert_(not UsesColor('dumb', None, None)) self.assert_(not UsesColor('emacs', None, None)) self.assert_(not UsesColor('xterm-mono', None, None)) self.assert_(not UsesColor('unknown', None, None)) self.assert_(not UsesColor(None, None, None)) self.assert_(UsesColor('linux', None, None)) self.assert_(UsesColor('cygwin', None, None)) self.assert_(UsesColor('xterm', None, None)) self.assert_(UsesColor('xterm-color', None, None)) self.assert_(UsesColor('xterm-256color', None, None)) def testFlagOnly(self): """Tests the case when there's --gtest_color but not GTEST_COLOR.""" self.assert_(not UsesColor('dumb', None, 'no')) self.assert_(not UsesColor('xterm-color', None, 'no')) if not IS_WINDOWS: self.assert_(not UsesColor('emacs', None, 'auto')) self.assert_(UsesColor('xterm', None, 'auto')) self.assert_(UsesColor('dumb', None, 'yes')) self.assert_(UsesColor('xterm', None, 'yes')) def testEnvVarOnly(self): """Tests the case when there's GTEST_COLOR but not --gtest_color.""" self.assert_(not UsesColor('dumb', 'no', None)) self.assert_(not UsesColor('xterm-color', 'no', None)) if not IS_WINDOWS: self.assert_(not UsesColor('dumb', 'auto', None)) self.assert_(UsesColor('xterm-color', 'auto', None)) self.assert_(UsesColor('dumb', 'yes', None)) self.assert_(UsesColor('xterm-color', 'yes', None)) def testEnvVarAndFlag(self): """Tests the case when there are both GTEST_COLOR and --gtest_color.""" self.assert_(not UsesColor('xterm-color', 'no', 'no')) self.assert_(UsesColor('dumb', 'no', 'yes')) self.assert_(UsesColor('xterm-color', 'no', 'auto')) def testAliasesOfYesAndNo(self): """Tests using aliases in specifying --gtest_color.""" self.assert_(UsesColor('dumb', None, 'true')) self.assert_(UsesColor('dumb', None, 'YES')) self.assert_(UsesColor('dumb', None, 'T')) self.assert_(UsesColor('dumb', None, '1')) self.assert_(not UsesColor('xterm', None, 'f')) self.assert_(not UsesColor('xterm', None, 'false')) self.assert_(not UsesColor('xterm', None, '0')) self.assert_(not UsesColor('xterm', None, 'unknown')) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_color_test_.cc000066400000000000000000000055101346026536000233040ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // A helper program for testing how Google Test determines whether to use // colors in the output. It prints "YES" and returns 1 if Google Test // decides to use colors, and prints "NO" and returns 0 otherwise. #include #include "gtest/gtest.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ using testing::internal::ShouldUseColor; // The purpose of this is to ensure that the UnitTest singleton is // created before main() is entered, and thus that ShouldUseColor() // works the same way as in a real Google-Test-based test. We don't actual // run the TEST itself. TEST(GTestColorTest, Dummy) { } int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); if (ShouldUseColor(true)) { // Google Test decides to use colors in the output (assuming it // goes to a TTY). printf("YES\n"); return 1; } else { // Google Test decides not to use colors in the output. printf("NO\n"); return 0; } } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_env_var_test.py000077500000000000000000000066371346026536000235500ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. 
nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Verifies that Google Test correctly parses environment variables.""" __author__ = 'wan@google.com (Zhanyong Wan)' import os import gtest_test_utils IS_WINDOWS = os.name == 'nt' IS_LINUX = os.name == 'posix' and os.uname()[0] == 'Linux' COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_env_var_test_') environ = os.environ.copy() def AssertEq(expected, actual): if expected != actual: print 'Expected: %s' % (expected,) print ' Actual: %s' % (actual,) raise AssertionError def SetEnvVar(env_var, value): """Sets the env variable to 'value'; unsets it when 'value' is None.""" if value is not None: environ[env_var] = value elif env_var in environ: del environ[env_var] def GetFlag(flag): """Runs gtest_env_var_test_ and returns its output.""" args = [COMMAND] if flag is not None: args += [flag] return gtest_test_utils.Subprocess(args, env=environ).output def TestFlag(flag, test_val, default_val): """Verifies that the given flag is affected by the corresponding env var.""" env_var = 'GTEST_' + flag.upper() SetEnvVar(env_var, test_val) AssertEq(test_val, GetFlag(flag)) SetEnvVar(env_var, None) AssertEq(default_val, GetFlag(flag)) class GTestEnvVarTest(gtest_test_utils.TestCase): def testEnvVarAffectsFlag(self): """Tests that environment variable should affect the corresponding flag.""" TestFlag('break_on_failure', '1', '0') TestFlag('color', 'yes', 'auto') TestFlag('filter', 'FooTest.Bar', '*') TestFlag('output', 'xml:tmp/foo.xml', '') TestFlag('print_time', '0', '1') TestFlag('repeat', '999', '1') TestFlag('throw_on_failure', '1', '0') TestFlag('death_test_style', 'threadsafe', 'fast') TestFlag('catch_exceptions', '0', '1') if IS_LINUX: TestFlag('death_test_use_fork', '1', '0') TestFlag('stack_trace_depth', '0', '100') if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_env_var_test_.cc000066400000000000000000000067701346026536000236370ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // A helper program for testing that Google Test parses the environment // variables correctly. #include "gtest/gtest.h" #include #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ using ::std::cout; namespace testing { // The purpose of this is to make the test more realistic by ensuring // that the UnitTest singleton is created before main() is entered. // We don't actual run the TEST itself. TEST(GTestEnvVarTest, Dummy) { } void PrintFlag(const char* flag) { if (strcmp(flag, "break_on_failure") == 0) { cout << GTEST_FLAG(break_on_failure); return; } if (strcmp(flag, "catch_exceptions") == 0) { cout << GTEST_FLAG(catch_exceptions); return; } if (strcmp(flag, "color") == 0) { cout << GTEST_FLAG(color); return; } if (strcmp(flag, "death_test_style") == 0) { cout << GTEST_FLAG(death_test_style); return; } if (strcmp(flag, "death_test_use_fork") == 0) { cout << GTEST_FLAG(death_test_use_fork); return; } if (strcmp(flag, "filter") == 0) { cout << GTEST_FLAG(filter); return; } if (strcmp(flag, "output") == 0) { cout << GTEST_FLAG(output); return; } if (strcmp(flag, "print_time") == 0) { cout << GTEST_FLAG(print_time); return; } if (strcmp(flag, "repeat") == 0) { cout << GTEST_FLAG(repeat); return; } if (strcmp(flag, "stack_trace_depth") == 0) { cout << GTEST_FLAG(stack_trace_depth); return; } if (strcmp(flag, "throw_on_failure") == 0) { cout << GTEST_FLAG(throw_on_failure); return; } cout << "Invalid flag name " << flag << ". Valid names are break_on_failure, color, filter, etc.\n"; exit(1); } } // namespace testing int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); if (argc != 2) { cout << "Usage: gtest_env_var_test_ NAME_OF_FLAG\n"; return 1; } testing::PrintFlag(argv[1]); return 0; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_environment_test.cc000066400000000000000000000147161346026536000244030ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. 
nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Tests using global test environments. #include #include #include "gtest/gtest.h" #define GTEST_IMPLEMENTATION_ 1 // Required for the next #include. #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { GTEST_DECLARE_string_(filter); } namespace { enum FailureType { NO_FAILURE, NON_FATAL_FAILURE, FATAL_FAILURE }; // For testing using global test environments. class MyEnvironment : public testing::Environment { public: MyEnvironment() { Reset(); } // Depending on the value of failure_in_set_up_, SetUp() will // generate a non-fatal failure, generate a fatal failure, or // succeed. virtual void SetUp() { set_up_was_run_ = true; switch (failure_in_set_up_) { case NON_FATAL_FAILURE: ADD_FAILURE() << "Expected non-fatal failure in global set-up."; break; case FATAL_FAILURE: FAIL() << "Expected fatal failure in global set-up."; break; default: break; } } // Generates a non-fatal failure. virtual void TearDown() { tear_down_was_run_ = true; ADD_FAILURE() << "Expected non-fatal failure in global tear-down."; } // Resets the state of the environment s.t. it can be reused. void Reset() { failure_in_set_up_ = NO_FAILURE; set_up_was_run_ = false; tear_down_was_run_ = false; } // We call this function to set the type of failure SetUp() should // generate. void set_failure_in_set_up(FailureType type) { failure_in_set_up_ = type; } // Was SetUp() run? bool set_up_was_run() const { return set_up_was_run_; } // Was TearDown() run? bool tear_down_was_run() const { return tear_down_was_run_; } private: FailureType failure_in_set_up_; bool set_up_was_run_; bool tear_down_was_run_; }; // Was the TEST run? bool test_was_run; // The sole purpose of this TEST is to enable us to check whether it // was run. TEST(FooTest, Bar) { test_was_run = true; } // Prints the message and aborts the program if condition is false. void Check(bool condition, const char* msg) { if (!condition) { printf("FAILED: %s\n", msg); testing::internal::posix::Abort(); } } // Runs the tests. Return true iff successful. // // The 'failure' parameter specifies the type of failure that should // be generated by the global set-up. int RunAllTests(MyEnvironment* env, FailureType failure) { env->Reset(); env->set_failure_in_set_up(failure); test_was_run = false; testing::internal::GetUnitTestImpl()->ClearAdHocTestResult(); return RUN_ALL_TESTS(); } } // namespace int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); // Registers a global test environment, and verifies that the // registration function returns its argument. 
MyEnvironment* const env = new MyEnvironment; Check(testing::AddGlobalTestEnvironment(env) == env, "AddGlobalTestEnvironment() should return its argument."); // Verifies that RUN_ALL_TESTS() runs the tests when the global // set-up is successful. Check(RunAllTests(env, NO_FAILURE) != 0, "RUN_ALL_TESTS() should return non-zero, as the global tear-down " "should generate a failure."); Check(test_was_run, "The tests should run, as the global set-up should generate no " "failure"); Check(env->tear_down_was_run(), "The global tear-down should run, as the global set-up was run."); // Verifies that RUN_ALL_TESTS() runs the tests when the global // set-up generates no fatal failure. Check(RunAllTests(env, NON_FATAL_FAILURE) != 0, "RUN_ALL_TESTS() should return non-zero, as both the global set-up " "and the global tear-down should generate a non-fatal failure."); Check(test_was_run, "The tests should run, as the global set-up should generate no " "fatal failure."); Check(env->tear_down_was_run(), "The global tear-down should run, as the global set-up was run."); // Verifies that RUN_ALL_TESTS() runs no test when the global set-up // generates a fatal failure. Check(RunAllTests(env, FATAL_FAILURE) != 0, "RUN_ALL_TESTS() should return non-zero, as the global set-up " "should generate a fatal failure."); Check(!test_was_run, "The tests should not run, as the global set-up should generate " "a fatal failure."); Check(env->tear_down_was_run(), "The global tear-down should run, as the global set-up was run."); // Verifies that RUN_ALL_TESTS() doesn't do global set-up or // tear-down when there is no test to run. testing::GTEST_FLAG(filter) = "-*"; Check(RunAllTests(env, NO_FAILURE) == 0, "RUN_ALL_TESTS() should return zero, as there is no test to run."); Check(!env->set_up_was_run(), "The global set-up should not run, as there is no test to run."); Check(!env->tear_down_was_run(), "The global tear-down should not run, " "as the global set-up was not run."); printf("PASS\n"); return 0; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_filter_unittest.py000077500000000000000000000514151346026536000242670ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2005 Google Inc. All Rights Reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test for Google Test test filters. A user can specify which test(s) in a Google Test program to run via either the GTEST_FILTER environment variable or the --gtest_filter flag. This script tests such functionality by invoking gtest_filter_unittest_ (a program written with Google Test) with different environments and command line flags. Note that test sharding may also influence which tests are filtered. Therefore, we test that here also. """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import re import sets import sys import gtest_test_utils # Constants. # Checks if this platform can pass empty environment variables to child # processes. We set an env variable to an empty string and invoke a python # script in a subprocess to print whether the variable is STILL in # os.environ. We then use 'eval' to parse the child's output so that an # exception is thrown if the input is anything other than 'True' nor 'False'. os.environ['EMPTY_VAR'] = '' child = gtest_test_utils.Subprocess( [sys.executable, '-c', 'import os; print \'EMPTY_VAR\' in os.environ']) CAN_PASS_EMPTY_ENV = eval(child.output) # Check if this platform can unset environment variables in child processes. # We set an env variable to a non-empty string, unset it, and invoke # a python script in a subprocess to print whether the variable # is NO LONGER in os.environ. # We use 'eval' to parse the child's output so that an exception # is thrown if the input is neither 'True' nor 'False'. os.environ['UNSET_VAR'] = 'X' del os.environ['UNSET_VAR'] child = gtest_test_utils.Subprocess( [sys.executable, '-c', 'import os; print \'UNSET_VAR\' not in os.environ']) CAN_UNSET_ENV = eval(child.output) # Checks if we should test with an empty filter. This doesn't # make sense on platforms that cannot pass empty env variables (Win32) # and on platforms that cannot unset variables (since we cannot tell # the difference between "" and NULL -- Borland and Solaris < 5.10) CAN_TEST_EMPTY_FILTER = (CAN_PASS_EMPTY_ENV and CAN_UNSET_ENV) # The environment variable for specifying the test filters. FILTER_ENV_VAR = 'GTEST_FILTER' # The environment variables for test sharding. TOTAL_SHARDS_ENV_VAR = 'GTEST_TOTAL_SHARDS' SHARD_INDEX_ENV_VAR = 'GTEST_SHARD_INDEX' SHARD_STATUS_FILE_ENV_VAR = 'GTEST_SHARD_STATUS_FILE' # The command line flag for specifying the test filters. FILTER_FLAG = 'gtest_filter' # The command line flag for including disabled tests. ALSO_RUN_DISABED_TESTS_FLAG = 'gtest_also_run_disabled_tests' # Command to run the gtest_filter_unittest_ program. COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_filter_unittest_') # Regex for determining whether parameterized tests are enabled in the binary. PARAM_TEST_REGEX = re.compile(r'/ParamTest') # Regex for parsing test case names from Google Test's output. TEST_CASE_REGEX = re.compile(r'^\[\-+\] \d+ tests? from (\w+(/\w+)?)') # Regex for parsing test names from Google Test's output. 
TEST_REGEX = re.compile(r'^\[\s*RUN\s*\].*\.(\w+(/\w+)?)') # The command line flag to tell Google Test to output the list of tests it # will run. LIST_TESTS_FLAG = '--gtest_list_tests' # Indicates whether Google Test supports death tests. SUPPORTS_DEATH_TESTS = 'HasDeathTest' in gtest_test_utils.Subprocess( [COMMAND, LIST_TESTS_FLAG]).output # Full names of all tests in gtest_filter_unittests_. PARAM_TESTS = [ 'SeqP/ParamTest.TestX/0', 'SeqP/ParamTest.TestX/1', 'SeqP/ParamTest.TestY/0', 'SeqP/ParamTest.TestY/1', 'SeqQ/ParamTest.TestX/0', 'SeqQ/ParamTest.TestX/1', 'SeqQ/ParamTest.TestY/0', 'SeqQ/ParamTest.TestY/1', ] DISABLED_TESTS = [ 'BarTest.DISABLED_TestFour', 'BarTest.DISABLED_TestFive', 'BazTest.DISABLED_TestC', 'DISABLED_FoobarTest.Test1', 'DISABLED_FoobarTest.DISABLED_Test2', 'DISABLED_FoobarbazTest.TestA', ] if SUPPORTS_DEATH_TESTS: DEATH_TESTS = [ 'HasDeathTest.Test1', 'HasDeathTest.Test2', ] else: DEATH_TESTS = [] # All the non-disabled tests. ACTIVE_TESTS = [ 'FooTest.Abc', 'FooTest.Xyz', 'BarTest.TestOne', 'BarTest.TestTwo', 'BarTest.TestThree', 'BazTest.TestOne', 'BazTest.TestA', 'BazTest.TestB', ] + DEATH_TESTS + PARAM_TESTS param_tests_present = None # Utilities. environ = os.environ.copy() def SetEnvVar(env_var, value): """Sets the env variable to 'value'; unsets it when 'value' is None.""" if value is not None: environ[env_var] = value elif env_var in environ: del environ[env_var] def RunAndReturnOutput(args = None): """Runs the test program and returns its output.""" return gtest_test_utils.Subprocess([COMMAND] + (args or []), env=environ).output def RunAndExtractTestList(args = None): """Runs the test program and returns its exit code and a list of tests run.""" p = gtest_test_utils.Subprocess([COMMAND] + (args or []), env=environ) tests_run = [] test_case = '' test = '' for line in p.output.split('\n'): match = TEST_CASE_REGEX.match(line) if match is not None: test_case = match.group(1) else: match = TEST_REGEX.match(line) if match is not None: test = match.group(1) tests_run.append(test_case + '.' + test) return (tests_run, p.exit_code) def InvokeWithModifiedEnv(extra_env, function, *args, **kwargs): """Runs the given function and arguments in a modified environment.""" try: original_env = environ.copy() environ.update(extra_env) return function(*args, **kwargs) finally: environ.clear() environ.update(original_env) def RunWithSharding(total_shards, shard_index, command): """Runs a test program shard and returns exit code and a list of tests run.""" extra_env = {SHARD_INDEX_ENV_VAR: str(shard_index), TOTAL_SHARDS_ENV_VAR: str(total_shards)} return InvokeWithModifiedEnv(extra_env, RunAndExtractTestList, command) # The unit test. class GTestFilterUnitTest(gtest_test_utils.TestCase): """Tests the env variable or the command line flag to filter tests.""" # Utilities. 
def AssertSetEqual(self, lhs, rhs): """Asserts that two sets are equal.""" for elem in lhs: self.assert_(elem in rhs, '%s in %s' % (elem, rhs)) for elem in rhs: self.assert_(elem in lhs, '%s in %s' % (elem, lhs)) def AssertPartitionIsValid(self, set_var, list_of_sets): """Asserts that list_of_sets is a valid partition of set_var.""" full_partition = [] for slice_var in list_of_sets: full_partition.extend(slice_var) self.assertEqual(len(set_var), len(full_partition)) self.assertEqual(sets.Set(set_var), sets.Set(full_partition)) def AdjustForParameterizedTests(self, tests_to_run): """Adjust tests_to_run in case value parameterized tests are disabled.""" global param_tests_present if not param_tests_present: return list(sets.Set(tests_to_run) - sets.Set(PARAM_TESTS)) else: return tests_to_run def RunAndVerify(self, gtest_filter, tests_to_run): """Checks that the binary runs correct set of tests for a given filter.""" tests_to_run = self.AdjustForParameterizedTests(tests_to_run) # First, tests using the environment variable. # Windows removes empty variables from the environment when passing it # to a new process. This means it is impossible to pass an empty filter # into a process using the environment variable. However, we can still # test the case when the variable is not supplied (i.e., gtest_filter is # None). # pylint: disable-msg=C6403 if CAN_TEST_EMPTY_FILTER or gtest_filter != '': SetEnvVar(FILTER_ENV_VAR, gtest_filter) tests_run = RunAndExtractTestList()[0] SetEnvVar(FILTER_ENV_VAR, None) self.AssertSetEqual(tests_run, tests_to_run) # pylint: enable-msg=C6403 # Next, tests using the command line flag. if gtest_filter is None: args = [] else: args = ['--%s=%s' % (FILTER_FLAG, gtest_filter)] tests_run = RunAndExtractTestList(args)[0] self.AssertSetEqual(tests_run, tests_to_run) def RunAndVerifyWithSharding(self, gtest_filter, total_shards, tests_to_run, args=None, check_exit_0=False): """Checks that binary runs correct tests for the given filter and shard. Runs all shards of gtest_filter_unittest_ with the given filter, and verifies that the right set of tests were run. The union of tests run on each shard should be identical to tests_to_run, without duplicates. Args: gtest_filter: A filter to apply to the tests. total_shards: A total number of shards to split test run into. tests_to_run: A set of tests expected to run. args : Arguments to pass to the to the test binary. check_exit_0: When set to a true value, make sure that all shards return 0. """ tests_to_run = self.AdjustForParameterizedTests(tests_to_run) # Windows removes empty variables from the environment when passing it # to a new process. This means it is impossible to pass an empty filter # into a process using the environment variable. However, we can still # test the case when the variable is not supplied (i.e., gtest_filter is # None). # pylint: disable-msg=C6403 if CAN_TEST_EMPTY_FILTER or gtest_filter != '': SetEnvVar(FILTER_ENV_VAR, gtest_filter) partition = [] for i in range(0, total_shards): (tests_run, exit_code) = RunWithSharding(total_shards, i, args) if check_exit_0: self.assertEqual(0, exit_code) partition.append(tests_run) self.AssertPartitionIsValid(tests_to_run, partition) SetEnvVar(FILTER_ENV_VAR, None) # pylint: enable-msg=C6403 def RunAndVerifyAllowingDisabled(self, gtest_filter, tests_to_run): """Checks that the binary runs correct set of tests for the given filter. Runs gtest_filter_unittest_ with the given filter, and enables disabled tests. Verifies that the right set of tests were run. 
Args: gtest_filter: A filter to apply to the tests. tests_to_run: A set of tests expected to run. """ tests_to_run = self.AdjustForParameterizedTests(tests_to_run) # Construct the command line. args = ['--%s' % ALSO_RUN_DISABED_TESTS_FLAG] if gtest_filter is not None: args.append('--%s=%s' % (FILTER_FLAG, gtest_filter)) tests_run = RunAndExtractTestList(args)[0] self.AssertSetEqual(tests_run, tests_to_run) def setUp(self): """Sets up test case. Determines whether value-parameterized tests are enabled in the binary and sets the flags accordingly. """ global param_tests_present if param_tests_present is None: param_tests_present = PARAM_TEST_REGEX.search( RunAndReturnOutput()) is not None def testDefaultBehavior(self): """Tests the behavior of not specifying the filter.""" self.RunAndVerify(None, ACTIVE_TESTS) def testDefaultBehaviorWithShards(self): """Tests the behavior without the filter, with sharding enabled.""" self.RunAndVerifyWithSharding(None, 1, ACTIVE_TESTS) self.RunAndVerifyWithSharding(None, 2, ACTIVE_TESTS) self.RunAndVerifyWithSharding(None, len(ACTIVE_TESTS) - 1, ACTIVE_TESTS) self.RunAndVerifyWithSharding(None, len(ACTIVE_TESTS), ACTIVE_TESTS) self.RunAndVerifyWithSharding(None, len(ACTIVE_TESTS) + 1, ACTIVE_TESTS) def testEmptyFilter(self): """Tests an empty filter.""" self.RunAndVerify('', []) self.RunAndVerifyWithSharding('', 1, []) self.RunAndVerifyWithSharding('', 2, []) def testBadFilter(self): """Tests a filter that matches nothing.""" self.RunAndVerify('BadFilter', []) self.RunAndVerifyAllowingDisabled('BadFilter', []) def testFullName(self): """Tests filtering by full name.""" self.RunAndVerify('FooTest.Xyz', ['FooTest.Xyz']) self.RunAndVerifyAllowingDisabled('FooTest.Xyz', ['FooTest.Xyz']) self.RunAndVerifyWithSharding('FooTest.Xyz', 5, ['FooTest.Xyz']) def testUniversalFilters(self): """Tests filters that match everything.""" self.RunAndVerify('*', ACTIVE_TESTS) self.RunAndVerify('*.*', ACTIVE_TESTS) self.RunAndVerifyWithSharding('*.*', len(ACTIVE_TESTS) - 3, ACTIVE_TESTS) self.RunAndVerifyAllowingDisabled('*', ACTIVE_TESTS + DISABLED_TESTS) self.RunAndVerifyAllowingDisabled('*.*', ACTIVE_TESTS + DISABLED_TESTS) def testFilterByTestCase(self): """Tests filtering by test case name.""" self.RunAndVerify('FooTest.*', ['FooTest.Abc', 'FooTest.Xyz']) BAZ_TESTS = ['BazTest.TestOne', 'BazTest.TestA', 'BazTest.TestB'] self.RunAndVerify('BazTest.*', BAZ_TESTS) self.RunAndVerifyAllowingDisabled('BazTest.*', BAZ_TESTS + ['BazTest.DISABLED_TestC']) def testFilterByTest(self): """Tests filtering by test name.""" self.RunAndVerify('*.TestOne', ['BarTest.TestOne', 'BazTest.TestOne']) def testFilterDisabledTests(self): """Select only the disabled tests to run.""" self.RunAndVerify('DISABLED_FoobarTest.Test1', []) self.RunAndVerifyAllowingDisabled('DISABLED_FoobarTest.Test1', ['DISABLED_FoobarTest.Test1']) self.RunAndVerify('*DISABLED_*', []) self.RunAndVerifyAllowingDisabled('*DISABLED_*', DISABLED_TESTS) self.RunAndVerify('*.DISABLED_*', []) self.RunAndVerifyAllowingDisabled('*.DISABLED_*', [ 'BarTest.DISABLED_TestFour', 'BarTest.DISABLED_TestFive', 'BazTest.DISABLED_TestC', 'DISABLED_FoobarTest.DISABLED_Test2', ]) self.RunAndVerify('DISABLED_*', []) self.RunAndVerifyAllowingDisabled('DISABLED_*', [ 'DISABLED_FoobarTest.Test1', 'DISABLED_FoobarTest.DISABLED_Test2', 'DISABLED_FoobarbazTest.TestA', ]) def testWildcardInTestCaseName(self): """Tests using wildcard in the test case name.""" self.RunAndVerify('*a*.*', [ 'BarTest.TestOne', 'BarTest.TestTwo', 'BarTest.TestThree', 
'BazTest.TestOne', 'BazTest.TestA', 'BazTest.TestB', ] + DEATH_TESTS + PARAM_TESTS) def testWildcardInTestName(self): """Tests using wildcard in the test name.""" self.RunAndVerify('*.*A*', ['FooTest.Abc', 'BazTest.TestA']) def testFilterWithoutDot(self): """Tests a filter that has no '.' in it.""" self.RunAndVerify('*z*', [ 'FooTest.Xyz', 'BazTest.TestOne', 'BazTest.TestA', 'BazTest.TestB', ]) def testTwoPatterns(self): """Tests filters that consist of two patterns.""" self.RunAndVerify('Foo*.*:*A*', [ 'FooTest.Abc', 'FooTest.Xyz', 'BazTest.TestA', ]) # An empty pattern + a non-empty one self.RunAndVerify(':*A*', ['FooTest.Abc', 'BazTest.TestA']) def testThreePatterns(self): """Tests filters that consist of three patterns.""" self.RunAndVerify('*oo*:*A*:*One', [ 'FooTest.Abc', 'FooTest.Xyz', 'BarTest.TestOne', 'BazTest.TestOne', 'BazTest.TestA', ]) # The 2nd pattern is empty. self.RunAndVerify('*oo*::*One', [ 'FooTest.Abc', 'FooTest.Xyz', 'BarTest.TestOne', 'BazTest.TestOne', ]) # The last 2 patterns are empty. self.RunAndVerify('*oo*::', [ 'FooTest.Abc', 'FooTest.Xyz', ]) def testNegativeFilters(self): self.RunAndVerify('*-BazTest.TestOne', [ 'FooTest.Abc', 'FooTest.Xyz', 'BarTest.TestOne', 'BarTest.TestTwo', 'BarTest.TestThree', 'BazTest.TestA', 'BazTest.TestB', ] + DEATH_TESTS + PARAM_TESTS) self.RunAndVerify('*-FooTest.Abc:BazTest.*', [ 'FooTest.Xyz', 'BarTest.TestOne', 'BarTest.TestTwo', 'BarTest.TestThree', ] + DEATH_TESTS + PARAM_TESTS) self.RunAndVerify('BarTest.*-BarTest.TestOne', [ 'BarTest.TestTwo', 'BarTest.TestThree', ]) # Tests without leading '*'. self.RunAndVerify('-FooTest.Abc:FooTest.Xyz:BazTest.*', [ 'BarTest.TestOne', 'BarTest.TestTwo', 'BarTest.TestThree', ] + DEATH_TESTS + PARAM_TESTS) # Value parameterized tests. self.RunAndVerify('*/*', PARAM_TESTS) # Value parameterized tests filtering by the sequence name. self.RunAndVerify('SeqP/*', [ 'SeqP/ParamTest.TestX/0', 'SeqP/ParamTest.TestX/1', 'SeqP/ParamTest.TestY/0', 'SeqP/ParamTest.TestY/1', ]) # Value parameterized tests filtering by the test name. self.RunAndVerify('*/0', [ 'SeqP/ParamTest.TestX/0', 'SeqP/ParamTest.TestY/0', 'SeqQ/ParamTest.TestX/0', 'SeqQ/ParamTest.TestY/0', ]) def testFlagOverridesEnvVar(self): """Tests that the filter flag overrides the filtering env. variable.""" SetEnvVar(FILTER_ENV_VAR, 'Foo*') args = ['--%s=%s' % (FILTER_FLAG, '*One')] tests_run = RunAndExtractTestList(args)[0] SetEnvVar(FILTER_ENV_VAR, None) self.AssertSetEqual(tests_run, ['BarTest.TestOne', 'BazTest.TestOne']) def testShardStatusFileIsCreated(self): """Tests that the shard file is created if specified in the environment.""" shard_status_file = os.path.join(gtest_test_utils.GetTempDir(), 'shard_status_file') self.assert_(not os.path.exists(shard_status_file)) extra_env = {SHARD_STATUS_FILE_ENV_VAR: shard_status_file} try: InvokeWithModifiedEnv(extra_env, RunAndReturnOutput) finally: self.assert_(os.path.exists(shard_status_file)) os.remove(shard_status_file) def testShardStatusFileIsCreatedWithListTests(self): """Tests that the shard file is created with the "list_tests" flag.""" shard_status_file = os.path.join(gtest_test_utils.GetTempDir(), 'shard_status_file2') self.assert_(not os.path.exists(shard_status_file)) extra_env = {SHARD_STATUS_FILE_ENV_VAR: shard_status_file} try: output = InvokeWithModifiedEnv(extra_env, RunAndReturnOutput, [LIST_TESTS_FLAG]) finally: # This assertion ensures that Google Test enumerated the tests as # opposed to running them. 
self.assert_('[==========]' not in output, 'Unexpected output during test enumeration.\n' 'Please ensure that LIST_TESTS_FLAG is assigned the\n' 'correct flag value for listing Google Test tests.') self.assert_(os.path.exists(shard_status_file)) os.remove(shard_status_file) if SUPPORTS_DEATH_TESTS: def testShardingWorksWithDeathTests(self): """Tests integration with death tests and sharding.""" gtest_filter = 'HasDeathTest.*:SeqP/*' expected_tests = [ 'HasDeathTest.Test1', 'HasDeathTest.Test2', 'SeqP/ParamTest.TestX/0', 'SeqP/ParamTest.TestX/1', 'SeqP/ParamTest.TestY/0', 'SeqP/ParamTest.TestY/1', ] for flag in ['--gtest_death_test_style=threadsafe', '--gtest_death_test_style=fast']: self.RunAndVerifyWithSharding(gtest_filter, 3, expected_tests, check_exit_0=True, args=[flag]) self.RunAndVerifyWithSharding(gtest_filter, 5, expected_tests, check_exit_0=True, args=[flag]) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_filter_unittest_.cc000066400000000000000000000067541346026536000243660ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Unit test for Google Test test filters. // // A user can specify which test(s) in a Google Test program to run via // either the GTEST_FILTER environment variable or the --gtest_filter // flag. This is used for testing such functionality. // // The program will be invoked from a Python unit test. Don't run it // directly. #include "gtest/gtest.h" namespace { // Test case FooTest. class FooTest : public testing::Test { }; TEST_F(FooTest, Abc) { } TEST_F(FooTest, Xyz) { FAIL() << "Expected failure."; } // Test case BarTest. TEST(BarTest, TestOne) { } TEST(BarTest, TestTwo) { } TEST(BarTest, TestThree) { } TEST(BarTest, DISABLED_TestFour) { FAIL() << "Expected failure."; } TEST(BarTest, DISABLED_TestFive) { FAIL() << "Expected failure."; } // Test case BazTest. 
TEST(BazTest, TestOne) {
  FAIL() << "Expected failure.";
}

TEST(BazTest, TestA) { }

TEST(BazTest, TestB) { }

TEST(BazTest, DISABLED_TestC) {
  FAIL() << "Expected failure.";
}

// Test case HasDeathTest

TEST(HasDeathTest, Test1) {
  EXPECT_DEATH_IF_SUPPORTED(exit(1), ".*");
}

// We need at least two death tests to make sure that all the death tests
// aren't on the first shard.
TEST(HasDeathTest, Test2) {
  EXPECT_DEATH_IF_SUPPORTED(exit(1), ".*");
}

// Test case FoobarTest

TEST(DISABLED_FoobarTest, Test1) {
  FAIL() << "Expected failure.";
}

TEST(DISABLED_FoobarTest, DISABLED_Test2) {
  FAIL() << "Expected failure.";
}

// Test case FoobarbazTest

TEST(DISABLED_FoobarbazTest, TestA) {
  FAIL() << "Expected failure.";
}

#if GTEST_HAS_PARAM_TEST
class ParamTest : public testing::TestWithParam<int> {
};

TEST_P(ParamTest, TestX) { }

TEST_P(ParamTest, TestY) { }

INSTANTIATE_TEST_CASE_P(SeqP, ParamTest, testing::Values(1, 2));
INSTANTIATE_TEST_CASE_P(SeqQ, ParamTest, testing::Values(5, 6));
#endif  // GTEST_HAS_PARAM_TEST

}  // namespace

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);

  return RUN_ALL_TESTS();
}
ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_help_test.py000077500000000000000000000133401346026536000230250ustar00rootroot00000000000000
#!/usr/bin/env python
#
# Copyright 2009, Google Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

"""Tests the --help flag of Google C++ Testing Framework.

SYNOPSIS
       gtest_help_test.py --build_dir=BUILD/DIR
         # where BUILD/DIR contains the built gtest_help_test_ file.
gtest_help_test.py """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import re import gtest_test_utils IS_LINUX = os.name == 'posix' and os.uname()[0] == 'Linux' IS_WINDOWS = os.name == 'nt' PROGRAM_PATH = gtest_test_utils.GetTestExecutablePath('gtest_help_test_') FLAG_PREFIX = '--gtest_' DEATH_TEST_STYLE_FLAG = FLAG_PREFIX + 'death_test_style' STREAM_RESULT_TO_FLAG = FLAG_PREFIX + 'stream_result_to' UNKNOWN_FLAG = FLAG_PREFIX + 'unknown_flag_for_testing' LIST_TESTS_FLAG = FLAG_PREFIX + 'list_tests' INCORRECT_FLAG_VARIANTS = [re.sub('^--', '-', LIST_TESTS_FLAG), re.sub('^--', '/', LIST_TESTS_FLAG), re.sub('_', '-', LIST_TESTS_FLAG)] INTERNAL_FLAG_FOR_TESTING = FLAG_PREFIX + 'internal_flag_for_testing' SUPPORTS_DEATH_TESTS = "DeathTest" in gtest_test_utils.Subprocess( [PROGRAM_PATH, LIST_TESTS_FLAG]).output # The help message must match this regex. HELP_REGEX = re.compile( FLAG_PREFIX + r'list_tests.*' + FLAG_PREFIX + r'filter=.*' + FLAG_PREFIX + r'also_run_disabled_tests.*' + FLAG_PREFIX + r'repeat=.*' + FLAG_PREFIX + r'shuffle.*' + FLAG_PREFIX + r'random_seed=.*' + FLAG_PREFIX + r'color=.*' + FLAG_PREFIX + r'print_time.*' + FLAG_PREFIX + r'output=.*' + FLAG_PREFIX + r'break_on_failure.*' + FLAG_PREFIX + r'throw_on_failure.*' + FLAG_PREFIX + r'catch_exceptions=0.*', re.DOTALL) def RunWithFlag(flag): """Runs gtest_help_test_ with the given flag. Returns: the exit code and the text output as a tuple. Args: flag: the command-line flag to pass to gtest_help_test_, or None. """ if flag is None: command = [PROGRAM_PATH] else: command = [PROGRAM_PATH, flag] child = gtest_test_utils.Subprocess(command) return child.exit_code, child.output class GTestHelpTest(gtest_test_utils.TestCase): """Tests the --help flag and its equivalent forms.""" def TestHelpFlag(self, flag): """Verifies correct behavior when help flag is specified. The right message must be printed and the tests must skipped when the given flag is specified. Args: flag: A flag to pass to the binary or None. """ exit_code, output = RunWithFlag(flag) self.assertEquals(0, exit_code) self.assert_(HELP_REGEX.search(output), output) if IS_LINUX: self.assert_(STREAM_RESULT_TO_FLAG in output, output) else: self.assert_(STREAM_RESULT_TO_FLAG not in output, output) if SUPPORTS_DEATH_TESTS and not IS_WINDOWS: self.assert_(DEATH_TEST_STYLE_FLAG in output, output) else: self.assert_(DEATH_TEST_STYLE_FLAG not in output, output) def TestNonHelpFlag(self, flag): """Verifies correct behavior when no help flag is specified. Verifies that when no help flag is specified, the tests are run and the help message is not printed. Args: flag: A flag to pass to the binary or None. 
""" exit_code, output = RunWithFlag(flag) self.assert_(exit_code != 0) self.assert_(not HELP_REGEX.search(output), output) def testPrintsHelpWithFullFlag(self): self.TestHelpFlag('--help') def testPrintsHelpWithShortFlag(self): self.TestHelpFlag('-h') def testPrintsHelpWithQuestionFlag(self): self.TestHelpFlag('-?') def testPrintsHelpWithWindowsStyleQuestionFlag(self): self.TestHelpFlag('/?') def testPrintsHelpWithUnrecognizedGoogleTestFlag(self): self.TestHelpFlag(UNKNOWN_FLAG) def testPrintsHelpWithIncorrectFlagStyle(self): for incorrect_flag in INCORRECT_FLAG_VARIANTS: self.TestHelpFlag(incorrect_flag) def testRunsTestsWithoutHelpFlag(self): """Verifies that when no help flag is specified, the tests are run and the help message is not printed.""" self.TestNonHelpFlag(None) def testRunsTestsWithGtestInternalFlag(self): """Verifies that the tests are run and no help message is printed when a flag starting with Google Test prefix and 'internal_' is supplied.""" self.TestNonHelpFlag(INTERNAL_FLAG_FOR_TESTING) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_help_test_.cc000066400000000000000000000041231346026536000231150ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // This program is meant to be run by gtest_help_test.py. Do not run // it directly. #include "gtest/gtest.h" // When a help flag is specified, this program should skip the tests // and exit with 0; otherwise the following test will be executed, // causing this program to exit with a non-zero code. TEST(HelpFlagTest, ShouldNotBeRun) { ASSERT_TRUE(false) << "Tests shouldn't be run when --help is specified."; } #if GTEST_HAS_DEATH_TEST TEST(DeathTest, UsedByPythonScriptToDetectSupportForDeathTestsInThisBinary) {} #endif ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_list_tests_unittest.py000077500000000000000000000145631346026536000252020ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. 
# # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test for Google Test's --gtest_list_tests flag. A user can ask Google Test to list all tests by specifying the --gtest_list_tests flag. This script tests such functionality by invoking gtest_list_tests_unittest_ (a program written with Google Test) the command line flags. """ __author__ = 'phanna@google.com (Patrick Hanna)' import gtest_test_utils import re # Constants. # The command line flag for enabling/disabling listing all tests. LIST_TESTS_FLAG = 'gtest_list_tests' # Path to the gtest_list_tests_unittest_ program. EXE_PATH = gtest_test_utils.GetTestExecutablePath('gtest_list_tests_unittest_') # The expected output when running gtest_list_tests_unittest_ with # --gtest_list_tests EXPECTED_OUTPUT_NO_FILTER_RE = re.compile(r"""FooDeathTest\. Test1 Foo\. Bar1 Bar2 DISABLED_Bar3 Abc\. Xyz Def FooBar\. Baz FooTest\. Test1 DISABLED_Test2 Test3 TypedTest/0\. # TypeParam = (VeryLo{245}|class VeryLo{239})\.\.\. TestA TestB TypedTest/1\. # TypeParam = int\s*\* TestA TestB TypedTest/2\. # TypeParam = .*MyArray TestA TestB My/TypeParamTest/0\. # TypeParam = (VeryLo{245}|class VeryLo{239})\.\.\. TestA TestB My/TypeParamTest/1\. # TypeParam = int\s*\* TestA TestB My/TypeParamTest/2\. # TypeParam = .*MyArray TestA TestB MyInstantiation/ValueParamTest\. TestA/0 # GetParam\(\) = one line TestA/1 # GetParam\(\) = two\\nlines TestA/2 # GetParam\(\) = a very\\nlo{241}\.\.\. TestB/0 # GetParam\(\) = one line TestB/1 # GetParam\(\) = two\\nlines TestB/2 # GetParam\(\) = a very\\nlo{241}\.\.\. """) # The expected output when running gtest_list_tests_unittest_ with # --gtest_list_tests and --gtest_filter=Foo*. EXPECTED_OUTPUT_FILTER_FOO_RE = re.compile(r"""FooDeathTest\. Test1 Foo\. Bar1 Bar2 DISABLED_Bar3 FooBar\. Baz FooTest\. Test1 DISABLED_Test2 Test3 """) # Utilities. def Run(args): """Runs gtest_list_tests_unittest_ and returns the list of tests printed.""" return gtest_test_utils.Subprocess([EXE_PATH] + args, capture_stderr=False).output # The unit test. 
class GTestListTestsUnitTest(gtest_test_utils.TestCase): """Tests using the --gtest_list_tests flag to list all tests.""" def RunAndVerify(self, flag_value, expected_output_re, other_flag): """Runs gtest_list_tests_unittest_ and verifies that it prints the correct tests. Args: flag_value: value of the --gtest_list_tests flag; None if the flag should not be present. expected_output_re: regular expression that matches the expected output after running command; other_flag: a different flag to be passed to command along with gtest_list_tests; None if the flag should not be present. """ if flag_value is None: flag = '' flag_expression = 'not set' elif flag_value == '0': flag = '--%s=0' % LIST_TESTS_FLAG flag_expression = '0' else: flag = '--%s' % LIST_TESTS_FLAG flag_expression = '1' args = [flag] if other_flag is not None: args += [other_flag] output = Run(args) if expected_output_re: self.assert_( expected_output_re.match(output), ('when %s is %s, the output of "%s" is "%s",\n' 'which does not match regex "%s"' % (LIST_TESTS_FLAG, flag_expression, ' '.join(args), output, expected_output_re.pattern))) else: self.assert_( not EXPECTED_OUTPUT_NO_FILTER_RE.match(output), ('when %s is %s, the output of "%s" is "%s"'% (LIST_TESTS_FLAG, flag_expression, ' '.join(args), output))) def testDefaultBehavior(self): """Tests the behavior of the default mode.""" self.RunAndVerify(flag_value=None, expected_output_re=None, other_flag=None) def testFlag(self): """Tests using the --gtest_list_tests flag.""" self.RunAndVerify(flag_value='0', expected_output_re=None, other_flag=None) self.RunAndVerify(flag_value='1', expected_output_re=EXPECTED_OUTPUT_NO_FILTER_RE, other_flag=None) def testOverrideNonFilterFlags(self): """Tests that --gtest_list_tests overrides the non-filter flags.""" self.RunAndVerify(flag_value='1', expected_output_re=EXPECTED_OUTPUT_NO_FILTER_RE, other_flag='--gtest_break_on_failure') def testWithFilterFlags(self): """Tests that --gtest_list_tests takes into account the --gtest_filter flag.""" self.RunAndVerify(flag_value='1', expected_output_re=EXPECTED_OUTPUT_FILTER_FOO_RE, other_flag='--gtest_filter=Foo*') if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_list_tests_unittest_.cc000066400000000000000000000111461346026536000252650ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: phanna@google.com (Patrick Hanna) // Unit test for Google Test's --gtest_list_tests flag. // // A user can ask Google Test to list all tests that will run // so that when using a filter, a user will know what // tests to look for. The tests will not be run after listing. // // This program will be invoked from a Python unit test. // Don't run it directly. #include "gtest/gtest.h" // Several different test cases and tests that will be listed. TEST(Foo, Bar1) { } TEST(Foo, Bar2) { } TEST(Foo, DISABLED_Bar3) { } TEST(Abc, Xyz) { } TEST(Abc, Def) { } TEST(FooBar, Baz) { } class FooTest : public testing::Test { }; TEST_F(FooTest, Test1) { } TEST_F(FooTest, DISABLED_Test2) { } TEST_F(FooTest, Test3) { } TEST(FooDeathTest, Test1) { } // A group of value-parameterized tests. class MyType { public: explicit MyType(const std::string& a_value) : value_(a_value) {} const std::string& value() const { return value_; } private: std::string value_; }; // Teaches Google Test how to print a MyType. void PrintTo(const MyType& x, std::ostream* os) { *os << x.value(); } class ValueParamTest : public testing::TestWithParam { }; TEST_P(ValueParamTest, TestA) { } TEST_P(ValueParamTest, TestB) { } INSTANTIATE_TEST_CASE_P( MyInstantiation, ValueParamTest, testing::Values(MyType("one line"), MyType("two\nlines"), MyType("a very\nloooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooong line"))); // NOLINT // A group of typed tests. // A deliberately long type name for testing the line-truncating // behavior when printing a type parameter. class VeryLoooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooogName { // NOLINT }; template class TypedTest : public testing::Test { }; template class MyArray { }; typedef testing::Types > MyTypes; TYPED_TEST_CASE(TypedTest, MyTypes); TYPED_TEST(TypedTest, TestA) { } TYPED_TEST(TypedTest, TestB) { } // A group of type-parameterized tests. template class TypeParamTest : public testing::Test { }; TYPED_TEST_CASE_P(TypeParamTest); TYPED_TEST_P(TypeParamTest, TestA) { } TYPED_TEST_P(TypeParamTest, TestB) { } REGISTER_TYPED_TEST_CASE_P(TypeParamTest, TestA, TestB); INSTANTIATE_TYPED_TEST_CASE_P(My, TypeParamTest, MyTypes); int main(int argc, char **argv) { ::testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_main_unittest.cc000066400000000000000000000035401346026536000236540ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest.h" // Tests that we don't have to define main() when we link to // gtest_main instead of gtest. namespace { TEST(GTestMainTest, ShouldSucceed) { } } // namespace // We are using the main() function defined in src/gtest_main.cc, so // we don't define it here. ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_no_test_unittest.cc000066400000000000000000000046171346026536000244110ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// Tests that a Google Test program that has no test defined can run // successfully. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest.h" int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); // An ad-hoc assertion outside of all tests. // // This serves three purposes: // // 1. It verifies that an ad-hoc assertion can be executed even if // no test is defined. // 2. It verifies that a failed ad-hoc assertion causes the test // program to fail. // 3. We had a bug where the XML output won't be generated if an // assertion is executed before RUN_ALL_TESTS() is called, even // though --gtest_output=xml is specified. This makes sure the // bug is fixed and doesn't regress. EXPECT_EQ(1, 2); // The above EXPECT_EQ() should cause RUN_ALL_TESTS() to return non-zero. return RUN_ALL_TESTS() ? 0 : 1; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_output_test.py000077500000000000000000000273451346026536000234470ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Tests the text output of Google C++ Testing Framework. SYNOPSIS gtest_output_test.py --build_dir=BUILD/DIR --gengolden # where BUILD/DIR contains the built gtest_output_test_ file. gtest_output_test.py --gengolden gtest_output_test.py """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import re import sys import gtest_test_utils # The flag for generating the golden file GENGOLDEN_FLAG = '--gengolden' CATCH_EXCEPTIONS_ENV_VAR_NAME = 'GTEST_CATCH_EXCEPTIONS' IS_WINDOWS = os.name == 'nt' # TODO(vladl@google.com): remove the _lin suffix. GOLDEN_NAME = 'gtest_output_test_golden_lin.txt' PROGRAM_PATH = gtest_test_utils.GetTestExecutablePath('gtest_output_test_') # At least one command we exercise must not have the # --gtest_internal_skip_environment_and_ad_hoc_tests flag. 
COMMAND_LIST_TESTS = ({}, [PROGRAM_PATH, '--gtest_list_tests']) COMMAND_WITH_COLOR = ({}, [PROGRAM_PATH, '--gtest_color=yes']) COMMAND_WITH_TIME = ({}, [PROGRAM_PATH, '--gtest_print_time', '--gtest_internal_skip_environment_and_ad_hoc_tests', '--gtest_filter=FatalFailureTest.*:LoggingTest.*']) COMMAND_WITH_DISABLED = ( {}, [PROGRAM_PATH, '--gtest_also_run_disabled_tests', '--gtest_internal_skip_environment_and_ad_hoc_tests', '--gtest_filter=*DISABLED_*']) COMMAND_WITH_SHARDING = ( {'GTEST_SHARD_INDEX': '1', 'GTEST_TOTAL_SHARDS': '2'}, [PROGRAM_PATH, '--gtest_internal_skip_environment_and_ad_hoc_tests', '--gtest_filter=PassingTest.*']) GOLDEN_PATH = os.path.join(gtest_test_utils.GetSourceDir(), GOLDEN_NAME) def ToUnixLineEnding(s): """Changes all Windows/Mac line endings in s to UNIX line endings.""" return s.replace('\r\n', '\n').replace('\r', '\n') def RemoveLocations(test_output): """Removes all file location info from a Google Test program's output. Args: test_output: the output of a Google Test program. Returns: output with all file location info (in the form of 'DIRECTORY/FILE_NAME:LINE_NUMBER: 'or 'DIRECTORY\\FILE_NAME(LINE_NUMBER): ') replaced by 'FILE_NAME:#: '. """ return re.sub(r'.*[/\\](.+)(\:\d+|\(\d+\))\: ', r'\1:#: ', test_output) def RemoveStackTraceDetails(output): """Removes all stack traces from a Google Test program's output.""" # *? means "find the shortest string that matches". return re.sub(r'Stack trace:(.|\n)*?\n\n', 'Stack trace: (omitted)\n\n', output) def RemoveStackTraces(output): """Removes all traces of stack traces from a Google Test program's output.""" # *? means "find the shortest string that matches". return re.sub(r'Stack trace:(.|\n)*?\n\n', '', output) def RemoveTime(output): """Removes all time information from a Google Test program's output.""" return re.sub(r'\(\d+ ms', '(? ms', output) def RemoveTypeInfoDetails(test_output): """Removes compiler-specific type info from Google Test program's output. Args: test_output: the output of a Google Test program. Returns: output with type information normalized to canonical form. """ # some compilers output the name of type 'unsigned int' as 'unsigned' return re.sub(r'unsigned int', 'unsigned', test_output) def NormalizeToCurrentPlatform(test_output): """Normalizes platform specific output details for easier comparison.""" if IS_WINDOWS: # Removes the color information that is not present on Windows. test_output = re.sub('\x1b\\[(0;3\d)?m', '', test_output) # Changes failure message headers into the Windows format. test_output = re.sub(r': Failure\n', r': error: ', test_output) # Changes file(line_number) to file:line_number. test_output = re.sub(r'((\w|\.)+)\((\d+)\):', r'\1:\3:', test_output) return test_output def RemoveTestCounts(output): """Removes test counts from a Google Test program's output.""" output = re.sub(r'\d+ tests?, listed below', '? tests, listed below', output) output = re.sub(r'\d+ FAILED TESTS', '? FAILED TESTS', output) output = re.sub(r'\d+ tests? from \d+ test cases?', '? tests from ? test cases', output) output = re.sub(r'\d+ tests? from ([a-zA-Z_])', r'? tests from \1', output) return re.sub(r'\d+ tests?\.', '? tests.', output) def RemoveMatchingTests(test_output, pattern): """Removes output of specified tests from a Google Test program's output. This function strips not only the beginning and the end of a test but also all output in between. Args: test_output: A string containing the test output. pattern: A regex string that matches names of test cases or tests to remove. 
Returns: Contents of test_output with tests whose names match pattern removed. """ test_output = re.sub( r'.*\[ RUN \] .*%s(.|\n)*?\[( FAILED | OK )\] .*%s.*\n' % ( pattern, pattern), '', test_output) return re.sub(r'.*%s.*\n' % pattern, '', test_output) def NormalizeOutput(output): """Normalizes output (the output of gtest_output_test_.exe).""" output = ToUnixLineEnding(output) output = RemoveLocations(output) output = RemoveStackTraceDetails(output) output = RemoveTime(output) return output def GetShellCommandOutput(env_cmd): """Runs a command in a sub-process, and returns its output in a string. Args: env_cmd: The shell command. A 2-tuple where element 0 is a dict of extra environment variables to set, and element 1 is a string with the command and any flags. Returns: A string with the command's combined standard and diagnostic output. """ # Spawns cmd in a sub-process, and gets its standard I/O file objects. # Set and save the environment properly. environ = os.environ.copy() environ.update(env_cmd[0]) p = gtest_test_utils.Subprocess(env_cmd[1], env=environ) return p.output def GetCommandOutput(env_cmd): """Runs a command and returns its output with all file location info stripped off. Args: env_cmd: The shell command. A 2-tuple where element 0 is a dict of extra environment variables to set, and element 1 is a string with the command and any flags. """ # Disables exception pop-ups on Windows. environ, cmdline = env_cmd environ = dict(environ) # Ensures we are modifying a copy. environ[CATCH_EXCEPTIONS_ENV_VAR_NAME] = '1' return NormalizeOutput(GetShellCommandOutput((environ, cmdline))) def GetOutputOfAllCommands(): """Returns concatenated output from several representative commands.""" return (GetCommandOutput(COMMAND_WITH_COLOR) + GetCommandOutput(COMMAND_WITH_TIME) + GetCommandOutput(COMMAND_WITH_DISABLED) + GetCommandOutput(COMMAND_WITH_SHARDING)) test_list = GetShellCommandOutput(COMMAND_LIST_TESTS) SUPPORTS_DEATH_TESTS = 'DeathTest' in test_list SUPPORTS_TYPED_TESTS = 'TypedTest' in test_list SUPPORTS_THREADS = 'ExpectFailureWithThreadsTest' in test_list SUPPORTS_STACK_TRACES = False CAN_GENERATE_GOLDEN_FILE = (SUPPORTS_DEATH_TESTS and SUPPORTS_TYPED_TESTS and SUPPORTS_THREADS) class GTestOutputTest(gtest_test_utils.TestCase): def RemoveUnsupportedTests(self, test_output): if not SUPPORTS_DEATH_TESTS: test_output = RemoveMatchingTests(test_output, 'DeathTest') if not SUPPORTS_TYPED_TESTS: test_output = RemoveMatchingTests(test_output, 'TypedTest') test_output = RemoveMatchingTests(test_output, 'TypedDeathTest') test_output = RemoveMatchingTests(test_output, 'TypeParamDeathTest') if not SUPPORTS_THREADS: test_output = RemoveMatchingTests(test_output, 'ExpectFailureWithThreadsTest') test_output = RemoveMatchingTests(test_output, 'ScopedFakeTestPartResultReporterTest') test_output = RemoveMatchingTests(test_output, 'WorksConcurrently') if not SUPPORTS_STACK_TRACES: test_output = RemoveStackTraces(test_output) return test_output def testOutput(self): output = GetOutputOfAllCommands() golden_file = open(GOLDEN_PATH, 'rb') # A mis-configured source control system can cause \r appear in EOL # sequences when we read the golden file irrespective of an operating # system used. Therefore, we need to strip those \r's from newlines # unconditionally. golden = ToUnixLineEnding(golden_file.read()) golden_file.close() # We want the test to pass regardless of certain features being # supported or not. # We still have to remove type name specifics in all cases. 
normalized_actual = RemoveTypeInfoDetails(output) normalized_golden = RemoveTypeInfoDetails(golden) if CAN_GENERATE_GOLDEN_FILE: self.assertEqual(normalized_golden, normalized_actual) else: normalized_actual = NormalizeToCurrentPlatform( RemoveTestCounts(normalized_actual)) normalized_golden = NormalizeToCurrentPlatform( RemoveTestCounts(self.RemoveUnsupportedTests(normalized_golden))) # This code is very handy when debugging golden file differences: if os.getenv('DEBUG_GTEST_OUTPUT_TEST'): open(os.path.join( gtest_test_utils.GetSourceDir(), '_gtest_output_test_normalized_actual.txt'), 'wb').write( normalized_actual) open(os.path.join( gtest_test_utils.GetSourceDir(), '_gtest_output_test_normalized_golden.txt'), 'wb').write( normalized_golden) self.assertEqual(normalized_golden, normalized_actual) if __name__ == '__main__': if sys.argv[1:] == [GENGOLDEN_FLAG]: if CAN_GENERATE_GOLDEN_FILE: output = GetOutputOfAllCommands() golden_file = open(GOLDEN_PATH, 'wb') golden_file.write(output) golden_file.close() else: message = ( """Unable to write a golden file when compiled in an environment that does not support all the required features (death tests, typed tests, and multiple threads). Please generate the golden file using a binary built with those features enabled.""") sys.stderr.write(message) sys.exit(1) else: gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_output_test_.cc000066400000000000000000000772661346026536000235470ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // The purpose of this file is to generate Google Test output under // various conditions. The output will then be verified by // gtest_output_test.py to ensure that Google Test generates the // desired messages. Therefore, most tests in this file are MEANT TO // FAIL. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest-spi.h" #include "gtest/gtest.h" // Indicates that this translation unit is part of Google Test's // implementation. 
It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ #include #if GTEST_IS_THREADSAFE using testing::ScopedFakeTestPartResultReporter; using testing::TestPartResultArray; using testing::internal::Notification; using testing::internal::ThreadWithParam; #endif namespace posix = ::testing::internal::posix; using testing::internal::scoped_ptr; // Tests catching fatal failures. // A subroutine used by the following test. void TestEq1(int x) { ASSERT_EQ(1, x); } // This function calls a test subroutine, catches the fatal failure it // generates, and then returns early. void TryTestSubroutine() { // Calls a subrountine that yields a fatal failure. TestEq1(2); // Catches the fatal failure and aborts the test. // // The testing::Test:: prefix is necessary when calling // HasFatalFailure() outside of a TEST, TEST_F, or test fixture. if (testing::Test::HasFatalFailure()) return; // If we get here, something is wrong. FAIL() << "This should never be reached."; } TEST(PassingTest, PassingTest1) { } TEST(PassingTest, PassingTest2) { } // Tests that parameters of failing parameterized tests are printed in the // failing test summary. class FailingParamTest : public testing::TestWithParam {}; TEST_P(FailingParamTest, Fails) { EXPECT_EQ(1, GetParam()); } // This generates a test which will fail. Google Test is expected to print // its parameter when it outputs the list of all failed tests. INSTANTIATE_TEST_CASE_P(PrintingFailingParams, FailingParamTest, testing::Values(2)); static const char kGoldenString[] = "\"Line\0 1\"\nLine 2"; TEST(NonfatalFailureTest, EscapesStringOperands) { std::string actual = "actual \"string\""; EXPECT_EQ(kGoldenString, actual); const char* golden = kGoldenString; EXPECT_EQ(golden, actual); } // Tests catching a fatal failure in a subroutine. TEST(FatalFailureTest, FatalFailureInSubroutine) { printf("(expecting a failure that x should be 1)\n"); TryTestSubroutine(); } // Tests catching a fatal failure in a nested subroutine. TEST(FatalFailureTest, FatalFailureInNestedSubroutine) { printf("(expecting a failure that x should be 1)\n"); // Calls a subrountine that yields a fatal failure. TryTestSubroutine(); // Catches the fatal failure and aborts the test. // // When calling HasFatalFailure() inside a TEST, TEST_F, or test // fixture, the testing::Test:: prefix is not needed. if (HasFatalFailure()) return; // If we get here, something is wrong. FAIL() << "This should never be reached."; } // Tests HasFatalFailure() after a failed EXPECT check. TEST(FatalFailureTest, NonfatalFailureInSubroutine) { printf("(expecting a failure on false)\n"); EXPECT_TRUE(false); // Generates a nonfatal failure ASSERT_FALSE(HasFatalFailure()); // This should succeed. } // Tests interleaving user logging and Google Test assertions. TEST(LoggingTest, InterleavingLoggingAndAssertions) { static const int a[4] = { 3, 9, 2, 6 }; printf("(expecting 2 failures on (3) >= (a[i]))\n"); for (int i = 0; i < static_cast(sizeof(a)/sizeof(*a)); i++) { printf("i == %d\n", i); EXPECT_GE(3, a[i]); } } // Tests the SCOPED_TRACE macro. // A helper function for testing SCOPED_TRACE. void SubWithoutTrace(int n) { EXPECT_EQ(1, n); ASSERT_EQ(2, n); } // Another helper function for testing SCOPED_TRACE. 
void SubWithTrace(int n) { SCOPED_TRACE(testing::Message() << "n = " << n); SubWithoutTrace(n); } // Tests that SCOPED_TRACE() obeys lexical scopes. TEST(SCOPED_TRACETest, ObeysScopes) { printf("(expected to fail)\n"); // There should be no trace before SCOPED_TRACE() is invoked. ADD_FAILURE() << "This failure is expected, and shouldn't have a trace."; { SCOPED_TRACE("Expected trace"); // After SCOPED_TRACE(), a failure in the current scope should contain // the trace. ADD_FAILURE() << "This failure is expected, and should have a trace."; } // Once the control leaves the scope of the SCOPED_TRACE(), there // should be no trace again. ADD_FAILURE() << "This failure is expected, and shouldn't have a trace."; } // Tests that SCOPED_TRACE works inside a loop. TEST(SCOPED_TRACETest, WorksInLoop) { printf("(expected to fail)\n"); for (int i = 1; i <= 2; i++) { SCOPED_TRACE(testing::Message() << "i = " << i); SubWithoutTrace(i); } } // Tests that SCOPED_TRACE works in a subroutine. TEST(SCOPED_TRACETest, WorksInSubroutine) { printf("(expected to fail)\n"); SubWithTrace(1); SubWithTrace(2); } // Tests that SCOPED_TRACE can be nested. TEST(SCOPED_TRACETest, CanBeNested) { printf("(expected to fail)\n"); SCOPED_TRACE(""); // A trace without a message. SubWithTrace(2); } // Tests that multiple SCOPED_TRACEs can be used in the same scope. TEST(SCOPED_TRACETest, CanBeRepeated) { printf("(expected to fail)\n"); SCOPED_TRACE("A"); ADD_FAILURE() << "This failure is expected, and should contain trace point A."; SCOPED_TRACE("B"); ADD_FAILURE() << "This failure is expected, and should contain trace point A and B."; { SCOPED_TRACE("C"); ADD_FAILURE() << "This failure is expected, and should " << "contain trace point A, B, and C."; } SCOPED_TRACE("D"); ADD_FAILURE() << "This failure is expected, and should " << "contain trace point A, B, and D."; } #if GTEST_IS_THREADSAFE // Tests that SCOPED_TRACE()s can be used concurrently from multiple // threads. Namely, an assertion should be affected by // SCOPED_TRACE()s in its own thread only. // Here's the sequence of actions that happen in the test: // // Thread A (main) | Thread B (spawned) // ===============================|================================ // spawns thread B | // -------------------------------+-------------------------------- // waits for n1 | SCOPED_TRACE("Trace B"); // | generates failure #1 // | notifies n1 // -------------------------------+-------------------------------- // SCOPED_TRACE("Trace A"); | waits for n2 // generates failure #2 | // notifies n2 | // -------------------------------|-------------------------------- // waits for n3 | generates failure #3 // | trace B dies // | generates failure #4 // | notifies n3 // -------------------------------|-------------------------------- // generates failure #5 | finishes // trace A dies | // generates failure #6 | // -------------------------------|-------------------------------- // waits for thread B to finish | struct CheckPoints { Notification n1; Notification n2; Notification n3; }; static void ThreadWithScopedTrace(CheckPoints* check_points) { { SCOPED_TRACE("Trace B"); ADD_FAILURE() << "Expected failure #1 (in thread B, only trace B alive)."; check_points->n1.Notify(); check_points->n2.WaitForNotification(); ADD_FAILURE() << "Expected failure #3 (in thread B, trace A & B both alive)."; } // Trace B dies here. 
ADD_FAILURE() << "Expected failure #4 (in thread B, only trace A alive)."; check_points->n3.Notify(); } TEST(SCOPED_TRACETest, WorksConcurrently) { printf("(expecting 6 failures)\n"); CheckPoints check_points; ThreadWithParam thread(&ThreadWithScopedTrace, &check_points, NULL); check_points.n1.WaitForNotification(); { SCOPED_TRACE("Trace A"); ADD_FAILURE() << "Expected failure #2 (in thread A, trace A & B both alive)."; check_points.n2.Notify(); check_points.n3.WaitForNotification(); ADD_FAILURE() << "Expected failure #5 (in thread A, only trace A alive)."; } // Trace A dies here. ADD_FAILURE() << "Expected failure #6 (in thread A, no trace alive)."; thread.Join(); } #endif // GTEST_IS_THREADSAFE TEST(DisabledTestsWarningTest, DISABLED_AlsoRunDisabledTestsFlagSuppressesWarning) { // This test body is intentionally empty. Its sole purpose is for // verifying that the --gtest_also_run_disabled_tests flag // suppresses the "YOU HAVE 12 DISABLED TESTS" warning at the end of // the test output. } // Tests using assertions outside of TEST and TEST_F. // // This function creates two failures intentionally. void AdHocTest() { printf("The non-test part of the code is expected to have 2 failures.\n\n"); EXPECT_TRUE(false); EXPECT_EQ(2, 3); } // Runs all TESTs, all TEST_Fs, and the ad hoc test. int RunAllTests() { AdHocTest(); return RUN_ALL_TESTS(); } // Tests non-fatal failures in the fixture constructor. class NonFatalFailureInFixtureConstructorTest : public testing::Test { protected: NonFatalFailureInFixtureConstructorTest() { printf("(expecting 5 failures)\n"); ADD_FAILURE() << "Expected failure #1, in the test fixture c'tor."; } ~NonFatalFailureInFixtureConstructorTest() { ADD_FAILURE() << "Expected failure #5, in the test fixture d'tor."; } virtual void SetUp() { ADD_FAILURE() << "Expected failure #2, in SetUp()."; } virtual void TearDown() { ADD_FAILURE() << "Expected failure #4, in TearDown."; } }; TEST_F(NonFatalFailureInFixtureConstructorTest, FailureInConstructor) { ADD_FAILURE() << "Expected failure #3, in the test body."; } // Tests fatal failures in the fixture constructor. class FatalFailureInFixtureConstructorTest : public testing::Test { protected: FatalFailureInFixtureConstructorTest() { printf("(expecting 2 failures)\n"); Init(); } ~FatalFailureInFixtureConstructorTest() { ADD_FAILURE() << "Expected failure #2, in the test fixture d'tor."; } virtual void SetUp() { ADD_FAILURE() << "UNEXPECTED failure in SetUp(). " << "We should never get here, as the test fixture c'tor " << "had a fatal failure."; } virtual void TearDown() { ADD_FAILURE() << "UNEXPECTED failure in TearDown(). " << "We should never get here, as the test fixture c'tor " << "had a fatal failure."; } private: void Init() { FAIL() << "Expected failure #1, in the test fixture c'tor."; } }; TEST_F(FatalFailureInFixtureConstructorTest, FailureInConstructor) { ADD_FAILURE() << "UNEXPECTED failure in the test body. " << "We should never get here, as the test fixture c'tor " << "had a fatal failure."; } // Tests non-fatal failures in SetUp(). 
class NonFatalFailureInSetUpTest : public testing::Test { protected: virtual ~NonFatalFailureInSetUpTest() { Deinit(); } virtual void SetUp() { printf("(expecting 4 failures)\n"); ADD_FAILURE() << "Expected failure #1, in SetUp()."; } virtual void TearDown() { FAIL() << "Expected failure #3, in TearDown()."; } private: void Deinit() { FAIL() << "Expected failure #4, in the test fixture d'tor."; } }; TEST_F(NonFatalFailureInSetUpTest, FailureInSetUp) { FAIL() << "Expected failure #2, in the test function."; } // Tests fatal failures in SetUp(). class FatalFailureInSetUpTest : public testing::Test { protected: virtual ~FatalFailureInSetUpTest() { Deinit(); } virtual void SetUp() { printf("(expecting 3 failures)\n"); FAIL() << "Expected failure #1, in SetUp()."; } virtual void TearDown() { FAIL() << "Expected failure #2, in TearDown()."; } private: void Deinit() { FAIL() << "Expected failure #3, in the test fixture d'tor."; } }; TEST_F(FatalFailureInSetUpTest, FailureInSetUp) { FAIL() << "UNEXPECTED failure in the test function. " << "We should never get here, as SetUp() failed."; } TEST(AddFailureAtTest, MessageContainsSpecifiedFileAndLineNumber) { ADD_FAILURE_AT("foo.cc", 42) << "Expected failure in foo.cc"; } #if GTEST_IS_THREADSAFE // A unary function that may die. void DieIf(bool should_die) { GTEST_CHECK_(!should_die) << " - death inside DieIf()."; } // Tests running death tests in a multi-threaded context. // Used for coordination between the main and the spawn thread. struct SpawnThreadNotifications { SpawnThreadNotifications() {} Notification spawn_thread_started; Notification spawn_thread_ok_to_terminate; private: GTEST_DISALLOW_COPY_AND_ASSIGN_(SpawnThreadNotifications); }; // The function to be executed in the thread spawn by the // MultipleThreads test (below). static void ThreadRoutine(SpawnThreadNotifications* notifications) { // Signals the main thread that this thread has started. notifications->spawn_thread_started.Notify(); // Waits for permission to finish from the main thread. notifications->spawn_thread_ok_to_terminate.WaitForNotification(); } // This is a death-test test, but it's not named with a DeathTest // suffix. It starts threads which might interfere with later // death tests, so it must run after all other death tests. class DeathTestAndMultiThreadsTest : public testing::Test { protected: // Starts a thread and waits for it to begin. virtual void SetUp() { thread_.reset(new ThreadWithParam( &ThreadRoutine, ¬ifications_, NULL)); notifications_.spawn_thread_started.WaitForNotification(); } // Tells the thread to finish, and reaps it. // Depending on the version of the thread library in use, // a manager thread might still be left running that will interfere // with later death tests. This is unfortunate, but this class // cleans up after itself as best it can. virtual void TearDown() { notifications_.spawn_thread_ok_to_terminate.Notify(); } private: SpawnThreadNotifications notifications_; scoped_ptr > thread_; }; #endif // GTEST_IS_THREADSAFE // The MixedUpTestCaseTest test case verifies that Google Test will fail a // test if it uses a different fixture class than what other tests in // the same test case use. It deliberately contains two fixture // classes with the same name but defined in different namespaces. // The MixedUpTestCaseWithSameTestNameTest test case verifies that // when the user defines two tests with the same test case name AND // same test name (but in different namespaces), the second test will // fail. 
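// Editorial note (not part of the upstream gtest sources; roughly speaking):
// Google Test can flag these mix-ups because each registered test records the
// TypeId of its fixture class, and at run time the framework compares it with
// the fixture TypeId of the first test registered under the same test case
// name, emitting the mismatch messages that the golden file expects.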
namespace foo { class MixedUpTestCaseTest : public testing::Test { }; TEST_F(MixedUpTestCaseTest, FirstTestFromNamespaceFoo) {} TEST_F(MixedUpTestCaseTest, SecondTestFromNamespaceFoo) {} class MixedUpTestCaseWithSameTestNameTest : public testing::Test { }; TEST_F(MixedUpTestCaseWithSameTestNameTest, TheSecondTestWithThisNameShouldFail) {} } // namespace foo namespace bar { class MixedUpTestCaseTest : public testing::Test { }; // The following two tests are expected to fail. We rely on the // golden file to check that Google Test generates the right error message. TEST_F(MixedUpTestCaseTest, ThisShouldFail) {} TEST_F(MixedUpTestCaseTest, ThisShouldFailToo) {} class MixedUpTestCaseWithSameTestNameTest : public testing::Test { }; // Expected to fail. We rely on the golden file to check that Google Test // generates the right error message. TEST_F(MixedUpTestCaseWithSameTestNameTest, TheSecondTestWithThisNameShouldFail) {} } // namespace bar // The following two test cases verify that Google Test catches the user // error of mixing TEST and TEST_F in the same test case. The first // test case checks the scenario where TEST_F appears before TEST, and // the second one checks where TEST appears before TEST_F. class TEST_F_before_TEST_in_same_test_case : public testing::Test { }; TEST_F(TEST_F_before_TEST_in_same_test_case, DefinedUsingTEST_F) {} // Expected to fail. We rely on the golden file to check that Google Test // generates the right error message. TEST(TEST_F_before_TEST_in_same_test_case, DefinedUsingTESTAndShouldFail) {} class TEST_before_TEST_F_in_same_test_case : public testing::Test { }; TEST(TEST_before_TEST_F_in_same_test_case, DefinedUsingTEST) {} // Expected to fail. We rely on the golden file to check that Google Test // generates the right error message. TEST_F(TEST_before_TEST_F_in_same_test_case, DefinedUsingTEST_FAndShouldFail) { } // Used for testing EXPECT_NONFATAL_FAILURE() and EXPECT_FATAL_FAILURE(). int global_integer = 0; // Tests that EXPECT_NONFATAL_FAILURE() can reference global variables. TEST(ExpectNonfatalFailureTest, CanReferenceGlobalVariables) { global_integer = 0; EXPECT_NONFATAL_FAILURE({ EXPECT_EQ(1, global_integer) << "Expected non-fatal failure."; }, "Expected non-fatal failure."); } // Tests that EXPECT_NONFATAL_FAILURE() can reference local variables // (static or not). TEST(ExpectNonfatalFailureTest, CanReferenceLocalVariables) { int m = 0; static int n; n = 1; EXPECT_NONFATAL_FAILURE({ EXPECT_EQ(m, n) << "Expected non-fatal failure."; }, "Expected non-fatal failure."); } // Tests that EXPECT_NONFATAL_FAILURE() succeeds when there is exactly // one non-fatal failure and no fatal failure. TEST(ExpectNonfatalFailureTest, SucceedsWhenThereIsOneNonfatalFailure) { EXPECT_NONFATAL_FAILURE({ ADD_FAILURE() << "Expected non-fatal failure."; }, "Expected non-fatal failure."); } // Tests that EXPECT_NONFATAL_FAILURE() fails when there is no // non-fatal failure. TEST(ExpectNonfatalFailureTest, FailsWhenThereIsNoNonfatalFailure) { printf("(expecting a failure)\n"); EXPECT_NONFATAL_FAILURE({ }, ""); } // Tests that EXPECT_NONFATAL_FAILURE() fails when there are two // non-fatal failures. TEST(ExpectNonfatalFailureTest, FailsWhenThereAreTwoNonfatalFailures) { printf("(expecting a failure)\n"); EXPECT_NONFATAL_FAILURE({ ADD_FAILURE() << "Expected non-fatal failure 1."; ADD_FAILURE() << "Expected non-fatal failure 2."; }, ""); } // Tests that EXPECT_NONFATAL_FAILURE() fails when there is one fatal // failure. 
TEST(ExpectNonfatalFailureTest, FailsWhenThereIsOneFatalFailure) { printf("(expecting a failure)\n"); EXPECT_NONFATAL_FAILURE({ FAIL() << "Expected fatal failure."; }, ""); } // Tests that EXPECT_NONFATAL_FAILURE() fails when the statement being // tested returns. TEST(ExpectNonfatalFailureTest, FailsWhenStatementReturns) { printf("(expecting a failure)\n"); EXPECT_NONFATAL_FAILURE({ return; }, ""); } #if GTEST_HAS_EXCEPTIONS // Tests that EXPECT_NONFATAL_FAILURE() fails when the statement being // tested throws. TEST(ExpectNonfatalFailureTest, FailsWhenStatementThrows) { printf("(expecting a failure)\n"); try { EXPECT_NONFATAL_FAILURE({ throw 0; }, ""); } catch(int) { // NOLINT } } #endif // GTEST_HAS_EXCEPTIONS // Tests that EXPECT_FATAL_FAILURE() can reference global variables. TEST(ExpectFatalFailureTest, CanReferenceGlobalVariables) { global_integer = 0; EXPECT_FATAL_FAILURE({ ASSERT_EQ(1, global_integer) << "Expected fatal failure."; }, "Expected fatal failure."); } // Tests that EXPECT_FATAL_FAILURE() can reference local static // variables. TEST(ExpectFatalFailureTest, CanReferenceLocalStaticVariables) { static int n; n = 1; EXPECT_FATAL_FAILURE({ ASSERT_EQ(0, n) << "Expected fatal failure."; }, "Expected fatal failure."); } // Tests that EXPECT_FATAL_FAILURE() succeeds when there is exactly // one fatal failure and no non-fatal failure. TEST(ExpectFatalFailureTest, SucceedsWhenThereIsOneFatalFailure) { EXPECT_FATAL_FAILURE({ FAIL() << "Expected fatal failure."; }, "Expected fatal failure."); } // Tests that EXPECT_FATAL_FAILURE() fails when there is no fatal // failure. TEST(ExpectFatalFailureTest, FailsWhenThereIsNoFatalFailure) { printf("(expecting a failure)\n"); EXPECT_FATAL_FAILURE({ }, ""); } // A helper for generating a fatal failure. void FatalFailure() { FAIL() << "Expected fatal failure."; } // Tests that EXPECT_FATAL_FAILURE() fails when there are two // fatal failures. TEST(ExpectFatalFailureTest, FailsWhenThereAreTwoFatalFailures) { printf("(expecting a failure)\n"); EXPECT_FATAL_FAILURE({ FatalFailure(); FatalFailure(); }, ""); } // Tests that EXPECT_FATAL_FAILURE() fails when there is one non-fatal // failure. TEST(ExpectFatalFailureTest, FailsWhenThereIsOneNonfatalFailure) { printf("(expecting a failure)\n"); EXPECT_FATAL_FAILURE({ ADD_FAILURE() << "Expected non-fatal failure."; }, ""); } // Tests that EXPECT_FATAL_FAILURE() fails when the statement being // tested returns. TEST(ExpectFatalFailureTest, FailsWhenStatementReturns) { printf("(expecting a failure)\n"); EXPECT_FATAL_FAILURE({ return; }, ""); } #if GTEST_HAS_EXCEPTIONS // Tests that EXPECT_FATAL_FAILURE() fails when the statement being // tested throws. TEST(ExpectFatalFailureTest, FailsWhenStatementThrows) { printf("(expecting a failure)\n"); try { EXPECT_FATAL_FAILURE({ throw 0; }, ""); } catch(int) { // NOLINT } } #endif // GTEST_HAS_EXCEPTIONS // This #ifdef block tests the output of typed tests. #if GTEST_HAS_TYPED_TEST template class TypedTest : public testing::Test { }; TYPED_TEST_CASE(TypedTest, testing::Types); TYPED_TEST(TypedTest, Success) { EXPECT_EQ(0, TypeParam()); } TYPED_TEST(TypedTest, Failure) { EXPECT_EQ(1, TypeParam()) << "Expected failure"; } #endif // GTEST_HAS_TYPED_TEST // This #ifdef block tests the output of type-parameterized tests. 
#if GTEST_HAS_TYPED_TEST_P template class TypedTestP : public testing::Test { }; TYPED_TEST_CASE_P(TypedTestP); TYPED_TEST_P(TypedTestP, Success) { EXPECT_EQ(0U, TypeParam()); } TYPED_TEST_P(TypedTestP, Failure) { EXPECT_EQ(1U, TypeParam()) << "Expected failure"; } REGISTER_TYPED_TEST_CASE_P(TypedTestP, Success, Failure); typedef testing::Types UnsignedTypes; INSTANTIATE_TYPED_TEST_CASE_P(Unsigned, TypedTestP, UnsignedTypes); #endif // GTEST_HAS_TYPED_TEST_P #if GTEST_HAS_DEATH_TEST // We rely on the golden file to verify that tests whose test case // name ends with DeathTest are run first. TEST(ADeathTest, ShouldRunFirst) { } # if GTEST_HAS_TYPED_TEST // We rely on the golden file to verify that typed tests whose test // case name ends with DeathTest are run first. template class ATypedDeathTest : public testing::Test { }; typedef testing::Types NumericTypes; TYPED_TEST_CASE(ATypedDeathTest, NumericTypes); TYPED_TEST(ATypedDeathTest, ShouldRunFirst) { } # endif // GTEST_HAS_TYPED_TEST # if GTEST_HAS_TYPED_TEST_P // We rely on the golden file to verify that type-parameterized tests // whose test case name ends with DeathTest are run first. template class ATypeParamDeathTest : public testing::Test { }; TYPED_TEST_CASE_P(ATypeParamDeathTest); TYPED_TEST_P(ATypeParamDeathTest, ShouldRunFirst) { } REGISTER_TYPED_TEST_CASE_P(ATypeParamDeathTest, ShouldRunFirst); INSTANTIATE_TYPED_TEST_CASE_P(My, ATypeParamDeathTest, NumericTypes); # endif // GTEST_HAS_TYPED_TEST_P #endif // GTEST_HAS_DEATH_TEST // Tests various failure conditions of // EXPECT_{,NON}FATAL_FAILURE{,_ON_ALL_THREADS}. class ExpectFailureTest : public testing::Test { public: // Must be public and not protected due to a bug in g++ 3.4.2. enum FailureMode { FATAL_FAILURE, NONFATAL_FAILURE }; static void AddFailure(FailureMode failure) { if (failure == FATAL_FAILURE) { FAIL() << "Expected fatal failure."; } else { ADD_FAILURE() << "Expected non-fatal failure."; } } }; TEST_F(ExpectFailureTest, ExpectFatalFailure) { // Expected fatal failure, but succeeds. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE(SUCCEED(), "Expected fatal failure."); // Expected fatal failure, but got a non-fatal failure. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE(AddFailure(NONFATAL_FAILURE), "Expected non-fatal " "failure."); // Wrong message. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE(AddFailure(FATAL_FAILURE), "Some other fatal failure " "expected."); } TEST_F(ExpectFailureTest, ExpectNonFatalFailure) { // Expected non-fatal failure, but succeeds. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE(SUCCEED(), "Expected non-fatal failure."); // Expected non-fatal failure, but got a fatal failure. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE(AddFailure(FATAL_FAILURE), "Expected fatal failure."); // Wrong message. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE(AddFailure(NONFATAL_FAILURE), "Some other non-fatal " "failure."); } #if GTEST_IS_THREADSAFE class ExpectFailureWithThreadsTest : public ExpectFailureTest { protected: static void AddFailureInOtherThread(FailureMode failure) { ThreadWithParam thread(&AddFailure, failure, NULL); thread.Join(); } }; TEST_F(ExpectFailureWithThreadsTest, ExpectFatalFailure) { // We only intercept the current thread. printf("(expecting 2 failures)\n"); EXPECT_FATAL_FAILURE(AddFailureInOtherThread(FATAL_FAILURE), "Expected fatal failure."); } TEST_F(ExpectFailureWithThreadsTest, ExpectNonFatalFailure) { // We only intercept the current thread. 
printf("(expecting 2 failures)\n"); EXPECT_NONFATAL_FAILURE(AddFailureInOtherThread(NONFATAL_FAILURE), "Expected non-fatal failure."); } typedef ExpectFailureWithThreadsTest ScopedFakeTestPartResultReporterTest; // Tests that the ScopedFakeTestPartResultReporter only catches failures from // the current thread if it is instantiated with INTERCEPT_ONLY_CURRENT_THREAD. TEST_F(ScopedFakeTestPartResultReporterTest, InterceptOnlyCurrentThread) { printf("(expecting 2 failures)\n"); TestPartResultArray results; { ScopedFakeTestPartResultReporter reporter( ScopedFakeTestPartResultReporter::INTERCEPT_ONLY_CURRENT_THREAD, &results); AddFailureInOtherThread(FATAL_FAILURE); AddFailureInOtherThread(NONFATAL_FAILURE); } // The two failures should not have been intercepted. EXPECT_EQ(0, results.size()) << "This shouldn't fail."; } #endif // GTEST_IS_THREADSAFE TEST_F(ExpectFailureTest, ExpectFatalFailureOnAllThreads) { // Expected fatal failure, but succeeds. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE_ON_ALL_THREADS(SUCCEED(), "Expected fatal failure."); // Expected fatal failure, but got a non-fatal failure. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE_ON_ALL_THREADS(AddFailure(NONFATAL_FAILURE), "Expected non-fatal failure."); // Wrong message. printf("(expecting 1 failure)\n"); EXPECT_FATAL_FAILURE_ON_ALL_THREADS(AddFailure(FATAL_FAILURE), "Some other fatal failure expected."); } TEST_F(ExpectFailureTest, ExpectNonFatalFailureOnAllThreads) { // Expected non-fatal failure, but succeeds. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(SUCCEED(), "Expected non-fatal " "failure."); // Expected non-fatal failure, but got a fatal failure. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(AddFailure(FATAL_FAILURE), "Expected fatal failure."); // Wrong message. printf("(expecting 1 failure)\n"); EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(AddFailure(NONFATAL_FAILURE), "Some other non-fatal failure."); } // Two test environments for testing testing::AddGlobalTestEnvironment(). class FooEnvironment : public testing::Environment { public: virtual void SetUp() { printf("%s", "FooEnvironment::SetUp() called.\n"); } virtual void TearDown() { printf("%s", "FooEnvironment::TearDown() called.\n"); FAIL() << "Expected fatal failure."; } }; class BarEnvironment : public testing::Environment { public: virtual void SetUp() { printf("%s", "BarEnvironment::SetUp() called.\n"); } virtual void TearDown() { printf("%s", "BarEnvironment::TearDown() called.\n"); ADD_FAILURE() << "Expected non-fatal failure."; } }; bool GTEST_FLAG(internal_skip_environment_and_ad_hoc_tests) = false; // The main function. // // The idea is to use Google Test to run all the tests we have defined (some // of them are intended to fail), and then compare the test results // with the "golden" file. int main(int argc, char **argv) { testing::GTEST_FLAG(print_time) = false; // We just run the tests, knowing some of them are intended to fail. // We will use a separate Python script to compare the output of // this program with the golden file. // It's hard to test InitGoogleTest() directly, as it has many // global side effects. The following line serves as a sanity test // for it. 
testing::InitGoogleTest(&argc, argv); if (argc >= 2 && (std::string(argv[1]) == "--gtest_internal_skip_environment_and_ad_hoc_tests")) GTEST_FLAG(internal_skip_environment_and_ad_hoc_tests) = true; #if GTEST_HAS_DEATH_TEST if (testing::internal::GTEST_FLAG(internal_run_death_test) != "") { // Skip the usual output capturing if we're running as the child // process of an threadsafe-style death test. # if GTEST_OS_WINDOWS posix::FReopen("nul:", "w", stdout); # else posix::FReopen("/dev/null", "w", stdout); # endif // GTEST_OS_WINDOWS return RUN_ALL_TESTS(); } #endif // GTEST_HAS_DEATH_TEST if (GTEST_FLAG(internal_skip_environment_and_ad_hoc_tests)) return RUN_ALL_TESTS(); // Registers two global test environments. // The golden file verifies that they are set up in the order they // are registered, and torn down in the reverse order. testing::AddGlobalTestEnvironment(new FooEnvironment); testing::AddGlobalTestEnvironment(new BarEnvironment); return RunAllTests(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_output_test_golden_lin.txt000066400000000000000000000670041346026536000260210ustar00rootroot00000000000000The non-test part of the code is expected to have 2 failures. gtest_output_test_.cc:#: Failure Value of: false Actual: false Expected: true gtest_output_test_.cc:#: Failure Value of: 3 Expected: 2 [==========] Running 63 tests from 28 test cases. [----------] Global test environment set-up. FooEnvironment::SetUp() called. BarEnvironment::SetUp() called. [----------] 1 test from ADeathTest [ RUN ] ADeathTest.ShouldRunFirst [ OK ] ADeathTest.ShouldRunFirst [----------] 1 test from ATypedDeathTest/0, where TypeParam = int [ RUN ] ATypedDeathTest/0.ShouldRunFirst [ OK ] ATypedDeathTest/0.ShouldRunFirst [----------] 1 test from ATypedDeathTest/1, where TypeParam = double [ RUN ] ATypedDeathTest/1.ShouldRunFirst [ OK ] ATypedDeathTest/1.ShouldRunFirst [----------] 1 test from My/ATypeParamDeathTest/0, where TypeParam = int [ RUN ] My/ATypeParamDeathTest/0.ShouldRunFirst [ OK ] My/ATypeParamDeathTest/0.ShouldRunFirst [----------] 1 test from My/ATypeParamDeathTest/1, where TypeParam = double [ RUN ] My/ATypeParamDeathTest/1.ShouldRunFirst [ OK ] My/ATypeParamDeathTest/1.ShouldRunFirst [----------] 2 tests from PassingTest [ RUN ] PassingTest.PassingTest1 [ OK ] PassingTest.PassingTest1 [ RUN ] PassingTest.PassingTest2 [ OK ] PassingTest.PassingTest2 [----------] 1 test from NonfatalFailureTest [ RUN ] NonfatalFailureTest.EscapesStringOperands gtest_output_test_.cc:#: Failure Value of: actual Actual: "actual \"string\"" Expected: kGoldenString Which is: "\"Line" gtest_output_test_.cc:#: Failure Value of: actual Actual: "actual \"string\"" Expected: golden Which is: "\"Line" [ FAILED ] NonfatalFailureTest.EscapesStringOperands [----------] 3 tests from FatalFailureTest [ RUN ] FatalFailureTest.FatalFailureInSubroutine (expecting a failure that x should be 1) gtest_output_test_.cc:#: Failure Value of: x Actual: 2 Expected: 1 [ FAILED ] FatalFailureTest.FatalFailureInSubroutine [ RUN ] FatalFailureTest.FatalFailureInNestedSubroutine (expecting a failure that x should be 1) gtest_output_test_.cc:#: Failure Value of: x Actual: 2 Expected: 1 [ FAILED ] FatalFailureTest.FatalFailureInNestedSubroutine [ RUN ] FatalFailureTest.NonfatalFailureInSubroutine (expecting a failure on false) gtest_output_test_.cc:#: Failure Value of: false Actual: false Expected: true [ FAILED ] FatalFailureTest.NonfatalFailureInSubroutine [----------] 1 test from LoggingTest [ RUN ] 
LoggingTest.InterleavingLoggingAndAssertions (expecting 2 failures on (3) >= (a[i])) i == 0 i == 1 gtest_output_test_.cc:#: Failure Expected: (3) >= (a[i]), actual: 3 vs 9 i == 2 i == 3 gtest_output_test_.cc:#: Failure Expected: (3) >= (a[i]), actual: 3 vs 6 [ FAILED ] LoggingTest.InterleavingLoggingAndAssertions [----------] 6 tests from SCOPED_TRACETest [ RUN ] SCOPED_TRACETest.ObeysScopes (expected to fail) gtest_output_test_.cc:#: Failure Failed This failure is expected, and shouldn't have a trace. gtest_output_test_.cc:#: Failure Failed This failure is expected, and should have a trace. Google Test trace: gtest_output_test_.cc:#: Expected trace gtest_output_test_.cc:#: Failure Failed This failure is expected, and shouldn't have a trace. [ FAILED ] SCOPED_TRACETest.ObeysScopes [ RUN ] SCOPED_TRACETest.WorksInLoop (expected to fail) gtest_output_test_.cc:#: Failure Value of: n Actual: 1 Expected: 2 Google Test trace: gtest_output_test_.cc:#: i = 1 gtest_output_test_.cc:#: Failure Value of: n Actual: 2 Expected: 1 Google Test trace: gtest_output_test_.cc:#: i = 2 [ FAILED ] SCOPED_TRACETest.WorksInLoop [ RUN ] SCOPED_TRACETest.WorksInSubroutine (expected to fail) gtest_output_test_.cc:#: Failure Value of: n Actual: 1 Expected: 2 Google Test trace: gtest_output_test_.cc:#: n = 1 gtest_output_test_.cc:#: Failure Value of: n Actual: 2 Expected: 1 Google Test trace: gtest_output_test_.cc:#: n = 2 [ FAILED ] SCOPED_TRACETest.WorksInSubroutine [ RUN ] SCOPED_TRACETest.CanBeNested (expected to fail) gtest_output_test_.cc:#: Failure Value of: n Actual: 2 Expected: 1 Google Test trace: gtest_output_test_.cc:#: n = 2 gtest_output_test_.cc:#: [ FAILED ] SCOPED_TRACETest.CanBeNested [ RUN ] SCOPED_TRACETest.CanBeRepeated (expected to fail) gtest_output_test_.cc:#: Failure Failed This failure is expected, and should contain trace point A. Google Test trace: gtest_output_test_.cc:#: A gtest_output_test_.cc:#: Failure Failed This failure is expected, and should contain trace point A and B. Google Test trace: gtest_output_test_.cc:#: B gtest_output_test_.cc:#: A gtest_output_test_.cc:#: Failure Failed This failure is expected, and should contain trace point A, B, and C. Google Test trace: gtest_output_test_.cc:#: C gtest_output_test_.cc:#: B gtest_output_test_.cc:#: A gtest_output_test_.cc:#: Failure Failed This failure is expected, and should contain trace point A, B, and D. Google Test trace: gtest_output_test_.cc:#: D gtest_output_test_.cc:#: B gtest_output_test_.cc:#: A [ FAILED ] SCOPED_TRACETest.CanBeRepeated [ RUN ] SCOPED_TRACETest.WorksConcurrently (expecting 6 failures) gtest_output_test_.cc:#: Failure Failed Expected failure #1 (in thread B, only trace B alive). Google Test trace: gtest_output_test_.cc:#: Trace B gtest_output_test_.cc:#: Failure Failed Expected failure #2 (in thread A, trace A & B both alive). Google Test trace: gtest_output_test_.cc:#: Trace A gtest_output_test_.cc:#: Failure Failed Expected failure #3 (in thread B, trace A & B both alive). Google Test trace: gtest_output_test_.cc:#: Trace B gtest_output_test_.cc:#: Failure Failed Expected failure #4 (in thread B, only trace A alive). gtest_output_test_.cc:#: Failure Failed Expected failure #5 (in thread A, only trace A alive). Google Test trace: gtest_output_test_.cc:#: Trace A gtest_output_test_.cc:#: Failure Failed Expected failure #6 (in thread A, no trace alive). 
[ FAILED ] SCOPED_TRACETest.WorksConcurrently [----------] 1 test from NonFatalFailureInFixtureConstructorTest [ RUN ] NonFatalFailureInFixtureConstructorTest.FailureInConstructor (expecting 5 failures) gtest_output_test_.cc:#: Failure Failed Expected failure #1, in the test fixture c'tor. gtest_output_test_.cc:#: Failure Failed Expected failure #2, in SetUp(). gtest_output_test_.cc:#: Failure Failed Expected failure #3, in the test body. gtest_output_test_.cc:#: Failure Failed Expected failure #4, in TearDown. gtest_output_test_.cc:#: Failure Failed Expected failure #5, in the test fixture d'tor. [ FAILED ] NonFatalFailureInFixtureConstructorTest.FailureInConstructor [----------] 1 test from FatalFailureInFixtureConstructorTest [ RUN ] FatalFailureInFixtureConstructorTest.FailureInConstructor (expecting 2 failures) gtest_output_test_.cc:#: Failure Failed Expected failure #1, in the test fixture c'tor. gtest_output_test_.cc:#: Failure Failed Expected failure #2, in the test fixture d'tor. [ FAILED ] FatalFailureInFixtureConstructorTest.FailureInConstructor [----------] 1 test from NonFatalFailureInSetUpTest [ RUN ] NonFatalFailureInSetUpTest.FailureInSetUp (expecting 4 failures) gtest_output_test_.cc:#: Failure Failed Expected failure #1, in SetUp(). gtest_output_test_.cc:#: Failure Failed Expected failure #2, in the test function. gtest_output_test_.cc:#: Failure Failed Expected failure #3, in TearDown(). gtest_output_test_.cc:#: Failure Failed Expected failure #4, in the test fixture d'tor. [ FAILED ] NonFatalFailureInSetUpTest.FailureInSetUp [----------] 1 test from FatalFailureInSetUpTest [ RUN ] FatalFailureInSetUpTest.FailureInSetUp (expecting 3 failures) gtest_output_test_.cc:#: Failure Failed Expected failure #1, in SetUp(). gtest_output_test_.cc:#: Failure Failed Expected failure #2, in TearDown(). gtest_output_test_.cc:#: Failure Failed Expected failure #3, in the test fixture d'tor. [ FAILED ] FatalFailureInSetUpTest.FailureInSetUp [----------] 1 test from AddFailureAtTest [ RUN ] AddFailureAtTest.MessageContainsSpecifiedFileAndLineNumber foo.cc:42: Failure Failed Expected failure in foo.cc [ FAILED ] AddFailureAtTest.MessageContainsSpecifiedFileAndLineNumber [----------] 4 tests from MixedUpTestCaseTest [ RUN ] MixedUpTestCaseTest.FirstTestFromNamespaceFoo [ OK ] MixedUpTestCaseTest.FirstTestFromNamespaceFoo [ RUN ] MixedUpTestCaseTest.SecondTestFromNamespaceFoo [ OK ] MixedUpTestCaseTest.SecondTestFromNamespaceFoo [ RUN ] MixedUpTestCaseTest.ThisShouldFail gtest.cc:#: Failure Failed All tests in the same test case must use the same test fixture class. However, in test case MixedUpTestCaseTest, you defined test FirstTestFromNamespaceFoo and test ThisShouldFail using two different test fixture classes. This can happen if the two classes are from different namespaces or translation units and have the same name. You should probably rename one of the classes to put the tests into different test cases. [ FAILED ] MixedUpTestCaseTest.ThisShouldFail [ RUN ] MixedUpTestCaseTest.ThisShouldFailToo gtest.cc:#: Failure Failed All tests in the same test case must use the same test fixture class. However, in test case MixedUpTestCaseTest, you defined test FirstTestFromNamespaceFoo and test ThisShouldFailToo using two different test fixture classes. This can happen if the two classes are from different namespaces or translation units and have the same name. You should probably rename one of the classes to put the tests into different test cases. 
[ FAILED ] MixedUpTestCaseTest.ThisShouldFailToo [----------] 2 tests from MixedUpTestCaseWithSameTestNameTest [ RUN ] MixedUpTestCaseWithSameTestNameTest.TheSecondTestWithThisNameShouldFail [ OK ] MixedUpTestCaseWithSameTestNameTest.TheSecondTestWithThisNameShouldFail [ RUN ] MixedUpTestCaseWithSameTestNameTest.TheSecondTestWithThisNameShouldFail gtest.cc:#: Failure Failed All tests in the same test case must use the same test fixture class. However, in test case MixedUpTestCaseWithSameTestNameTest, you defined test TheSecondTestWithThisNameShouldFail and test TheSecondTestWithThisNameShouldFail using two different test fixture classes. This can happen if the two classes are from different namespaces or translation units and have the same name. You should probably rename one of the classes to put the tests into different test cases. [ FAILED ] MixedUpTestCaseWithSameTestNameTest.TheSecondTestWithThisNameShouldFail [----------] 2 tests from TEST_F_before_TEST_in_same_test_case [ RUN ] TEST_F_before_TEST_in_same_test_case.DefinedUsingTEST_F [ OK ] TEST_F_before_TEST_in_same_test_case.DefinedUsingTEST_F [ RUN ] TEST_F_before_TEST_in_same_test_case.DefinedUsingTESTAndShouldFail gtest.cc:#: Failure Failed All tests in the same test case must use the same test fixture class, so mixing TEST_F and TEST in the same test case is illegal. In test case TEST_F_before_TEST_in_same_test_case, test DefinedUsingTEST_F is defined using TEST_F but test DefinedUsingTESTAndShouldFail is defined using TEST. You probably want to change the TEST to TEST_F or move it to another test case. [ FAILED ] TEST_F_before_TEST_in_same_test_case.DefinedUsingTESTAndShouldFail [----------] 2 tests from TEST_before_TEST_F_in_same_test_case [ RUN ] TEST_before_TEST_F_in_same_test_case.DefinedUsingTEST [ OK ] TEST_before_TEST_F_in_same_test_case.DefinedUsingTEST [ RUN ] TEST_before_TEST_F_in_same_test_case.DefinedUsingTEST_FAndShouldFail gtest.cc:#: Failure Failed All tests in the same test case must use the same test fixture class, so mixing TEST_F and TEST in the same test case is illegal. In test case TEST_before_TEST_F_in_same_test_case, test DefinedUsingTEST_FAndShouldFail is defined using TEST_F but test DefinedUsingTEST is defined using TEST. You probably want to change the TEST to TEST_F or move it to another test case. [ FAILED ] TEST_before_TEST_F_in_same_test_case.DefinedUsingTEST_FAndShouldFail [----------] 8 tests from ExpectNonfatalFailureTest [ RUN ] ExpectNonfatalFailureTest.CanReferenceGlobalVariables [ OK ] ExpectNonfatalFailureTest.CanReferenceGlobalVariables [ RUN ] ExpectNonfatalFailureTest.CanReferenceLocalVariables [ OK ] ExpectNonfatalFailureTest.CanReferenceLocalVariables [ RUN ] ExpectNonfatalFailureTest.SucceedsWhenThereIsOneNonfatalFailure [ OK ] ExpectNonfatalFailureTest.SucceedsWhenThereIsOneNonfatalFailure [ RUN ] ExpectNonfatalFailureTest.FailsWhenThereIsNoNonfatalFailure (expecting a failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: 0 failures [ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereIsNoNonfatalFailure [ RUN ] ExpectNonfatalFailureTest.FailsWhenThereAreTwoNonfatalFailures (expecting a failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: 2 failures gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure 1. gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure 2. 
[ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereAreTwoNonfatalFailures [ RUN ] ExpectNonfatalFailureTest.FailsWhenThereIsOneFatalFailure (expecting a failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. [ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereIsOneFatalFailure [ RUN ] ExpectNonfatalFailureTest.FailsWhenStatementReturns (expecting a failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: 0 failures [ FAILED ] ExpectNonfatalFailureTest.FailsWhenStatementReturns [ RUN ] ExpectNonfatalFailureTest.FailsWhenStatementThrows (expecting a failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: 0 failures [ FAILED ] ExpectNonfatalFailureTest.FailsWhenStatementThrows [----------] 8 tests from ExpectFatalFailureTest [ RUN ] ExpectFatalFailureTest.CanReferenceGlobalVariables [ OK ] ExpectFatalFailureTest.CanReferenceGlobalVariables [ RUN ] ExpectFatalFailureTest.CanReferenceLocalStaticVariables [ OK ] ExpectFatalFailureTest.CanReferenceLocalStaticVariables [ RUN ] ExpectFatalFailureTest.SucceedsWhenThereIsOneFatalFailure [ OK ] ExpectFatalFailureTest.SucceedsWhenThereIsOneFatalFailure [ RUN ] ExpectFatalFailureTest.FailsWhenThereIsNoFatalFailure (expecting a failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: 0 failures [ FAILED ] ExpectFatalFailureTest.FailsWhenThereIsNoFatalFailure [ RUN ] ExpectFatalFailureTest.FailsWhenThereAreTwoFatalFailures (expecting a failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: 2 failures gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. [ FAILED ] ExpectFatalFailureTest.FailsWhenThereAreTwoFatalFailures [ RUN ] ExpectFatalFailureTest.FailsWhenThereIsOneNonfatalFailure (expecting a failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure. 
[ FAILED ] ExpectFatalFailureTest.FailsWhenThereIsOneNonfatalFailure [ RUN ] ExpectFatalFailureTest.FailsWhenStatementReturns (expecting a failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: 0 failures [ FAILED ] ExpectFatalFailureTest.FailsWhenStatementReturns [ RUN ] ExpectFatalFailureTest.FailsWhenStatementThrows (expecting a failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: 0 failures [ FAILED ] ExpectFatalFailureTest.FailsWhenStatementThrows [----------] 2 tests from TypedTest/0, where TypeParam = int [ RUN ] TypedTest/0.Success [ OK ] TypedTest/0.Success [ RUN ] TypedTest/0.Failure gtest_output_test_.cc:#: Failure Value of: TypeParam() Actual: 0 Expected: 1 Expected failure [ FAILED ] TypedTest/0.Failure, where TypeParam = int [----------] 2 tests from Unsigned/TypedTestP/0, where TypeParam = unsigned char [ RUN ] Unsigned/TypedTestP/0.Success [ OK ] Unsigned/TypedTestP/0.Success [ RUN ] Unsigned/TypedTestP/0.Failure gtest_output_test_.cc:#: Failure Value of: TypeParam() Actual: '\0' Expected: 1U Which is: 1 Expected failure [ FAILED ] Unsigned/TypedTestP/0.Failure, where TypeParam = unsigned char [----------] 2 tests from Unsigned/TypedTestP/1, where TypeParam = unsigned int [ RUN ] Unsigned/TypedTestP/1.Success [ OK ] Unsigned/TypedTestP/1.Success [ RUN ] Unsigned/TypedTestP/1.Failure gtest_output_test_.cc:#: Failure Value of: TypeParam() Actual: 0 Expected: 1U Which is: 1 Expected failure [ FAILED ] Unsigned/TypedTestP/1.Failure, where TypeParam = unsigned int [----------] 4 tests from ExpectFailureTest [ RUN ] ExpectFailureTest.ExpectFatalFailure (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: gtest_output_test_.cc:#: Success: Succeeded (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure. (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure containing "Some other fatal failure expected." Actual: gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. [ FAILED ] ExpectFailureTest.ExpectFatalFailure [ RUN ] ExpectFailureTest.ExpectNonFatalFailure (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: gtest_output_test_.cc:#: Success: Succeeded (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure containing "Some other non-fatal failure." Actual: gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure. [ FAILED ] ExpectFailureTest.ExpectNonFatalFailure [ RUN ] ExpectFailureTest.ExpectFatalFailureOnAllThreads (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: gtest_output_test_.cc:#: Success: Succeeded (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure Actual: gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure. (expecting 1 failure) gtest.cc:#: Failure Expected: 1 fatal failure containing "Some other fatal failure expected." Actual: gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. 
[ FAILED ] ExpectFailureTest.ExpectFatalFailureOnAllThreads [ RUN ] ExpectFailureTest.ExpectNonFatalFailureOnAllThreads (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: gtest_output_test_.cc:#: Success: Succeeded (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: gtest_output_test_.cc:#: Fatal failure: Failed Expected fatal failure. (expecting 1 failure) gtest.cc:#: Failure Expected: 1 non-fatal failure containing "Some other non-fatal failure." Actual: gtest_output_test_.cc:#: Non-fatal failure: Failed Expected non-fatal failure. [ FAILED ] ExpectFailureTest.ExpectNonFatalFailureOnAllThreads [----------] 2 tests from ExpectFailureWithThreadsTest [ RUN ] ExpectFailureWithThreadsTest.ExpectFatalFailure (expecting 2 failures) gtest_output_test_.cc:#: Failure Failed Expected fatal failure. gtest.cc:#: Failure Expected: 1 fatal failure Actual: 0 failures [ FAILED ] ExpectFailureWithThreadsTest.ExpectFatalFailure [ RUN ] ExpectFailureWithThreadsTest.ExpectNonFatalFailure (expecting 2 failures) gtest_output_test_.cc:#: Failure Failed Expected non-fatal failure. gtest.cc:#: Failure Expected: 1 non-fatal failure Actual: 0 failures [ FAILED ] ExpectFailureWithThreadsTest.ExpectNonFatalFailure [----------] 1 test from ScopedFakeTestPartResultReporterTest [ RUN ] ScopedFakeTestPartResultReporterTest.InterceptOnlyCurrentThread (expecting 2 failures) gtest_output_test_.cc:#: Failure Failed Expected fatal failure. gtest_output_test_.cc:#: Failure Failed Expected non-fatal failure. [ FAILED ] ScopedFakeTestPartResultReporterTest.InterceptOnlyCurrentThread [----------] 1 test from PrintingFailingParams/FailingParamTest [ RUN ] PrintingFailingParams/FailingParamTest.Fails/0 gtest_output_test_.cc:#: Failure Value of: GetParam() Actual: 2 Expected: 1 [ FAILED ] PrintingFailingParams/FailingParamTest.Fails/0, where GetParam() = 2 [----------] Global test environment tear-down BarEnvironment::TearDown() called. gtest_output_test_.cc:#: Failure Failed Expected non-fatal failure. FooEnvironment::TearDown() called. gtest_output_test_.cc:#: Failure Failed Expected fatal failure. [==========] 63 tests from 28 test cases ran. [ PASSED ] 21 tests. 
[ FAILED ] 42 tests, listed below: [ FAILED ] NonfatalFailureTest.EscapesStringOperands [ FAILED ] FatalFailureTest.FatalFailureInSubroutine [ FAILED ] FatalFailureTest.FatalFailureInNestedSubroutine [ FAILED ] FatalFailureTest.NonfatalFailureInSubroutine [ FAILED ] LoggingTest.InterleavingLoggingAndAssertions [ FAILED ] SCOPED_TRACETest.ObeysScopes [ FAILED ] SCOPED_TRACETest.WorksInLoop [ FAILED ] SCOPED_TRACETest.WorksInSubroutine [ FAILED ] SCOPED_TRACETest.CanBeNested [ FAILED ] SCOPED_TRACETest.CanBeRepeated [ FAILED ] SCOPED_TRACETest.WorksConcurrently [ FAILED ] NonFatalFailureInFixtureConstructorTest.FailureInConstructor [ FAILED ] FatalFailureInFixtureConstructorTest.FailureInConstructor [ FAILED ] NonFatalFailureInSetUpTest.FailureInSetUp [ FAILED ] FatalFailureInSetUpTest.FailureInSetUp [ FAILED ] AddFailureAtTest.MessageContainsSpecifiedFileAndLineNumber [ FAILED ] MixedUpTestCaseTest.ThisShouldFail [ FAILED ] MixedUpTestCaseTest.ThisShouldFailToo [ FAILED ] MixedUpTestCaseWithSameTestNameTest.TheSecondTestWithThisNameShouldFail [ FAILED ] TEST_F_before_TEST_in_same_test_case.DefinedUsingTESTAndShouldFail [ FAILED ] TEST_before_TEST_F_in_same_test_case.DefinedUsingTEST_FAndShouldFail [ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereIsNoNonfatalFailure [ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereAreTwoNonfatalFailures [ FAILED ] ExpectNonfatalFailureTest.FailsWhenThereIsOneFatalFailure [ FAILED ] ExpectNonfatalFailureTest.FailsWhenStatementReturns [ FAILED ] ExpectNonfatalFailureTest.FailsWhenStatementThrows [ FAILED ] ExpectFatalFailureTest.FailsWhenThereIsNoFatalFailure [ FAILED ] ExpectFatalFailureTest.FailsWhenThereAreTwoFatalFailures [ FAILED ] ExpectFatalFailureTest.FailsWhenThereIsOneNonfatalFailure [ FAILED ] ExpectFatalFailureTest.FailsWhenStatementReturns [ FAILED ] ExpectFatalFailureTest.FailsWhenStatementThrows [ FAILED ] TypedTest/0.Failure, where TypeParam = int [ FAILED ] Unsigned/TypedTestP/0.Failure, where TypeParam = unsigned char [ FAILED ] Unsigned/TypedTestP/1.Failure, where TypeParam = unsigned int [ FAILED ] ExpectFailureTest.ExpectFatalFailure [ FAILED ] ExpectFailureTest.ExpectNonFatalFailure [ FAILED ] ExpectFailureTest.ExpectFatalFailureOnAllThreads [ FAILED ] ExpectFailureTest.ExpectNonFatalFailureOnAllThreads [ FAILED ] ExpectFailureWithThreadsTest.ExpectFatalFailure [ FAILED ] ExpectFailureWithThreadsTest.ExpectNonFatalFailure [ FAILED ] ScopedFakeTestPartResultReporterTest.InterceptOnlyCurrentThread [ FAILED ] PrintingFailingParams/FailingParamTest.Fails/0, where GetParam() = 2 42 FAILED TESTS  YOU HAVE 1 DISABLED TEST Note: Google Test filter = FatalFailureTest.*:LoggingTest.* [==========] Running 4 tests from 2 test cases. [----------] Global test environment set-up. [----------] 3 tests from FatalFailureTest [ RUN ] FatalFailureTest.FatalFailureInSubroutine (expecting a failure that x should be 1) gtest_output_test_.cc:#: Failure Value of: x Actual: 2 Expected: 1 [ FAILED ] FatalFailureTest.FatalFailureInSubroutine (? ms) [ RUN ] FatalFailureTest.FatalFailureInNestedSubroutine (expecting a failure that x should be 1) gtest_output_test_.cc:#: Failure Value of: x Actual: 2 Expected: 1 [ FAILED ] FatalFailureTest.FatalFailureInNestedSubroutine (? ms) [ RUN ] FatalFailureTest.NonfatalFailureInSubroutine (expecting a failure on false) gtest_output_test_.cc:#: Failure Value of: false Actual: false Expected: true [ FAILED ] FatalFailureTest.NonfatalFailureInSubroutine (? ms) [----------] 3 tests from FatalFailureTest (? 
ms total) [----------] 1 test from LoggingTest [ RUN ] LoggingTest.InterleavingLoggingAndAssertions (expecting 2 failures on (3) >= (a[i])) i == 0 i == 1 gtest_output_test_.cc:#: Failure Expected: (3) >= (a[i]), actual: 3 vs 9 i == 2 i == 3 gtest_output_test_.cc:#: Failure Expected: (3) >= (a[i]), actual: 3 vs 6 [ FAILED ] LoggingTest.InterleavingLoggingAndAssertions (? ms) [----------] 1 test from LoggingTest (? ms total) [----------] Global test environment tear-down [==========] 4 tests from 2 test cases ran. (? ms total) [ PASSED ] 0 tests. [ FAILED ] 4 tests, listed below: [ FAILED ] FatalFailureTest.FatalFailureInSubroutine [ FAILED ] FatalFailureTest.FatalFailureInNestedSubroutine [ FAILED ] FatalFailureTest.NonfatalFailureInSubroutine [ FAILED ] LoggingTest.InterleavingLoggingAndAssertions 4 FAILED TESTS Note: Google Test filter = *DISABLED_* [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from DisabledTestsWarningTest [ RUN ] DisabledTestsWarningTest.DISABLED_AlsoRunDisabledTestsFlagSuppressesWarning [ OK ] DisabledTestsWarningTest.DISABLED_AlsoRunDisabledTestsFlagSuppressesWarning [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. [ PASSED ] 1 test. Note: Google Test filter = PassingTest.* Note: This is test shard 2 of 2. [==========] Running 1 test from 1 test case. [----------] Global test environment set-up. [----------] 1 test from PassingTest [ RUN ] PassingTest.PassingTest2 [ OK ] PassingTest.PassingTest2 [----------] Global test environment tear-down [==========] 1 test from 1 test case ran. [ PASSED ] 1 test. ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_pred_impl_unittest.cc000066400000000000000000002271021346026536000247050ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // This file is AUTOMATICALLY GENERATED on 10/31/2011 by command // 'gen_gtest_pred_impl.py 5'. DO NOT EDIT BY HAND! 
// Regression test for gtest_pred_impl.h // // This file is generated by a script and quite long. If you intend to // learn how Google Test works by reading its unit tests, read // gtest_unittest.cc instead. // // This is intended as a regression test for the Google Test predicate // assertions. We compile it as part of the gtest_unittest target // only to keep the implementation tidy and compact, as it is quite // involved to set up the stage for testing Google Test using Google // Test itself. // // Currently, gtest_unittest takes ~11 seconds to run in the testing // daemon. In the future, if it grows too large and needs much more // time to finish, we should consider separating this file into a // stand-alone regression test. #include #include "gtest/gtest.h" #include "gtest/gtest-spi.h" // A user-defined data type. struct Bool { explicit Bool(int val) : value(val != 0) {} bool operator>(int n) const { return value > Bool(n).value; } Bool operator+(const Bool& rhs) const { return Bool(value + rhs.value); } bool operator==(const Bool& rhs) const { return value == rhs.value; } bool value; }; // Enables Bool to be used in assertions. std::ostream& operator<<(std::ostream& os, const Bool& x) { return os << (x.value ? "true" : "false"); } // Sample functions/functors for testing unary predicate assertions. // A unary predicate function. template bool PredFunction1(T1 v1) { return v1 > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. bool PredFunction1Int(int v1) { return v1 > 0; } bool PredFunction1Bool(Bool v1) { return v1 > 0; } // A unary predicate functor. struct PredFunctor1 { template bool operator()(const T1& v1) { return v1 > 0; } }; // A unary predicate-formatter function. template testing::AssertionResult PredFormatFunction1(const char* e1, const T1& v1) { if (PredFunction1(v1)) return testing::AssertionSuccess(); return testing::AssertionFailure() << e1 << " is expected to be positive, but evaluates to " << v1 << "."; } // A unary predicate-formatter functor. struct PredFormatFunctor1 { template testing::AssertionResult operator()(const char* e1, const T1& v1) const { return PredFormatFunction1(e1, v1); } }; // Tests for {EXPECT|ASSERT}_PRED_FORMAT1. class Predicate1Test : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false; n1_ = 0; } virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once. EXPECT_EQ(1, n1_) << "The predicate assertion didn't evaluate argument 2 " "exactly once."; // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. static bool finished_; static int n1_; }; bool Predicate1Test::expected_to_finish_; bool Predicate1Test::finished_; int Predicate1Test::n1_; typedef Predicate1Test EXPECT_PRED_FORMAT1Test; typedef Predicate1Test ASSERT_PRED_FORMAT1Test; typedef Predicate1Test EXPECT_PRED1Test; typedef Predicate1Test ASSERT_PRED1Test; // Tests a successful EXPECT_PRED1 where the // predicate-formatter is a function on a built-in type (int). 
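// Editorial note (not part of the upstream generated file): the tests below
// follow a common pattern -- the argument is written as ++n1_ (or n1_++), and
// Predicate1Test::TearDown() asserts n1_ == 1, verifying that the predicate
// assertion evaluated its argument exactly once; finished_ records whether
// control reached the end of the test body, so TearDown() can also check
// whether the assertion aborted the test as expected.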
TEST_F(EXPECT_PRED1Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED1(PredFunction1Int, ++n1_); finished_ = true; } // Tests a successful EXPECT_PRED1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED1Test, FunctionOnUserTypeSuccess) { EXPECT_PRED1(PredFunction1Bool, Bool(++n1_)); finished_ = true; } // Tests a successful EXPECT_PRED1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED1Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED1(PredFunctor1(), ++n1_); finished_ = true; } // Tests a successful EXPECT_PRED1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED1Test, FunctorOnUserTypeSuccess) { EXPECT_PRED1(PredFunctor1(), Bool(++n1_)); finished_ = true; } // Tests a failed EXPECT_PRED1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED1Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED1(PredFunction1Int, n1_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED1Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED1(PredFunction1Bool, Bool(n1_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED1Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED1(PredFunctor1(), n1_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED1Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED1(PredFunctor1(), Bool(n1_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED1Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED1(PredFunction1Int, ++n1_); finished_ = true; } // Tests a successful ASSERT_PRED1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED1Test, FunctionOnUserTypeSuccess) { ASSERT_PRED1(PredFunction1Bool, Bool(++n1_)); finished_ = true; } // Tests a successful ASSERT_PRED1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED1Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED1(PredFunctor1(), ++n1_); finished_ = true; } // Tests a successful ASSERT_PRED1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED1Test, FunctorOnUserTypeSuccess) { ASSERT_PRED1(PredFunctor1(), Bool(++n1_)); finished_ = true; } // Tests a failed ASSERT_PRED1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED1Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED1(PredFunction1Int, n1_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED1Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED1(PredFunction1Bool, Bool(n1_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED1 where the // predicate-formatter is a functor on a built-in type (int). 
TEST_F(ASSERT_PRED1Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED1(PredFunctor1(), n1_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED1Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED1(PredFunctor1(), Bool(n1_++)); finished_ = true; }, ""); } // Tests a successful EXPECT_PRED_FORMAT1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT1Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT1(PredFormatFunction1, ++n1_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT1Test, FunctionOnUserTypeSuccess) { EXPECT_PRED_FORMAT1(PredFormatFunction1, Bool(++n1_)); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT1Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT1(PredFormatFunctor1(), ++n1_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT1Test, FunctorOnUserTypeSuccess) { EXPECT_PRED_FORMAT1(PredFormatFunctor1(), Bool(++n1_)); finished_ = true; } // Tests a failed EXPECT_PRED_FORMAT1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT1Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT1(PredFormatFunction1, n1_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT1Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT1(PredFormatFunction1, Bool(n1_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT1Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT1(PredFormatFunctor1(), n1_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT1Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT1(PredFormatFunctor1(), Bool(n1_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED_FORMAT1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT1Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT1(PredFormatFunction1, ++n1_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT1Test, FunctionOnUserTypeSuccess) { ASSERT_PRED_FORMAT1(PredFormatFunction1, Bool(++n1_)); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT1Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT1(PredFormatFunctor1(), ++n1_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT1 where the // predicate-formatter is a functor on a user-defined type (Bool). 
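// ---------------------------------------------------------------------------
// Illustrative sketch (editorial addition, not part of the bundled gtest
// sources): the failure cases above rely on "gtest/gtest-spi.h", which lets a
// test assert that a piece of code produces a Google Test failure.  The test
// name below is hypothetical; EXPECT_NONFATAL_FAILURE and EXPECT_FATAL_FAILURE
// are the library's own macros.  An empty substring ("") matches any failure
// message, exactly as in the tests above.
// ---------------------------------------------------------------------------
#include "gtest/gtest.h"
#include "gtest/gtest-spi.h"

TEST(SketchSpiTest, DetectsExpectedFailures) {
  // Verifies that the wrapped EXPECT_* statement fails non-fatally.
  EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(1 == 2), "");
  // Verifies that the wrapped ASSERT_* statement fails fatally.  Note that
  // the statement passed to EXPECT_FATAL_FAILURE must not reference local
  // non-static variables.
  EXPECT_FATAL_FAILURE(ASSERT_TRUE(1 == 2), "");
}
// ---------------------------------------------------------------------------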
TEST_F(ASSERT_PRED_FORMAT1Test, FunctorOnUserTypeSuccess) { ASSERT_PRED_FORMAT1(PredFormatFunctor1(), Bool(++n1_)); finished_ = true; } // Tests a failed ASSERT_PRED_FORMAT1 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT1Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(PredFormatFunction1, n1_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT1 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT1Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(PredFormatFunction1, Bool(n1_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT1 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT1Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(PredFormatFunctor1(), n1_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT1 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT1Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(PredFormatFunctor1(), Bool(n1_++)); finished_ = true; }, ""); } // Sample functions/functors for testing binary predicate assertions. // A binary predicate function. template bool PredFunction2(T1 v1, T2 v2) { return v1 + v2 > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. bool PredFunction2Int(int v1, int v2) { return v1 + v2 > 0; } bool PredFunction2Bool(Bool v1, Bool v2) { return v1 + v2 > 0; } // A binary predicate functor. struct PredFunctor2 { template bool operator()(const T1& v1, const T2& v2) { return v1 + v2 > 0; } }; // A binary predicate-formatter function. template testing::AssertionResult PredFormatFunction2(const char* e1, const char* e2, const T1& v1, const T2& v2) { if (PredFunction2(v1, v2)) return testing::AssertionSuccess(); return testing::AssertionFailure() << e1 << " + " << e2 << " is expected to be positive, but evaluates to " << v1 + v2 << "."; } // A binary predicate-formatter functor. struct PredFormatFunctor2 { template testing::AssertionResult operator()(const char* e1, const char* e2, const T1& v1, const T2& v2) const { return PredFormatFunction2(e1, e2, v1, v2); } }; // Tests for {EXPECT|ASSERT}_PRED_FORMAT2. class Predicate2Test : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false; n1_ = n2_ = 0; } virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once. EXPECT_EQ(1, n1_) << "The predicate assertion didn't evaluate argument 2 " "exactly once."; EXPECT_EQ(1, n2_) << "The predicate assertion didn't evaluate argument 3 " "exactly once."; // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. 
static bool finished_; static int n1_; static int n2_; }; bool Predicate2Test::expected_to_finish_; bool Predicate2Test::finished_; int Predicate2Test::n1_; int Predicate2Test::n2_; typedef Predicate2Test EXPECT_PRED_FORMAT2Test; typedef Predicate2Test ASSERT_PRED_FORMAT2Test; typedef Predicate2Test EXPECT_PRED2Test; typedef Predicate2Test ASSERT_PRED2Test; // Tests a successful EXPECT_PRED2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED2Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED2(PredFunction2Int, ++n1_, ++n2_); finished_ = true; } // Tests a successful EXPECT_PRED2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED2Test, FunctionOnUserTypeSuccess) { EXPECT_PRED2(PredFunction2Bool, Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a successful EXPECT_PRED2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED2Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED2(PredFunctor2(), ++n1_, ++n2_); finished_ = true; } // Tests a successful EXPECT_PRED2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED2Test, FunctorOnUserTypeSuccess) { EXPECT_PRED2(PredFunctor2(), Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a failed EXPECT_PRED2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED2Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED2(PredFunction2Int, n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED2Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED2(PredFunction2Bool, Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED2Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED2(PredFunctor2(), n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED2Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED2(PredFunctor2(), Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED2Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED2(PredFunction2Int, ++n1_, ++n2_); finished_ = true; } // Tests a successful ASSERT_PRED2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED2Test, FunctionOnUserTypeSuccess) { ASSERT_PRED2(PredFunction2Bool, Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a successful ASSERT_PRED2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED2Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED2(PredFunctor2(), ++n1_, ++n2_); finished_ = true; } // Tests a successful ASSERT_PRED2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED2Test, FunctorOnUserTypeSuccess) { ASSERT_PRED2(PredFunctor2(), Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a failed ASSERT_PRED2 where the // predicate-formatter is a function on a built-in type (int). 
TEST_F(ASSERT_PRED2Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED2(PredFunction2Int, n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED2Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED2(PredFunction2Bool, Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED2Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED2(PredFunctor2(), n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED2Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED2(PredFunctor2(), Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a successful EXPECT_PRED_FORMAT2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT2Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT2(PredFormatFunction2, ++n1_, ++n2_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT2Test, FunctionOnUserTypeSuccess) { EXPECT_PRED_FORMAT2(PredFormatFunction2, Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT2Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT2(PredFormatFunctor2(), ++n1_, ++n2_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT2Test, FunctorOnUserTypeSuccess) { EXPECT_PRED_FORMAT2(PredFormatFunctor2(), Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a failed EXPECT_PRED_FORMAT2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT2Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(PredFormatFunction2, n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT2Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(PredFormatFunction2, Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT2Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(PredFormatFunctor2(), n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT2Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(PredFormatFunctor2(), Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED_FORMAT2 where the // predicate-formatter is a function on a built-in type (int). 
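// ---------------------------------------------------------------------------
// Illustrative sketch (editorial addition, not part of the bundled gtest
// sources): a client-side binary predicate-formatter in the same style as
// PredFormatFunction2 above.  The names are hypothetical; the point is that
// {EXPECT|ASSERT}_PRED_FORMAT2 receives both argument expressions and both
// values, so the failure message can show all of them.
// ---------------------------------------------------------------------------
#include "gtest/gtest.h"

namespace sketch {

testing::AssertionResult AssertSumIsPositive(const char* e1, const char* e2,
                                             int v1, int v2) {
  if (v1 + v2 > 0) return testing::AssertionSuccess();
  return testing::AssertionFailure()
      << e1 << " + " << e2 << " = " << v1 + v2
      << ", expected a positive sum.";
}

TEST(SketchBinaryPredicateTest, FormatterNamesBothArguments) {
  int a = 3;
  int b = -1;
  // On failure this reports both expressions ("a", "b") and their sum.
  EXPECT_PRED_FORMAT2(AssertSumIsPositive, a, b);
}

}  // namespace sketch
// ---------------------------------------------------------------------------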
TEST_F(ASSERT_PRED_FORMAT2Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT2(PredFormatFunction2, ++n1_, ++n2_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT2Test, FunctionOnUserTypeSuccess) { ASSERT_PRED_FORMAT2(PredFormatFunction2, Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT2Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT2(PredFormatFunctor2(), ++n1_, ++n2_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT2Test, FunctorOnUserTypeSuccess) { ASSERT_PRED_FORMAT2(PredFormatFunctor2(), Bool(++n1_), Bool(++n2_)); finished_ = true; } // Tests a failed ASSERT_PRED_FORMAT2 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT2Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(PredFormatFunction2, n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT2 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT2Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(PredFormatFunction2, Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT2 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT2Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(PredFormatFunctor2(), n1_++, n2_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT2 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT2Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(PredFormatFunctor2(), Bool(n1_++), Bool(n2_++)); finished_ = true; }, ""); } // Sample functions/functors for testing ternary predicate assertions. // A ternary predicate function. template bool PredFunction3(T1 v1, T2 v2, T3 v3) { return v1 + v2 + v3 > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. bool PredFunction3Int(int v1, int v2, int v3) { return v1 + v2 + v3 > 0; } bool PredFunction3Bool(Bool v1, Bool v2, Bool v3) { return v1 + v2 + v3 > 0; } // A ternary predicate functor. struct PredFunctor3 { template bool operator()(const T1& v1, const T2& v2, const T3& v3) { return v1 + v2 + v3 > 0; } }; // A ternary predicate-formatter function. template testing::AssertionResult PredFormatFunction3(const char* e1, const char* e2, const char* e3, const T1& v1, const T2& v2, const T3& v3) { if (PredFunction3(v1, v2, v3)) return testing::AssertionSuccess(); return testing::AssertionFailure() << e1 << " + " << e2 << " + " << e3 << " is expected to be positive, but evaluates to " << v1 + v2 + v3 << "."; } // A ternary predicate-formatter functor. 
struct PredFormatFunctor3 { template testing::AssertionResult operator()(const char* e1, const char* e2, const char* e3, const T1& v1, const T2& v2, const T3& v3) const { return PredFormatFunction3(e1, e2, e3, v1, v2, v3); } }; // Tests for {EXPECT|ASSERT}_PRED_FORMAT3. class Predicate3Test : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false; n1_ = n2_ = n3_ = 0; } virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once. EXPECT_EQ(1, n1_) << "The predicate assertion didn't evaluate argument 2 " "exactly once."; EXPECT_EQ(1, n2_) << "The predicate assertion didn't evaluate argument 3 " "exactly once."; EXPECT_EQ(1, n3_) << "The predicate assertion didn't evaluate argument 4 " "exactly once."; // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. static bool finished_; static int n1_; static int n2_; static int n3_; }; bool Predicate3Test::expected_to_finish_; bool Predicate3Test::finished_; int Predicate3Test::n1_; int Predicate3Test::n2_; int Predicate3Test::n3_; typedef Predicate3Test EXPECT_PRED_FORMAT3Test; typedef Predicate3Test ASSERT_PRED_FORMAT3Test; typedef Predicate3Test EXPECT_PRED3Test; typedef Predicate3Test ASSERT_PRED3Test; // Tests a successful EXPECT_PRED3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED3Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED3(PredFunction3Int, ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful EXPECT_PRED3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED3Test, FunctionOnUserTypeSuccess) { EXPECT_PRED3(PredFunction3Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a successful EXPECT_PRED3 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED3Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED3(PredFunctor3(), ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful EXPECT_PRED3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED3Test, FunctorOnUserTypeSuccess) { EXPECT_PRED3(PredFunctor3(), Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a failed EXPECT_PRED3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED3Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED3(PredFunction3Int, n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED3Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED3(PredFunction3Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED3 where the // predicate-formatter is a functor on a built-in type (int). 
TEST_F(EXPECT_PRED3Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED3(PredFunctor3(), n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED3Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED3(PredFunctor3(), Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED3Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED3(PredFunction3Int, ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful ASSERT_PRED3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED3Test, FunctionOnUserTypeSuccess) { ASSERT_PRED3(PredFunction3Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a successful ASSERT_PRED3 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED3Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED3(PredFunctor3(), ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful ASSERT_PRED3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED3Test, FunctorOnUserTypeSuccess) { ASSERT_PRED3(PredFunctor3(), Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a failed ASSERT_PRED3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED3Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED3(PredFunction3Int, n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED3Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED3(PredFunction3Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED3 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED3Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED3(PredFunctor3(), n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED3Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED3(PredFunctor3(), Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a successful EXPECT_PRED_FORMAT3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT3Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT3(PredFormatFunction3, ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT3Test, FunctionOnUserTypeSuccess) { EXPECT_PRED_FORMAT3(PredFormatFunction3, Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT3 where the // predicate-formatter is a functor on a built-in type (int). 
TEST_F(EXPECT_PRED_FORMAT3Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT3(PredFormatFunctor3(), ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT3Test, FunctorOnUserTypeSuccess) { EXPECT_PRED_FORMAT3(PredFormatFunctor3(), Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a failed EXPECT_PRED_FORMAT3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT3Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT3(PredFormatFunction3, n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT3Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT3(PredFormatFunction3, Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT3 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT3Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT3(PredFormatFunctor3(), n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT3Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT3(PredFormatFunctor3(), Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED_FORMAT3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT3Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT3(PredFormatFunction3, ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT3Test, FunctionOnUserTypeSuccess) { ASSERT_PRED_FORMAT3(PredFormatFunction3, Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT3 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT3Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT3(PredFormatFunctor3(), ++n1_, ++n2_, ++n3_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT3Test, FunctorOnUserTypeSuccess) { ASSERT_PRED_FORMAT3(PredFormatFunctor3(), Bool(++n1_), Bool(++n2_), Bool(++n3_)); finished_ = true; } // Tests a failed ASSERT_PRED_FORMAT3 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT3Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT3(PredFormatFunction3, n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT3 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT3Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT3(PredFormatFunction3, Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT3 where the // predicate-formatter is a functor on a built-in type (int). 
TEST_F(ASSERT_PRED_FORMAT3Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT3(PredFormatFunctor3(), n1_++, n2_++, n3_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT3 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT3Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT3(PredFormatFunctor3(), Bool(n1_++), Bool(n2_++), Bool(n3_++)); finished_ = true; }, ""); } // Sample functions/functors for testing 4-ary predicate assertions. // A 4-ary predicate function. template bool PredFunction4(T1 v1, T2 v2, T3 v3, T4 v4) { return v1 + v2 + v3 + v4 > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. bool PredFunction4Int(int v1, int v2, int v3, int v4) { return v1 + v2 + v3 + v4 > 0; } bool PredFunction4Bool(Bool v1, Bool v2, Bool v3, Bool v4) { return v1 + v2 + v3 + v4 > 0; } // A 4-ary predicate functor. struct PredFunctor4 { template bool operator()(const T1& v1, const T2& v2, const T3& v3, const T4& v4) { return v1 + v2 + v3 + v4 > 0; } }; // A 4-ary predicate-formatter function. template testing::AssertionResult PredFormatFunction4(const char* e1, const char* e2, const char* e3, const char* e4, const T1& v1, const T2& v2, const T3& v3, const T4& v4) { if (PredFunction4(v1, v2, v3, v4)) return testing::AssertionSuccess(); return testing::AssertionFailure() << e1 << " + " << e2 << " + " << e3 << " + " << e4 << " is expected to be positive, but evaluates to " << v1 + v2 + v3 + v4 << "."; } // A 4-ary predicate-formatter functor. struct PredFormatFunctor4 { template testing::AssertionResult operator()(const char* e1, const char* e2, const char* e3, const char* e4, const T1& v1, const T2& v2, const T3& v3, const T4& v4) const { return PredFormatFunction4(e1, e2, e3, e4, v1, v2, v3, v4); } }; // Tests for {EXPECT|ASSERT}_PRED_FORMAT4. class Predicate4Test : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false; n1_ = n2_ = n3_ = n4_ = 0; } virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once. EXPECT_EQ(1, n1_) << "The predicate assertion didn't evaluate argument 2 " "exactly once."; EXPECT_EQ(1, n2_) << "The predicate assertion didn't evaluate argument 3 " "exactly once."; EXPECT_EQ(1, n3_) << "The predicate assertion didn't evaluate argument 4 " "exactly once."; EXPECT_EQ(1, n4_) << "The predicate assertion didn't evaluate argument 5 " "exactly once."; // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. 
static bool finished_; static int n1_; static int n2_; static int n3_; static int n4_; }; bool Predicate4Test::expected_to_finish_; bool Predicate4Test::finished_; int Predicate4Test::n1_; int Predicate4Test::n2_; int Predicate4Test::n3_; int Predicate4Test::n4_; typedef Predicate4Test EXPECT_PRED_FORMAT4Test; typedef Predicate4Test ASSERT_PRED_FORMAT4Test; typedef Predicate4Test EXPECT_PRED4Test; typedef Predicate4Test ASSERT_PRED4Test; // Tests a successful EXPECT_PRED4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED4Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED4(PredFunction4Int, ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful EXPECT_PRED4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED4Test, FunctionOnUserTypeSuccess) { EXPECT_PRED4(PredFunction4Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a successful EXPECT_PRED4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED4Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED4(PredFunctor4(), ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful EXPECT_PRED4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED4Test, FunctorOnUserTypeSuccess) { EXPECT_PRED4(PredFunctor4(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a failed EXPECT_PRED4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED4Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED4(PredFunction4Int, n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED4Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED4(PredFunction4Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED4Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED4(PredFunctor4(), n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED4Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED4(PredFunctor4(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED4Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED4(PredFunction4Int, ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful ASSERT_PRED4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED4Test, FunctionOnUserTypeSuccess) { ASSERT_PRED4(PredFunction4Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a successful ASSERT_PRED4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED4Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED4(PredFunctor4(), ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful ASSERT_PRED4 where the // predicate-formatter is a functor on a user-defined type (Bool). 
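// ---------------------------------------------------------------------------
// Illustrative sketch (editorial addition, not part of the bundled gtest
// sources): the ++n?_ / n?_++ arguments above, checked in each fixture's
// TearDown(), verify that every predicate assertion evaluates each argument
// exactly once.  The same guarantee can be demonstrated directly from client
// code; the names below are hypothetical.
// ---------------------------------------------------------------------------
#include "gtest/gtest.h"

namespace sketch {

bool IsNonNegative(int n) { return n >= 0; }

TEST(SketchEvaluationTest, ArgumentsAreEvaluatedExactlyOnce) {
  int calls = 0;
  EXPECT_PRED1(IsNonNegative, ++calls);  // the argument expression runs once
  EXPECT_EQ(1, calls);
}

}  // namespace sketch
// ---------------------------------------------------------------------------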
TEST_F(ASSERT_PRED4Test, FunctorOnUserTypeSuccess) { ASSERT_PRED4(PredFunctor4(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a failed ASSERT_PRED4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED4Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED4(PredFunction4Int, n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED4Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED4(PredFunction4Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED4Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED4(PredFunctor4(), n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED4Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED4(PredFunctor4(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a successful EXPECT_PRED_FORMAT4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT4Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT4(PredFormatFunction4, ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT4Test, FunctionOnUserTypeSuccess) { EXPECT_PRED_FORMAT4(PredFormatFunction4, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT4Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT4(PredFormatFunctor4(), ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT4Test, FunctorOnUserTypeSuccess) { EXPECT_PRED_FORMAT4(PredFormatFunctor4(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a failed EXPECT_PRED_FORMAT4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT4Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(PredFormatFunction4, n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT4Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(PredFormatFunction4, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT4 where the // predicate-formatter is a functor on a built-in type (int). 
TEST_F(EXPECT_PRED_FORMAT4Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(PredFormatFunctor4(), n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT4Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(PredFormatFunctor4(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED_FORMAT4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT4Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT4(PredFormatFunction4, ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT4Test, FunctionOnUserTypeSuccess) { ASSERT_PRED_FORMAT4(PredFormatFunction4, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT4Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT4(PredFormatFunctor4(), ++n1_, ++n2_, ++n3_, ++n4_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT4Test, FunctorOnUserTypeSuccess) { ASSERT_PRED_FORMAT4(PredFormatFunctor4(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_)); finished_ = true; } // Tests a failed ASSERT_PRED_FORMAT4 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT4Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT4(PredFormatFunction4, n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT4 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT4Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT4(PredFormatFunction4, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT4 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT4Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT4(PredFormatFunctor4(), n1_++, n2_++, n3_++, n4_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT4 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT4Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT4(PredFormatFunctor4(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++)); finished_ = true; }, ""); } // Sample functions/functors for testing 5-ary predicate assertions. // A 5-ary predicate function. template bool PredFunction5(T1 v1, T2 v2, T3 v3, T4 v4, T5 v5) { return v1 + v2 + v3 + v4 + v5 > 0; } // The following two functions are needed to circumvent a bug in // gcc 2.95.3, which sometimes has problem with the above template // function. 
bool PredFunction5Int(int v1, int v2, int v3, int v4, int v5) { return v1 + v2 + v3 + v4 + v5 > 0; } bool PredFunction5Bool(Bool v1, Bool v2, Bool v3, Bool v4, Bool v5) { return v1 + v2 + v3 + v4 + v5 > 0; } // A 5-ary predicate functor. struct PredFunctor5 { template bool operator()(const T1& v1, const T2& v2, const T3& v3, const T4& v4, const T5& v5) { return v1 + v2 + v3 + v4 + v5 > 0; } }; // A 5-ary predicate-formatter function. template testing::AssertionResult PredFormatFunction5(const char* e1, const char* e2, const char* e3, const char* e4, const char* e5, const T1& v1, const T2& v2, const T3& v3, const T4& v4, const T5& v5) { if (PredFunction5(v1, v2, v3, v4, v5)) return testing::AssertionSuccess(); return testing::AssertionFailure() << e1 << " + " << e2 << " + " << e3 << " + " << e4 << " + " << e5 << " is expected to be positive, but evaluates to " << v1 + v2 + v3 + v4 + v5 << "."; } // A 5-ary predicate-formatter functor. struct PredFormatFunctor5 { template testing::AssertionResult operator()(const char* e1, const char* e2, const char* e3, const char* e4, const char* e5, const T1& v1, const T2& v2, const T3& v3, const T4& v4, const T5& v5) const { return PredFormatFunction5(e1, e2, e3, e4, e5, v1, v2, v3, v4, v5); } }; // Tests for {EXPECT|ASSERT}_PRED_FORMAT5. class Predicate5Test : public testing::Test { protected: virtual void SetUp() { expected_to_finish_ = true; finished_ = false; n1_ = n2_ = n3_ = n4_ = n5_ = 0; } virtual void TearDown() { // Verifies that each of the predicate's arguments was evaluated // exactly once. EXPECT_EQ(1, n1_) << "The predicate assertion didn't evaluate argument 2 " "exactly once."; EXPECT_EQ(1, n2_) << "The predicate assertion didn't evaluate argument 3 " "exactly once."; EXPECT_EQ(1, n3_) << "The predicate assertion didn't evaluate argument 4 " "exactly once."; EXPECT_EQ(1, n4_) << "The predicate assertion didn't evaluate argument 5 " "exactly once."; EXPECT_EQ(1, n5_) << "The predicate assertion didn't evaluate argument 6 " "exactly once."; // Verifies that the control flow in the test function is expected. if (expected_to_finish_ && !finished_) { FAIL() << "The predicate assertion unexpactedly aborted the test."; } else if (!expected_to_finish_ && finished_) { FAIL() << "The failed predicate assertion didn't abort the test " "as expected."; } } // true iff the test function is expected to run to finish. static bool expected_to_finish_; // true iff the test function did run to finish. static bool finished_; static int n1_; static int n2_; static int n3_; static int n4_; static int n5_; }; bool Predicate5Test::expected_to_finish_; bool Predicate5Test::finished_; int Predicate5Test::n1_; int Predicate5Test::n2_; int Predicate5Test::n3_; int Predicate5Test::n4_; int Predicate5Test::n5_; typedef Predicate5Test EXPECT_PRED_FORMAT5Test; typedef Predicate5Test ASSERT_PRED_FORMAT5Test; typedef Predicate5Test EXPECT_PRED5Test; typedef Predicate5Test ASSERT_PRED5Test; // Tests a successful EXPECT_PRED5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED5Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED5(PredFunction5Int, ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful EXPECT_PRED5 where the // predicate-formatter is a function on a user-defined type (Bool). 
TEST_F(EXPECT_PRED5Test, FunctionOnUserTypeSuccess) { EXPECT_PRED5(PredFunction5Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a successful EXPECT_PRED5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED5Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED5(PredFunctor5(), ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful EXPECT_PRED5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED5Test, FunctorOnUserTypeSuccess) { EXPECT_PRED5(PredFunctor5(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a failed EXPECT_PRED5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED5Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED5(PredFunction5Int, n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED5Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED5(PredFunction5Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED5Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED5(PredFunctor5(), n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED5Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED5(PredFunctor5(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED5Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED5(PredFunction5Int, ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful ASSERT_PRED5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED5Test, FunctionOnUserTypeSuccess) { ASSERT_PRED5(PredFunction5Bool, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a successful ASSERT_PRED5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED5Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED5(PredFunctor5(), ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful ASSERT_PRED5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED5Test, FunctorOnUserTypeSuccess) { ASSERT_PRED5(PredFunctor5(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a failed ASSERT_PRED5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED5Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED5(PredFunction5Int, n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED5 where the // predicate-formatter is a function on a user-defined type (Bool). 
TEST_F(ASSERT_PRED5Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED5(PredFunction5Bool, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED5Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED5(PredFunctor5(), n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED5Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED5(PredFunctor5(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a successful EXPECT_PRED_FORMAT5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT5Test, FunctionOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT5(PredFormatFunction5, ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT5Test, FunctionOnUserTypeSuccess) { EXPECT_PRED_FORMAT5(PredFormatFunction5, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT5Test, FunctorOnBuiltInTypeSuccess) { EXPECT_PRED_FORMAT5(PredFormatFunctor5(), ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful EXPECT_PRED_FORMAT5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT5Test, FunctorOnUserTypeSuccess) { EXPECT_PRED_FORMAT5(PredFormatFunctor5(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a failed EXPECT_PRED_FORMAT5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT5Test, FunctionOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT5(PredFormatFunction5, n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT5Test, FunctionOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT5(PredFormatFunction5, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(EXPECT_PRED_FORMAT5Test, FunctorOnBuiltInTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT5(PredFormatFunctor5(), n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed EXPECT_PRED_FORMAT5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(EXPECT_PRED_FORMAT5Test, FunctorOnUserTypeFailure) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT5(PredFormatFunctor5(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a successful ASSERT_PRED_FORMAT5 where the // predicate-formatter is a function on a built-in type (int). 
TEST_F(ASSERT_PRED_FORMAT5Test, FunctionOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT5(PredFormatFunction5, ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT5Test, FunctionOnUserTypeSuccess) { ASSERT_PRED_FORMAT5(PredFormatFunction5, Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT5Test, FunctorOnBuiltInTypeSuccess) { ASSERT_PRED_FORMAT5(PredFormatFunctor5(), ++n1_, ++n2_, ++n3_, ++n4_, ++n5_); finished_ = true; } // Tests a successful ASSERT_PRED_FORMAT5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT5Test, FunctorOnUserTypeSuccess) { ASSERT_PRED_FORMAT5(PredFormatFunctor5(), Bool(++n1_), Bool(++n2_), Bool(++n3_), Bool(++n4_), Bool(++n5_)); finished_ = true; } // Tests a failed ASSERT_PRED_FORMAT5 where the // predicate-formatter is a function on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT5Test, FunctionOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT5(PredFormatFunction5, n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT5 where the // predicate-formatter is a function on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT5Test, FunctionOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT5(PredFormatFunction5, Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT5 where the // predicate-formatter is a functor on a built-in type (int). TEST_F(ASSERT_PRED_FORMAT5Test, FunctorOnBuiltInTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT5(PredFormatFunctor5(), n1_++, n2_++, n3_++, n4_++, n5_++); finished_ = true; }, ""); } // Tests a failed ASSERT_PRED_FORMAT5 where the // predicate-formatter is a functor on a user-defined type (Bool). TEST_F(ASSERT_PRED_FORMAT5Test, FunctorOnUserTypeFailure) { expected_to_finish_ = false; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT5(PredFormatFunctor5(), Bool(n1_++), Bool(n2_++), Bool(n3_++), Bool(n4_++), Bool(n5_++)); finished_ = true; }, ""); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_premature_exit_test.cc000066400000000000000000000112441346026536000250650ustar00rootroot00000000000000// Copyright 2013, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Tests that Google Test manipulates the premature-exit-detection // file correctly. #include #include "gtest/gtest.h" using ::testing::InitGoogleTest; using ::testing::Test; using ::testing::internal::posix::GetEnv; using ::testing::internal::posix::Stat; using ::testing::internal::posix::StatStruct; namespace { // Is the TEST_PREMATURE_EXIT_FILE environment variable expected to be // set? const bool kTestPrematureExitFileEnvVarShouldBeSet = false; class PrematureExitTest : public Test { public: // Returns true iff the given file exists. static bool FileExists(const char* filepath) { StatStruct stat; return Stat(filepath, &stat) == 0; } protected: PrematureExitTest() { premature_exit_file_path_ = GetEnv("TEST_PREMATURE_EXIT_FILE"); // Normalize NULL to "" for ease of handling. if (premature_exit_file_path_ == NULL) { premature_exit_file_path_ = ""; } } // Returns true iff the premature-exit file exists. bool PrematureExitFileExists() const { return FileExists(premature_exit_file_path_); } const char* premature_exit_file_path_; }; typedef PrematureExitTest PrematureExitDeathTest; // Tests that: // - the premature-exit file exists during the execution of a // death test (EXPECT_DEATH*), and // - a death test doesn't interfere with the main test process's // handling of the premature-exit file. TEST_F(PrematureExitDeathTest, FileExistsDuringExecutionOfDeathTest) { if (*premature_exit_file_path_ == '\0') { return; } EXPECT_DEATH_IF_SUPPORTED({ // If the file exists, crash the process such that the main test // process will catch the (expected) crash and report a success; // otherwise don't crash, which will cause the main test process // to report that the death test has failed. if (PrematureExitFileExists()) { exit(1); } }, ""); } // Tests that TEST_PREMATURE_EXIT_FILE is set where it's expected to // be set. TEST_F(PrematureExitTest, TestPrematureExitFileEnvVarIsSet) { if (kTestPrematureExitFileEnvVarShouldBeSet) { const char* const filepath = GetEnv("TEST_PREMATURE_EXIT_FILE"); ASSERT_TRUE(filepath != NULL); ASSERT_NE(*filepath, '\0'); } } // Tests that the premature-exit file exists during the execution of a // normal (non-death) test. TEST_F(PrematureExitTest, PrematureExitFileExistsDuringTestExecution) { if (*premature_exit_file_path_ == '\0') { return; } EXPECT_TRUE(PrematureExitFileExists()) << " file " << premature_exit_file_path_ << " should exist during test execution, but doesn't."; } } // namespace int main(int argc, char **argv) { InitGoogleTest(&argc, argv); const int exit_code = RUN_ALL_TESTS(); // Test that the premature-exit file is deleted upon return from // RUN_ALL_TESTS(). 
const char* const filepath = GetEnv("TEST_PREMATURE_EXIT_FILE"); if (filepath != NULL && *filepath != '\0') { if (PrematureExitTest::FileExists(filepath)) { printf( "File %s shouldn't exist after the test program finishes, but does.", filepath); return 1; } } return exit_code; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_prod_test.cc000066400000000000000000000042411346026536000227730ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // Unit test for include/gtest/gtest_prod.h. #include "gtest/gtest.h" #include "test/production.h" // Tests that private members can be accessed from a TEST declared as // a friend of the class. TEST(PrivateCodeTest, CanAccessPrivateMembers) { PrivateCode a; EXPECT_EQ(0, a.x_); a.set_x(1); EXPECT_EQ(1, a.x_); } typedef testing::Test PrivateCodeFixtureTest; // Tests that private members can be accessed from a TEST_F declared // as a friend of the class. TEST_F(PrivateCodeFixtureTest, CanAccessPrivateMembers) { PrivateCode a; EXPECT_EQ(0, a.x_); a.set_x(2); EXPECT_EQ(2, a.x_); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_repeat_test.cc000066400000000000000000000200011346026536000232770ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Tests the --gtest_repeat=number flag. #include #include #include "gtest/gtest.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { GTEST_DECLARE_string_(death_test_style); GTEST_DECLARE_string_(filter); GTEST_DECLARE_int32_(repeat); } // namespace testing using testing::GTEST_FLAG(death_test_style); using testing::GTEST_FLAG(filter); using testing::GTEST_FLAG(repeat); namespace { // We need this when we are testing Google Test itself and therefore // cannot use Google Test assertions. #define GTEST_CHECK_INT_EQ_(expected, actual) \ do {\ const int expected_val = (expected);\ const int actual_val = (actual);\ if (::testing::internal::IsTrue(expected_val != actual_val)) {\ ::std::cout << "Value of: " #actual "\n"\ << " Actual: " << actual_val << "\n"\ << "Expected: " #expected "\n"\ << "Which is: " << expected_val << "\n";\ ::testing::internal::posix::Abort();\ }\ } while (::testing::internal::AlwaysFalse()) // Used for verifying that global environment set-up and tear-down are // inside the gtest_repeat loop. int g_environment_set_up_count = 0; int g_environment_tear_down_count = 0; class MyEnvironment : public testing::Environment { public: MyEnvironment() {} virtual void SetUp() { g_environment_set_up_count++; } virtual void TearDown() { g_environment_tear_down_count++; } }; // A test that should fail. int g_should_fail_count = 0; TEST(FooTest, ShouldFail) { g_should_fail_count++; EXPECT_EQ(0, 1) << "Expected failure."; } // A test that should pass. int g_should_pass_count = 0; TEST(FooTest, ShouldPass) { g_should_pass_count++; } // A test that contains a thread-safe death test and a fast death // test. It should pass. int g_death_test_count = 0; TEST(BarDeathTest, ThreadSafeAndFast) { g_death_test_count++; GTEST_FLAG(death_test_style) = "threadsafe"; EXPECT_DEATH_IF_SUPPORTED(::testing::internal::posix::Abort(), ""); GTEST_FLAG(death_test_style) = "fast"; EXPECT_DEATH_IF_SUPPORTED(::testing::internal::posix::Abort(), ""); } #if GTEST_HAS_PARAM_TEST int g_param_test_count = 0; const int kNumberOfParamTests = 10; class MyParamTest : public testing::TestWithParam {}; TEST_P(MyParamTest, ShouldPass) { // TODO(vladl@google.com): Make parameter value checking robust // WRT order of tests. 
GTEST_CHECK_INT_EQ_(g_param_test_count % kNumberOfParamTests, GetParam()); g_param_test_count++; } INSTANTIATE_TEST_CASE_P(MyParamSequence, MyParamTest, testing::Range(0, kNumberOfParamTests)); #endif // GTEST_HAS_PARAM_TEST // Resets the count for each test. void ResetCounts() { g_environment_set_up_count = 0; g_environment_tear_down_count = 0; g_should_fail_count = 0; g_should_pass_count = 0; g_death_test_count = 0; #if GTEST_HAS_PARAM_TEST g_param_test_count = 0; #endif // GTEST_HAS_PARAM_TEST } // Checks that the count for each test is expected. void CheckCounts(int expected) { GTEST_CHECK_INT_EQ_(expected, g_environment_set_up_count); GTEST_CHECK_INT_EQ_(expected, g_environment_tear_down_count); GTEST_CHECK_INT_EQ_(expected, g_should_fail_count); GTEST_CHECK_INT_EQ_(expected, g_should_pass_count); GTEST_CHECK_INT_EQ_(expected, g_death_test_count); #if GTEST_HAS_PARAM_TEST GTEST_CHECK_INT_EQ_(expected * kNumberOfParamTests, g_param_test_count); #endif // GTEST_HAS_PARAM_TEST } // Tests the behavior of Google Test when --gtest_repeat is not specified. void TestRepeatUnspecified() { ResetCounts(); GTEST_CHECK_INT_EQ_(1, RUN_ALL_TESTS()); CheckCounts(1); } // Tests the behavior of Google Test when --gtest_repeat has the given value. void TestRepeat(int repeat) { GTEST_FLAG(repeat) = repeat; ResetCounts(); GTEST_CHECK_INT_EQ_(repeat > 0 ? 1 : 0, RUN_ALL_TESTS()); CheckCounts(repeat); } // Tests using --gtest_repeat when --gtest_filter specifies an empty // set of tests. void TestRepeatWithEmptyFilter(int repeat) { GTEST_FLAG(repeat) = repeat; GTEST_FLAG(filter) = "None"; ResetCounts(); GTEST_CHECK_INT_EQ_(0, RUN_ALL_TESTS()); CheckCounts(0); } // Tests using --gtest_repeat when --gtest_filter specifies a set of // successful tests. void TestRepeatWithFilterForSuccessfulTests(int repeat) { GTEST_FLAG(repeat) = repeat; GTEST_FLAG(filter) = "*-*ShouldFail"; ResetCounts(); GTEST_CHECK_INT_EQ_(0, RUN_ALL_TESTS()); GTEST_CHECK_INT_EQ_(repeat, g_environment_set_up_count); GTEST_CHECK_INT_EQ_(repeat, g_environment_tear_down_count); GTEST_CHECK_INT_EQ_(0, g_should_fail_count); GTEST_CHECK_INT_EQ_(repeat, g_should_pass_count); GTEST_CHECK_INT_EQ_(repeat, g_death_test_count); #if GTEST_HAS_PARAM_TEST GTEST_CHECK_INT_EQ_(repeat * kNumberOfParamTests, g_param_test_count); #endif // GTEST_HAS_PARAM_TEST } // Tests using --gtest_repeat when --gtest_filter specifies a set of // failed tests. void TestRepeatWithFilterForFailedTests(int repeat) { GTEST_FLAG(repeat) = repeat; GTEST_FLAG(filter) = "*ShouldFail"; ResetCounts(); GTEST_CHECK_INT_EQ_(1, RUN_ALL_TESTS()); GTEST_CHECK_INT_EQ_(repeat, g_environment_set_up_count); GTEST_CHECK_INT_EQ_(repeat, g_environment_tear_down_count); GTEST_CHECK_INT_EQ_(repeat, g_should_fail_count); GTEST_CHECK_INT_EQ_(0, g_should_pass_count); GTEST_CHECK_INT_EQ_(0, g_death_test_count); #if GTEST_HAS_PARAM_TEST GTEST_CHECK_INT_EQ_(0, g_param_test_count); #endif // GTEST_HAS_PARAM_TEST } } // namespace int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); testing::AddGlobalTestEnvironment(new MyEnvironment); TestRepeatUnspecified(); TestRepeat(0); TestRepeat(1); TestRepeat(5); TestRepeatWithEmptyFilter(2); TestRepeatWithEmptyFilter(3); TestRepeatWithFilterForSuccessfulTests(3); TestRepeatWithFilterForFailedTests(4); // It would be nice to verify that the tests indeed loop forever // when GTEST_FLAG(repeat) is negative, but this test will be quite // complicated to write. 
Since this flag is for interactive // debugging only and doesn't affect the normal test result, such a // test would be an overkill. printf("PASS\n"); return 0; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_shuffle_test.py000077500000000000000000000304051346026536000235320ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2009 Google Inc. All Rights Reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Verifies that test shuffling works.""" __author__ = 'wan@google.com (Zhanyong Wan)' import os import gtest_test_utils # Command to run the gtest_shuffle_test_ program. COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_shuffle_test_') # The environment variables for test sharding. TOTAL_SHARDS_ENV_VAR = 'GTEST_TOTAL_SHARDS' SHARD_INDEX_ENV_VAR = 'GTEST_SHARD_INDEX' TEST_FILTER = 'A*.A:A*.B:C*' ALL_TESTS = [] ACTIVE_TESTS = [] FILTERED_TESTS = [] SHARDED_TESTS = [] SHUFFLED_ALL_TESTS = [] SHUFFLED_ACTIVE_TESTS = [] SHUFFLED_FILTERED_TESTS = [] SHUFFLED_SHARDED_TESTS = [] def AlsoRunDisabledTestsFlag(): return '--gtest_also_run_disabled_tests' def FilterFlag(test_filter): return '--gtest_filter=%s' % (test_filter,) def RepeatFlag(n): return '--gtest_repeat=%s' % (n,) def ShuffleFlag(): return '--gtest_shuffle' def RandomSeedFlag(n): return '--gtest_random_seed=%s' % (n,) def RunAndReturnOutput(extra_env, args): """Runs the test program and returns its output.""" environ_copy = os.environ.copy() environ_copy.update(extra_env) return gtest_test_utils.Subprocess([COMMAND] + args, env=environ_copy).output def GetTestsForAllIterations(extra_env, args): """Runs the test program and returns a list of test lists. Args: extra_env: a map from environment variables to their values args: command line flags to pass to gtest_shuffle_test_ Returns: A list where the i-th element is the list of tests run in the i-th test iteration. 
""" test_iterations = [] for line in RunAndReturnOutput(extra_env, args).split('\n'): if line.startswith('----'): tests = [] test_iterations.append(tests) elif line.strip(): tests.append(line.strip()) # 'TestCaseName.TestName' return test_iterations def GetTestCases(tests): """Returns a list of test cases in the given full test names. Args: tests: a list of full test names Returns: A list of test cases from 'tests', in their original order. Consecutive duplicates are removed. """ test_cases = [] for test in tests: test_case = test.split('.')[0] if not test_case in test_cases: test_cases.append(test_case) return test_cases def CalculateTestLists(): """Calculates the list of tests run under different flags.""" if not ALL_TESTS: ALL_TESTS.extend( GetTestsForAllIterations({}, [AlsoRunDisabledTestsFlag()])[0]) if not ACTIVE_TESTS: ACTIVE_TESTS.extend(GetTestsForAllIterations({}, [])[0]) if not FILTERED_TESTS: FILTERED_TESTS.extend( GetTestsForAllIterations({}, [FilterFlag(TEST_FILTER)])[0]) if not SHARDED_TESTS: SHARDED_TESTS.extend( GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3', SHARD_INDEX_ENV_VAR: '1'}, [])[0]) if not SHUFFLED_ALL_TESTS: SHUFFLED_ALL_TESTS.extend(GetTestsForAllIterations( {}, [AlsoRunDisabledTestsFlag(), ShuffleFlag(), RandomSeedFlag(1)])[0]) if not SHUFFLED_ACTIVE_TESTS: SHUFFLED_ACTIVE_TESTS.extend(GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(1)])[0]) if not SHUFFLED_FILTERED_TESTS: SHUFFLED_FILTERED_TESTS.extend(GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(1), FilterFlag(TEST_FILTER)])[0]) if not SHUFFLED_SHARDED_TESTS: SHUFFLED_SHARDED_TESTS.extend( GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3', SHARD_INDEX_ENV_VAR: '1'}, [ShuffleFlag(), RandomSeedFlag(1)])[0]) class GTestShuffleUnitTest(gtest_test_utils.TestCase): """Tests test shuffling.""" def setUp(self): CalculateTestLists() def testShufflePreservesNumberOfTests(self): self.assertEqual(len(ALL_TESTS), len(SHUFFLED_ALL_TESTS)) self.assertEqual(len(ACTIVE_TESTS), len(SHUFFLED_ACTIVE_TESTS)) self.assertEqual(len(FILTERED_TESTS), len(SHUFFLED_FILTERED_TESTS)) self.assertEqual(len(SHARDED_TESTS), len(SHUFFLED_SHARDED_TESTS)) def testShuffleChangesTestOrder(self): self.assert_(SHUFFLED_ALL_TESTS != ALL_TESTS, SHUFFLED_ALL_TESTS) self.assert_(SHUFFLED_ACTIVE_TESTS != ACTIVE_TESTS, SHUFFLED_ACTIVE_TESTS) self.assert_(SHUFFLED_FILTERED_TESTS != FILTERED_TESTS, SHUFFLED_FILTERED_TESTS) self.assert_(SHUFFLED_SHARDED_TESTS != SHARDED_TESTS, SHUFFLED_SHARDED_TESTS) def testShuffleChangesTestCaseOrder(self): self.assert_(GetTestCases(SHUFFLED_ALL_TESTS) != GetTestCases(ALL_TESTS), GetTestCases(SHUFFLED_ALL_TESTS)) self.assert_( GetTestCases(SHUFFLED_ACTIVE_TESTS) != GetTestCases(ACTIVE_TESTS), GetTestCases(SHUFFLED_ACTIVE_TESTS)) self.assert_( GetTestCases(SHUFFLED_FILTERED_TESTS) != GetTestCases(FILTERED_TESTS), GetTestCases(SHUFFLED_FILTERED_TESTS)) self.assert_( GetTestCases(SHUFFLED_SHARDED_TESTS) != GetTestCases(SHARDED_TESTS), GetTestCases(SHUFFLED_SHARDED_TESTS)) def testShuffleDoesNotRepeatTest(self): for test in SHUFFLED_ALL_TESTS: self.assertEqual(1, SHUFFLED_ALL_TESTS.count(test), '%s appears more than once' % (test,)) for test in SHUFFLED_ACTIVE_TESTS: self.assertEqual(1, SHUFFLED_ACTIVE_TESTS.count(test), '%s appears more than once' % (test,)) for test in SHUFFLED_FILTERED_TESTS: self.assertEqual(1, SHUFFLED_FILTERED_TESTS.count(test), '%s appears more than once' % (test,)) for test in SHUFFLED_SHARDED_TESTS: self.assertEqual(1, SHUFFLED_SHARDED_TESTS.count(test), '%s 
appears more than once' % (test,)) def testShuffleDoesNotCreateNewTest(self): for test in SHUFFLED_ALL_TESTS: self.assert_(test in ALL_TESTS, '%s is an invalid test' % (test,)) for test in SHUFFLED_ACTIVE_TESTS: self.assert_(test in ACTIVE_TESTS, '%s is an invalid test' % (test,)) for test in SHUFFLED_FILTERED_TESTS: self.assert_(test in FILTERED_TESTS, '%s is an invalid test' % (test,)) for test in SHUFFLED_SHARDED_TESTS: self.assert_(test in SHARDED_TESTS, '%s is an invalid test' % (test,)) def testShuffleIncludesAllTests(self): for test in ALL_TESTS: self.assert_(test in SHUFFLED_ALL_TESTS, '%s is missing' % (test,)) for test in ACTIVE_TESTS: self.assert_(test in SHUFFLED_ACTIVE_TESTS, '%s is missing' % (test,)) for test in FILTERED_TESTS: self.assert_(test in SHUFFLED_FILTERED_TESTS, '%s is missing' % (test,)) for test in SHARDED_TESTS: self.assert_(test in SHUFFLED_SHARDED_TESTS, '%s is missing' % (test,)) def testShuffleLeavesDeathTestsAtFront(self): non_death_test_found = False for test in SHUFFLED_ACTIVE_TESTS: if 'DeathTest.' in test: self.assert_(not non_death_test_found, '%s appears after a non-death test' % (test,)) else: non_death_test_found = True def _VerifyTestCasesDoNotInterleave(self, tests): test_cases = [] for test in tests: [test_case, _] = test.split('.') if test_cases and test_cases[-1] != test_case: test_cases.append(test_case) self.assertEqual(1, test_cases.count(test_case), 'Test case %s is not grouped together in %s' % (test_case, tests)) def testShuffleDoesNotInterleaveTestCases(self): self._VerifyTestCasesDoNotInterleave(SHUFFLED_ALL_TESTS) self._VerifyTestCasesDoNotInterleave(SHUFFLED_ACTIVE_TESTS) self._VerifyTestCasesDoNotInterleave(SHUFFLED_FILTERED_TESTS) self._VerifyTestCasesDoNotInterleave(SHUFFLED_SHARDED_TESTS) def testShuffleRestoresOrderAfterEachIteration(self): # Get the test lists in all 3 iterations, using random seed 1, 2, # and 3 respectively. Google Test picks a different seed in each # iteration, and this test depends on the current implementation # picking successive numbers. This dependency is not ideal, but # makes the test much easier to write. [tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = ( GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)])) # Make sure running the tests with random seed 1 gets the same # order as in iteration 1 above. [tests_with_seed1] = GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(1)]) self.assertEqual(tests_in_iteration1, tests_with_seed1) # Make sure running the tests with random seed 2 gets the same # order as in iteration 2 above. Success means that Google Test # correctly restores the test order before re-shuffling at the # beginning of iteration 2. [tests_with_seed2] = GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(2)]) self.assertEqual(tests_in_iteration2, tests_with_seed2) # Make sure running the tests with random seed 3 gets the same # order as in iteration 3 above. Success means that Google Test # correctly restores the test order before re-shuffling at the # beginning of iteration 3. 
[tests_with_seed3] = GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(3)]) self.assertEqual(tests_in_iteration3, tests_with_seed3) def testShuffleGeneratesNewOrderInEachIteration(self): [tests_in_iteration1, tests_in_iteration2, tests_in_iteration3] = ( GetTestsForAllIterations( {}, [ShuffleFlag(), RandomSeedFlag(1), RepeatFlag(3)])) self.assert_(tests_in_iteration1 != tests_in_iteration2, tests_in_iteration1) self.assert_(tests_in_iteration1 != tests_in_iteration3, tests_in_iteration1) self.assert_(tests_in_iteration2 != tests_in_iteration3, tests_in_iteration2) def testShuffleShardedTestsPreservesPartition(self): # If we run M tests on N shards, the same M tests should be run in # total, regardless of the random seeds used by the shards. [tests1] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3', SHARD_INDEX_ENV_VAR: '0'}, [ShuffleFlag(), RandomSeedFlag(1)]) [tests2] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3', SHARD_INDEX_ENV_VAR: '1'}, [ShuffleFlag(), RandomSeedFlag(20)]) [tests3] = GetTestsForAllIterations({TOTAL_SHARDS_ENV_VAR: '3', SHARD_INDEX_ENV_VAR: '2'}, [ShuffleFlag(), RandomSeedFlag(25)]) sorted_sharded_tests = tests1 + tests2 + tests3 sorted_sharded_tests.sort() sorted_active_tests = [] sorted_active_tests.extend(ACTIVE_TESTS) sorted_active_tests.sort() self.assertEqual(sorted_active_tests, sorted_sharded_tests) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_shuffle_test_.cc000066400000000000000000000063521346026536000236270ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Verifies that test shuffling works. 
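// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the original gtest-1.7.0 sources, and kept
// separate from the shuffle test program that follows): shuffling can also be
// enabled programmatically instead of via the --gtest_shuffle and
// --gtest_random_seed command-line flags that the Python driver above passes.
// A minimal, self-contained test binary doing so might look like this.
#include "gtest/gtest.h"

TEST(ShuffleExampleTest, First)  { EXPECT_TRUE(true); }
TEST(ShuffleExampleTest, Second) { EXPECT_TRUE(true); }

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Roughly equivalent to passing --gtest_shuffle and --gtest_random_seed=1;
  // a seed of 0 would make Google Test derive the seed from the current time.
  testing::GTEST_FLAG(shuffle) = true;
  testing::GTEST_FLAG(random_seed) = 1;
  return RUN_ALL_TESTS();
}
// ---------------------------------------------------------------------------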
#include "gtest/gtest.h" namespace { using ::testing::EmptyTestEventListener; using ::testing::InitGoogleTest; using ::testing::Message; using ::testing::Test; using ::testing::TestEventListeners; using ::testing::TestInfo; using ::testing::UnitTest; using ::testing::internal::scoped_ptr; // The test methods are empty, as the sole purpose of this program is // to print the test names before/after shuffling. class A : public Test {}; TEST_F(A, A) {} TEST_F(A, B) {} TEST(ADeathTest, A) {} TEST(ADeathTest, B) {} TEST(ADeathTest, C) {} TEST(B, A) {} TEST(B, B) {} TEST(B, C) {} TEST(B, DISABLED_D) {} TEST(B, DISABLED_E) {} TEST(BDeathTest, A) {} TEST(BDeathTest, B) {} TEST(C, A) {} TEST(C, B) {} TEST(C, C) {} TEST(C, DISABLED_D) {} TEST(CDeathTest, A) {} TEST(DISABLED_D, A) {} TEST(DISABLED_D, DISABLED_B) {} // This printer prints the full test names only, starting each test // iteration with a "----" marker. class TestNamePrinter : public EmptyTestEventListener { public: virtual void OnTestIterationStart(const UnitTest& /* unit_test */, int /* iteration */) { printf("----\n"); } virtual void OnTestStart(const TestInfo& test_info) { printf("%s.%s\n", test_info.test_case_name(), test_info.name()); } }; } // namespace int main(int argc, char **argv) { InitGoogleTest(&argc, argv); // Replaces the default printer with TestNamePrinter, which prints // the test name only. TestEventListeners& listeners = UnitTest::GetInstance()->listeners(); delete listeners.Release(listeners.default_result_printer()); listeners.Append(new TestNamePrinter); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_sole_header_test.cc000066400000000000000000000042551346026536000243060ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: mheule@google.com (Markus Heule) // // This test verifies that it's possible to use Google Test by including // the gtest.h header file alone. 
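// ---------------------------------------------------------------------------
// Illustrative sketch (not part of the original gtest-1.7.0 sources): the
// (EXPECT|ASSERT)_NO_FATAL_FAILURE macros verified by the sole-header test
// below are typically used to check that a helper containing ASSERT_*
// statements did not produce a fatal failure. CheckHeaderVersion is a
// hypothetical helper introduced only for this example.
#include "gtest/gtest.h"

namespace no_fatal_failure_example {

void CheckHeaderVersion(int version) {
  // A failing ASSERT_* aborts only CheckHeaderVersion itself; wrapping the
  // call in ASSERT_NO_FATAL_FAILURE lets the calling test detect that fatal
  // failure and abort as well.
  ASSERT_GE(version, 1) << "unsupported header version";
}

TEST(NoFatalFailureExample, HelperIsChecked) {
  ASSERT_NO_FATAL_FAILURE(CheckHeaderVersion(2));
  EXPECT_NO_FATAL_FAILURE(CheckHeaderVersion(3));
}

}  // namespace no_fatal_failure_example
// ---------------------------------------------------------------------------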
#include "gtest/gtest.h" namespace { void Subroutine() { EXPECT_EQ(42, 42); } TEST(NoFatalFailureTest, ExpectNoFatalFailure) { EXPECT_NO_FATAL_FAILURE(;); EXPECT_NO_FATAL_FAILURE(SUCCEED()); EXPECT_NO_FATAL_FAILURE(Subroutine()); EXPECT_NO_FATAL_FAILURE({ SUCCEED(); }); } TEST(NoFatalFailureTest, AssertNoFatalFailure) { ASSERT_NO_FATAL_FAILURE(;); ASSERT_NO_FATAL_FAILURE(SUCCEED()); ASSERT_NO_FATAL_FAILURE(Subroutine()); ASSERT_NO_FATAL_FAILURE({ SUCCEED(); }); } } // namespace ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_stress_test.cc000066400000000000000000000226511346026536000233570ustar00rootroot00000000000000// Copyright 2007, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Tests that SCOPED_TRACE() and various Google Test assertions can be // used in a large number of threads concurrently. #include "gtest/gtest.h" #include #include // We must define this macro in order to #include // gtest-internal-inl.h. This is how Google Test prevents a user from // accidentally depending on its internal implementation. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ #if GTEST_IS_THREADSAFE namespace testing { namespace { using internal::Notification; using internal::TestPropertyKeyIs; using internal::ThreadWithParam; using internal::scoped_ptr; // In order to run tests in this file, for platforms where Google Test is // thread safe, implement ThreadWithParam. See the description of its API // in gtest-port.h, where it is defined for already supported platforms. // How many threads to create? 
const int kThreadCount = 50; std::string IdToKey(int id, const char* suffix) { Message key; key << "key_" << id << "_" << suffix; return key.GetString(); } std::string IdToString(int id) { Message id_message; id_message << id; return id_message.GetString(); } void ExpectKeyAndValueWereRecordedForId( const std::vector& properties, int id, const char* suffix) { TestPropertyKeyIs matches_key(IdToKey(id, suffix).c_str()); const std::vector::const_iterator property = std::find_if(properties.begin(), properties.end(), matches_key); ASSERT_TRUE(property != properties.end()) << "expecting " << suffix << " value for id " << id; EXPECT_STREQ(IdToString(id).c_str(), property->value()); } // Calls a large number of Google Test assertions, where exactly one of them // will fail. void ManyAsserts(int id) { GTEST_LOG_(INFO) << "Thread #" << id << " running..."; SCOPED_TRACE(Message() << "Thread #" << id); for (int i = 0; i < kThreadCount; i++) { SCOPED_TRACE(Message() << "Iteration #" << i); // A bunch of assertions that should succeed. EXPECT_TRUE(true); ASSERT_FALSE(false) << "This shouldn't fail."; EXPECT_STREQ("a", "a"); ASSERT_LE(5, 6); EXPECT_EQ(i, i) << "This shouldn't fail."; // RecordProperty() should interact safely with other threads as well. // The shared_key forces property updates. Test::RecordProperty(IdToKey(id, "string").c_str(), IdToString(id).c_str()); Test::RecordProperty(IdToKey(id, "int").c_str(), id); Test::RecordProperty("shared_key", IdToString(id).c_str()); // This assertion should fail kThreadCount times per thread. It // is for testing whether Google Test can handle failed assertions in a // multi-threaded context. EXPECT_LT(i, 0) << "This should always fail."; } } void CheckTestFailureCount(int expected_failures) { const TestInfo* const info = UnitTest::GetInstance()->current_test_info(); const TestResult* const result = info->result(); GTEST_CHECK_(expected_failures == result->total_part_count()) << "Logged " << result->total_part_count() << " failures " << " vs. " << expected_failures << " expected"; } // Tests using SCOPED_TRACE() and Google Test assertions in many threads // concurrently. TEST(StressTest, CanUseScopedTraceAndAssertionsInManyThreads) { { scoped_ptr > threads[kThreadCount]; Notification threads_can_start; for (int i = 0; i != kThreadCount; i++) threads[i].reset(new ThreadWithParam(&ManyAsserts, i, &threads_can_start)); threads_can_start.Notify(); // Blocks until all the threads are done. for (int i = 0; i != kThreadCount; i++) threads[i]->Join(); } // Ensures that kThreadCount*kThreadCount failures have been reported. const TestInfo* const info = UnitTest::GetInstance()->current_test_info(); const TestResult* const result = info->result(); std::vector properties; // We have no access to the TestResult's list of properties but we can // copy them one by one. for (int i = 0; i < result->test_property_count(); ++i) properties.push_back(result->GetTestProperty(i)); EXPECT_EQ(kThreadCount * 2 + 1, result->test_property_count()) << "String and int values recorded on each thread, " << "as well as one shared_key"; for (int i = 0; i < kThreadCount; ++i) { ExpectKeyAndValueWereRecordedForId(properties, i, "string"); ExpectKeyAndValueWereRecordedForId(properties, i, "int"); } CheckTestFailureCount(kThreadCount*kThreadCount); } void FailingThread(bool is_fatal) { if (is_fatal) FAIL() << "Fatal failure in some other thread. " << "(This failure is expected.)"; else ADD_FAILURE() << "Non-fatal failure in some other thread. 
" << "(This failure is expected.)"; } void GenerateFatalFailureInAnotherThread(bool is_fatal) { ThreadWithParam thread(&FailingThread, is_fatal, NULL); thread.Join(); } TEST(NoFatalFailureTest, ExpectNoFatalFailureIgnoresFailuresInOtherThreads) { EXPECT_NO_FATAL_FAILURE(GenerateFatalFailureInAnotherThread(true)); // We should only have one failure (the one from // GenerateFatalFailureInAnotherThread()), since the EXPECT_NO_FATAL_FAILURE // should succeed. CheckTestFailureCount(1); } void AssertNoFatalFailureIgnoresFailuresInOtherThreads() { ASSERT_NO_FATAL_FAILURE(GenerateFatalFailureInAnotherThread(true)); } TEST(NoFatalFailureTest, AssertNoFatalFailureIgnoresFailuresInOtherThreads) { // Using a subroutine, to make sure, that the test continues. AssertNoFatalFailureIgnoresFailuresInOtherThreads(); // We should only have one failure (the one from // GenerateFatalFailureInAnotherThread()), since the EXPECT_NO_FATAL_FAILURE // should succeed. CheckTestFailureCount(1); } TEST(FatalFailureTest, ExpectFatalFailureIgnoresFailuresInOtherThreads) { // This statement should fail, since the current thread doesn't generate a // fatal failure, only another one does. EXPECT_FATAL_FAILURE(GenerateFatalFailureInAnotherThread(true), "expected"); CheckTestFailureCount(2); } TEST(FatalFailureOnAllThreadsTest, ExpectFatalFailureOnAllThreads) { // This statement should succeed, because failures in all threads are // considered. EXPECT_FATAL_FAILURE_ON_ALL_THREADS( GenerateFatalFailureInAnotherThread(true), "expected"); CheckTestFailureCount(0); // We need to add a failure, because main() checks that there are failures. // But when only this test is run, we shouldn't have any failures. ADD_FAILURE() << "This is an expected non-fatal failure."; } TEST(NonFatalFailureTest, ExpectNonFatalFailureIgnoresFailuresInOtherThreads) { // This statement should fail, since the current thread doesn't generate a // fatal failure, only another one does. EXPECT_NONFATAL_FAILURE(GenerateFatalFailureInAnotherThread(false), "expected"); CheckTestFailureCount(2); } TEST(NonFatalFailureOnAllThreadsTest, ExpectNonFatalFailureOnAllThreads) { // This statement should succeed, because failures in all threads are // considered. EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS( GenerateFatalFailureInAnotherThread(false), "expected"); CheckTestFailureCount(0); // We need to add a failure, because main() checks that there are failures, // But when only this test is run, we shouldn't have any failures. ADD_FAILURE() << "This is an expected non-fatal failure."; } } // namespace } // namespace testing int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); const int result = RUN_ALL_TESTS(); // Expected to fail. GTEST_CHECK_(result == 1) << "RUN_ALL_TESTS() did not fail as expected"; printf("\nPASS\n"); return 0; } #else TEST(StressTest, DISABLED_ThreadSafetyTestsAreSkippedWhenGoogleTestIsNotThreadSafe) { } int main(int argc, char **argv) { testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } #endif // GTEST_IS_THREADSAFE ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_test_utils.py000077500000000000000000000250741346026536000232440ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test utilities for Google C++ Testing Framework.""" __author__ = 'wan@google.com (Zhanyong Wan)' import atexit import os import shutil import sys import tempfile import unittest _test_module = unittest # Suppresses the 'Import not at the top of the file' lint complaint. # pylint: disable-msg=C6204 try: import subprocess _SUBPROCESS_MODULE_AVAILABLE = True except: import popen2 _SUBPROCESS_MODULE_AVAILABLE = False # pylint: enable-msg=C6204 GTEST_OUTPUT_VAR_NAME = 'GTEST_OUTPUT' IS_WINDOWS = os.name == 'nt' IS_CYGWIN = os.name == 'posix' and 'CYGWIN' in os.uname()[0] # The environment variable for specifying the path to the premature-exit file. PREMATURE_EXIT_FILE_ENV_VAR = 'TEST_PREMATURE_EXIT_FILE' environ = os.environ.copy() def SetEnvVar(env_var, value): """Sets/unsets an environment variable to a given value.""" if value is not None: environ[env_var] = value elif env_var in environ: del environ[env_var] # Here we expose a class from a particular module, depending on the # environment. The comment suppresses the 'Invalid variable name' lint # complaint. TestCase = _test_module.TestCase # pylint: disable-msg=C6409 # Initially maps a flag to its default value. After # _ParseAndStripGTestFlags() is called, maps a flag to its actual value. _flag_map = {'source_dir': os.path.dirname(sys.argv[0]), 'build_dir': os.path.dirname(sys.argv[0])} _gtest_flags_are_parsed = False def _ParseAndStripGTestFlags(argv): """Parses and strips Google Test flags from argv. This is idempotent.""" # Suppresses the lint complaint about a global variable since we need it # here to maintain module-wide state. global _gtest_flags_are_parsed # pylint: disable-msg=W0603 if _gtest_flags_are_parsed: return _gtest_flags_are_parsed = True for flag in _flag_map: # The environment variable overrides the default value. if flag.upper() in os.environ: _flag_map[flag] = os.environ[flag.upper()] # The command line flag overrides the environment variable. i = 1 # Skips the program name. while i < len(argv): prefix = '--' + flag + '=' if argv[i].startswith(prefix): _flag_map[flag] = argv[i][len(prefix):] del argv[i] break else: # We don't increment i in case we just found a --gtest_* flag # and removed it from argv. 
i += 1 def GetFlag(flag): """Returns the value of the given flag.""" # In case GetFlag() is called before Main(), we always call # _ParseAndStripGTestFlags() here to make sure the --gtest_* flags # are parsed. _ParseAndStripGTestFlags(sys.argv) return _flag_map[flag] def GetSourceDir(): """Returns the absolute path of the directory where the .py files are.""" return os.path.abspath(GetFlag('source_dir')) def GetBuildDir(): """Returns the absolute path of the directory where the test binaries are.""" return os.path.abspath(GetFlag('build_dir')) _temp_dir = None def _RemoveTempDir(): if _temp_dir: shutil.rmtree(_temp_dir, ignore_errors=True) atexit.register(_RemoveTempDir) def GetTempDir(): """Returns a directory for temporary files.""" global _temp_dir if not _temp_dir: _temp_dir = tempfile.mkdtemp() return _temp_dir def GetTestExecutablePath(executable_name, build_dir=None): """Returns the absolute path of the test binary given its name. The function will print a message and abort the program if the resulting file doesn't exist. Args: executable_name: name of the test binary that the test script runs. build_dir: directory where to look for executables, by default the result of GetBuildDir(). Returns: The absolute path of the test binary. """ path = os.path.abspath(os.path.join(build_dir or GetBuildDir(), executable_name)) if (IS_WINDOWS or IS_CYGWIN) and not path.endswith('.exe'): path += '.exe' if not os.path.exists(path): message = ( 'Unable to find the test binary. Please make sure to provide path\n' 'to the binary via the --build_dir flag or the BUILD_DIR\n' 'environment variable.') print >> sys.stderr, message sys.exit(1) return path def GetExitStatus(exit_code): """Returns the argument to exit(), or -1 if exit() wasn't called. Args: exit_code: the result value of os.system(command). """ if os.name == 'nt': # On Windows, os.WEXITSTATUS() doesn't work and os.system() returns # the argument to exit() directly. return exit_code else: # On Unix, os.WEXITSTATUS() must be used to extract the exit status # from the result of os.system(). if os.WIFEXITED(exit_code): return os.WEXITSTATUS(exit_code) else: return -1 class Subprocess: def __init__(self, command, working_dir=None, capture_stderr=True, env=None): """Changes into a specified directory, if provided, and executes a command. Restores the old directory afterwards. Args: command: The command to run, in the form of sys.argv. working_dir: The directory to change into. capture_stderr: Determines whether to capture stderr in the output member or to discard it. env: Dictionary with environment to pass to the subprocess. Returns: An object that represents outcome of the executed process. It has the following attributes: terminated_by_signal True iff the child process has been terminated by a signal. signal Sygnal that terminated the child process. exited True iff the child process exited normally. exit_code The code with which the child process exited. output Child process's stdout and stderr output combined in a string. """ # The subprocess module is the preferrable way of running programs # since it is available and behaves consistently on all platforms, # including Windows. But it is only available starting in python 2.4. # In earlier python versions, we revert to the popen2 module, which is # available in python 2.0 and later but doesn't provide required # functionality (Popen4) under Windows. This allows us to support Mac # OS X 10.4 Tiger, which has python 2.3 installed. 
if _SUBPROCESS_MODULE_AVAILABLE: if capture_stderr: stderr = subprocess.STDOUT else: stderr = subprocess.PIPE p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=stderr, cwd=working_dir, universal_newlines=True, env=env) # communicate returns a tuple with the file obect for the child's # output. self.output = p.communicate()[0] self._return_code = p.returncode else: old_dir = os.getcwd() def _ReplaceEnvDict(dest, src): # Changes made by os.environ.clear are not inheritable by child # processes until Python 2.6. To produce inheritable changes we have # to delete environment items with the del statement. for key in dest.keys(): del dest[key] dest.update(src) # When 'env' is not None, backup the environment variables and replace # them with the passed 'env'. When 'env' is None, we simply use the # current 'os.environ' for compatibility with the subprocess.Popen # semantics used above. if env is not None: old_environ = os.environ.copy() _ReplaceEnvDict(os.environ, env) try: if working_dir is not None: os.chdir(working_dir) if capture_stderr: p = popen2.Popen4(command) else: p = popen2.Popen3(command) p.tochild.close() self.output = p.fromchild.read() ret_code = p.wait() finally: os.chdir(old_dir) # Restore the old environment variables # if they were replaced. if env is not None: _ReplaceEnvDict(os.environ, old_environ) # Converts ret_code to match the semantics of # subprocess.Popen.returncode. if os.WIFSIGNALED(ret_code): self._return_code = -os.WTERMSIG(ret_code) else: # os.WIFEXITED(ret_code) should return True here. self._return_code = os.WEXITSTATUS(ret_code) if self._return_code < 0: self.terminated_by_signal = True self.exited = False self.signal = -self._return_code else: self.terminated_by_signal = False self.exited = True self.exit_code = self._return_code def Main(): """Runs the unit test.""" # We must call _ParseAndStripGTestFlags() before calling # unittest.main(). Otherwise the latter will be confused by the # --gtest_* flags. _ParseAndStripGTestFlags(sys.argv) # The tested binaries should not be writing XML output files unless the # script explicitly instructs them to. # TODO(vladl@google.com): Move this into Subprocess when we implement # passing environment into it as a parameter. if GTEST_OUTPUT_VAR_NAME in os.environ: del os.environ[GTEST_OUTPUT_VAR_NAME] _test_module.main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_throw_on_failure_ex_test.cc000066400000000000000000000065641346026536000261030ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Tests Google Test's throw-on-failure mode with exceptions enabled. #include "gtest/gtest.h" #include #include #include #include // Prints the given failure message and exits the program with // non-zero. We use this instead of a Google Test assertion to // indicate a failure, as the latter is been tested and cannot be // relied on. void Fail(const char* msg) { printf("FAILURE: %s\n", msg); fflush(stdout); exit(1); } // Tests that an assertion failure throws a subclass of // std::runtime_error. void TestFailureThrowsRuntimeError() { testing::GTEST_FLAG(throw_on_failure) = true; // A successful assertion shouldn't throw. try { EXPECT_EQ(3, 3); } catch(...) { Fail("A successful assertion wrongfully threw."); } // A failed assertion should throw a subclass of std::runtime_error. try { EXPECT_EQ(2, 3) << "Expected failure"; } catch(const std::runtime_error& e) { if (strstr(e.what(), "Expected failure") != NULL) return; printf("%s", "A failed assertion did throw an exception of the right type, " "but the message is incorrect. Instead of containing \"Expected " "failure\", it is:\n"); Fail(e.what()); } catch(...) { Fail("A failed assertion threw the wrong type of exception."); } Fail("A failed assertion should've thrown but didn't."); } int main(int argc, char** argv) { testing::InitGoogleTest(&argc, argv); // We want to ensure that people can use Google Test assertions in // other testing frameworks, as long as they initialize Google Test // properly and set the thrown-on-failure mode. Therefore, we don't // use Google Test's constructs for defining and running tests // (e.g. TEST and RUN_ALL_TESTS) here. TestFailureThrowsRuntimeError(); return 0; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_throw_on_failure_test.py000077500000000000000000000132061346026536000254440ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2009, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Tests Google Test's throw-on-failure mode with exceptions disabled. This script invokes gtest_throw_on_failure_test_ (a program written with Google Test) with different environments and command line flags. """ __author__ = 'wan@google.com (Zhanyong Wan)' import os import gtest_test_utils # Constants. # The command line flag for enabling/disabling the throw-on-failure mode. THROW_ON_FAILURE = 'gtest_throw_on_failure' # Path to the gtest_throw_on_failure_test_ program, compiled with # exceptions disabled. EXE_PATH = gtest_test_utils.GetTestExecutablePath( 'gtest_throw_on_failure_test_') # Utilities. def SetEnvVar(env_var, value): """Sets an environment variable to a given value; unsets it when the given value is None. """ env_var = env_var.upper() if value is not None: os.environ[env_var] = value elif env_var in os.environ: del os.environ[env_var] def Run(command): """Runs a command; returns True/False if its exit code is/isn't 0.""" print 'Running "%s". . .' % ' '.join(command) p = gtest_test_utils.Subprocess(command) return p.exited and p.exit_code == 0 # The tests. TODO(wan@google.com): refactor the class to share common # logic with code in gtest_break_on_failure_unittest.py. class ThrowOnFailureTest(gtest_test_utils.TestCase): """Tests the throw-on-failure mode.""" def RunAndVerify(self, env_var_value, flag_value, should_fail): """Runs gtest_throw_on_failure_test_ and verifies that it does (or does not) exit with a non-zero code. Args: env_var_value: value of the GTEST_BREAK_ON_FAILURE environment variable; None if the variable should be unset. flag_value: value of the --gtest_break_on_failure flag; None if the flag should not be present. should_fail: True iff the program is expected to fail. """ SetEnvVar(THROW_ON_FAILURE, env_var_value) if env_var_value is None: env_var_value_msg = ' is not set' else: env_var_value_msg = '=' + env_var_value if flag_value is None: flag = '' elif flag_value == '0': flag = '--%s=0' % THROW_ON_FAILURE else: flag = '--%s' % THROW_ON_FAILURE command = [EXE_PATH] if flag: command.append(flag) if should_fail: should_or_not = 'should' else: should_or_not = 'should not' failed = not Run(command) SetEnvVar(THROW_ON_FAILURE, None) msg = ('when %s%s, an assertion failure in "%s" %s cause a non-zero ' 'exit code.' 
% (THROW_ON_FAILURE, env_var_value_msg, ' '.join(command), should_or_not)) self.assert_(failed == should_fail, msg) def testDefaultBehavior(self): """Tests the behavior of the default mode.""" self.RunAndVerify(env_var_value=None, flag_value=None, should_fail=False) def testThrowOnFailureEnvVar(self): """Tests using the GTEST_THROW_ON_FAILURE environment variable.""" self.RunAndVerify(env_var_value='0', flag_value=None, should_fail=False) self.RunAndVerify(env_var_value='1', flag_value=None, should_fail=True) def testThrowOnFailureFlag(self): """Tests using the --gtest_throw_on_failure flag.""" self.RunAndVerify(env_var_value=None, flag_value='0', should_fail=False) self.RunAndVerify(env_var_value=None, flag_value='1', should_fail=True) def testThrowOnFailureFlagOverridesEnvVar(self): """Tests that --gtest_throw_on_failure overrides GTEST_THROW_ON_FAILURE.""" self.RunAndVerify(env_var_value='0', flag_value='0', should_fail=False) self.RunAndVerify(env_var_value='0', flag_value='1', should_fail=True) self.RunAndVerify(env_var_value='1', flag_value='0', should_fail=False) self.RunAndVerify(env_var_value='1', flag_value='1', should_fail=True) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_throw_on_failure_test_.cc000066400000000000000000000060401346026536000255330ustar00rootroot00000000000000// Copyright 2009, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // Tests Google Test's throw-on-failure mode with exceptions disabled. // // This program must be compiled with exceptions disabled. It will be // invoked by gtest_throw_on_failure_test.py, and is expected to exit // with non-zero in the throw-on-failure mode or 0 otherwise. #include "gtest/gtest.h" #include // for fflush, fprintf, NULL, etc. #include // for exit #include // for set_terminate // This terminate handler aborts the program using exit() rather than abort(). // This avoids showing pop-ups on Windows systems and core dumps on Unix-like // ones. 
void TerminateHandler() { fprintf(stderr, "%s\n", "Unhandled C++ exception terminating the program."); fflush(NULL); exit(1); } int main(int argc, char** argv) { #if GTEST_HAS_EXCEPTIONS std::set_terminate(&TerminateHandler); #endif testing::InitGoogleTest(&argc, argv); // We want to ensure that people can use Google Test assertions in // other testing frameworks, as long as they initialize Google Test // properly and set the throw-on-failure mode. Therefore, we don't // use Google Test's constructs for defining and running tests // (e.g. TEST and RUN_ALL_TESTS) here. // In the throw-on-failure mode with exceptions disabled, this // assertion will cause the program to exit with a non-zero code. EXPECT_EQ(2, 3); // When not in the throw-on-failure mode, the control will reach // here. return 0; } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_uninitialized_test.py000077500000000000000000000046601346026536000247520ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Verifies that Google Test warns the user when not initialized properly.""" __author__ = 'wan@google.com (Zhanyong Wan)' import gtest_test_utils COMMAND = gtest_test_utils.GetTestExecutablePath('gtest_uninitialized_test_') def Assert(condition): if not condition: raise AssertionError def AssertEq(expected, actual): if expected != actual: print 'Expected: %s' % (expected,) print ' Actual: %s' % (actual,) raise AssertionError def TestExitCodeAndOutput(command): """Runs the given command and verifies its exit code and output.""" # Verifies that 'command' exits with code 1. p = gtest_test_utils.Subprocess(command) Assert(p.exited) AssertEq(1, p.exit_code) Assert('InitGoogleTest' in p.output) class GTestUninitializedTest(gtest_test_utils.TestCase): def testExitCodeAndOutput(self): TestExitCodeAndOutput(COMMAND) if __name__ == '__main__': gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_uninitialized_test_.cc000066400000000000000000000036011346026536000250350ustar00rootroot00000000000000// Copyright 2008, Google Inc. 
// All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) #include "gtest/gtest.h" TEST(DummyTest, Dummy) { // This test doesn't verify anything. We just need it to create a // realistic stage for testing the behavior of Google Test when // RUN_ALL_TESTS() is called without testing::InitGoogleTest() being // called first. } int main() { return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_unittest.cc000066400000000000000000007231111346026536000226530ustar00rootroot00000000000000// Copyright 2005, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
// // Author: wan@google.com (Zhanyong Wan) // // Tests for Google Test itself. This verifies that the basic constructs of // Google Test work. #include "gtest/gtest.h" // Verifies that the command line flag variables can be accessed // in code once has been #included. // Do not move it after other #includes. TEST(CommandLineFlagsTest, CanBeAccessedInCodeOnceGTestHIsIncluded) { bool dummy = testing::GTEST_FLAG(also_run_disabled_tests) || testing::GTEST_FLAG(break_on_failure) || testing::GTEST_FLAG(catch_exceptions) || testing::GTEST_FLAG(color) != "unknown" || testing::GTEST_FLAG(filter) != "unknown" || testing::GTEST_FLAG(list_tests) || testing::GTEST_FLAG(output) != "unknown" || testing::GTEST_FLAG(print_time) || testing::GTEST_FLAG(random_seed) || testing::GTEST_FLAG(repeat) > 0 || testing::GTEST_FLAG(show_internal_stack_frames) || testing::GTEST_FLAG(shuffle) || testing::GTEST_FLAG(stack_trace_depth) > 0 || testing::GTEST_FLAG(stream_result_to) != "unknown" || testing::GTEST_FLAG(throw_on_failure); EXPECT_TRUE(dummy || !dummy); // Suppresses warning that dummy is unused. } #include // For INT_MAX. #include #include #include #include #include #include #include "gtest/gtest-spi.h" // Indicates that this translation unit is part of Google Test's // implementation. It must come before gtest-internal-inl.h is // included, or there will be a compiler error. This trick is to // prevent a user from accidentally including gtest-internal-inl.h in // his code. #define GTEST_IMPLEMENTATION_ 1 #include "src/gtest-internal-inl.h" #undef GTEST_IMPLEMENTATION_ namespace testing { namespace internal { #if GTEST_CAN_STREAM_RESULTS_ class StreamingListenerTest : public Test { public: class FakeSocketWriter : public StreamingListener::AbstractSocketWriter { public: // Sends a string to the socket. virtual void Send(const string& message) { output_ += message; } string output_; }; StreamingListenerTest() : fake_sock_writer_(new FakeSocketWriter), streamer_(fake_sock_writer_), test_info_obj_("FooTest", "Bar", NULL, NULL, 0, NULL) {} protected: string* output() { return &(fake_sock_writer_->output_); } FakeSocketWriter* const fake_sock_writer_; StreamingListener streamer_; UnitTest unit_test_; TestInfo test_info_obj_; // The name test_info_ was taken by testing::Test. 
}; TEST_F(StreamingListenerTest, OnTestProgramEnd) { *output() = ""; streamer_.OnTestProgramEnd(unit_test_); EXPECT_EQ("event=TestProgramEnd&passed=1\n", *output()); } TEST_F(StreamingListenerTest, OnTestIterationEnd) { *output() = ""; streamer_.OnTestIterationEnd(unit_test_, 42); EXPECT_EQ("event=TestIterationEnd&passed=1&elapsed_time=0ms\n", *output()); } TEST_F(StreamingListenerTest, OnTestCaseStart) { *output() = ""; streamer_.OnTestCaseStart(TestCase("FooTest", "Bar", NULL, NULL)); EXPECT_EQ("event=TestCaseStart&name=FooTest\n", *output()); } TEST_F(StreamingListenerTest, OnTestCaseEnd) { *output() = ""; streamer_.OnTestCaseEnd(TestCase("FooTest", "Bar", NULL, NULL)); EXPECT_EQ("event=TestCaseEnd&passed=1&elapsed_time=0ms\n", *output()); } TEST_F(StreamingListenerTest, OnTestStart) { *output() = ""; streamer_.OnTestStart(test_info_obj_); EXPECT_EQ("event=TestStart&name=Bar\n", *output()); } TEST_F(StreamingListenerTest, OnTestEnd) { *output() = ""; streamer_.OnTestEnd(test_info_obj_); EXPECT_EQ("event=TestEnd&passed=1&elapsed_time=0ms\n", *output()); } TEST_F(StreamingListenerTest, OnTestPartResult) { *output() = ""; streamer_.OnTestPartResult(TestPartResult( TestPartResult::kFatalFailure, "foo.cc", 42, "failed=\n&%")); // Meta characters in the failure message should be properly escaped. EXPECT_EQ( "event=TestPartResult&file=foo.cc&line=42&message=failed%3D%0A%26%25\n", *output()); } #endif // GTEST_CAN_STREAM_RESULTS_ // Provides access to otherwise private parts of the TestEventListeners class // that are needed to test it. class TestEventListenersAccessor { public: static TestEventListener* GetRepeater(TestEventListeners* listeners) { return listeners->repeater(); } static void SetDefaultResultPrinter(TestEventListeners* listeners, TestEventListener* listener) { listeners->SetDefaultResultPrinter(listener); } static void SetDefaultXmlGenerator(TestEventListeners* listeners, TestEventListener* listener) { listeners->SetDefaultXmlGenerator(listener); } static bool EventForwardingEnabled(const TestEventListeners& listeners) { return listeners.EventForwardingEnabled(); } static void SuppressEventForwarding(TestEventListeners* listeners) { listeners->SuppressEventForwarding(); } }; class UnitTestRecordPropertyTestHelper : public Test { protected: UnitTestRecordPropertyTestHelper() {} // Forwards to UnitTest::RecordProperty() to bypass access controls. 
void UnitTestRecordProperty(const char* key, const std::string& value) { unit_test_.RecordProperty(key, value); } UnitTest unit_test_; }; } // namespace internal } // namespace testing using testing::AssertionFailure; using testing::AssertionResult; using testing::AssertionSuccess; using testing::DoubleLE; using testing::EmptyTestEventListener; using testing::Environment; using testing::FloatLE; using testing::GTEST_FLAG(also_run_disabled_tests); using testing::GTEST_FLAG(break_on_failure); using testing::GTEST_FLAG(catch_exceptions); using testing::GTEST_FLAG(color); using testing::GTEST_FLAG(death_test_use_fork); using testing::GTEST_FLAG(filter); using testing::GTEST_FLAG(list_tests); using testing::GTEST_FLAG(output); using testing::GTEST_FLAG(print_time); using testing::GTEST_FLAG(random_seed); using testing::GTEST_FLAG(repeat); using testing::GTEST_FLAG(show_internal_stack_frames); using testing::GTEST_FLAG(shuffle); using testing::GTEST_FLAG(stack_trace_depth); using testing::GTEST_FLAG(stream_result_to); using testing::GTEST_FLAG(throw_on_failure); using testing::IsNotSubstring; using testing::IsSubstring; using testing::Message; using testing::ScopedFakeTestPartResultReporter; using testing::StaticAssertTypeEq; using testing::Test; using testing::TestCase; using testing::TestEventListeners; using testing::TestInfo; using testing::TestPartResult; using testing::TestPartResultArray; using testing::TestProperty; using testing::TestResult; using testing::TimeInMillis; using testing::UnitTest; using testing::kMaxStackTraceDepth; using testing::internal::AddReference; using testing::internal::AlwaysFalse; using testing::internal::AlwaysTrue; using testing::internal::AppendUserMessage; using testing::internal::ArrayAwareFind; using testing::internal::ArrayEq; using testing::internal::CodePointToUtf8; using testing::internal::CompileAssertTypesEqual; using testing::internal::CopyArray; using testing::internal::CountIf; using testing::internal::EqFailure; using testing::internal::FloatingPoint; using testing::internal::ForEach; using testing::internal::FormatEpochTimeInMillisAsIso8601; using testing::internal::FormatTimeInMillisAsSeconds; using testing::internal::GTestFlagSaver; using testing::internal::GetCurrentOsStackTraceExceptTop; using testing::internal::GetElementOr; using testing::internal::GetNextRandomSeed; using testing::internal::GetRandomSeedFromFlag; using testing::internal::GetTestTypeId; using testing::internal::GetTimeInMillis; using testing::internal::GetTypeId; using testing::internal::GetUnitTestImpl; using testing::internal::ImplicitlyConvertible; using testing::internal::Int32; using testing::internal::Int32FromEnvOrDie; using testing::internal::IsAProtocolMessage; using testing::internal::IsContainer; using testing::internal::IsContainerTest; using testing::internal::IsNotContainer; using testing::internal::NativeArray; using testing::internal::ParseInt32Flag; using testing::internal::RemoveConst; using testing::internal::RemoveReference; using testing::internal::ShouldRunTestOnShard; using testing::internal::ShouldShard; using testing::internal::ShouldUseColor; using testing::internal::Shuffle; using testing::internal::ShuffleRange; using testing::internal::SkipPrefix; using testing::internal::StreamableToString; using testing::internal::String; using testing::internal::TestEventListenersAccessor; using testing::internal::TestResultAccessor; using testing::internal::UInt32; using testing::internal::WideStringToUtf8; using testing::internal::kCopy; using 
testing::internal::kMaxRandomSeed; using testing::internal::kReference; using testing::internal::kTestTypeIdInGoogleTest; using testing::internal::scoped_ptr; #if GTEST_HAS_STREAM_REDIRECTION using testing::internal::CaptureStdout; using testing::internal::GetCapturedStdout; #endif #if GTEST_IS_THREADSAFE using testing::internal::ThreadWithParam; #endif class TestingVector : public std::vector { }; ::std::ostream& operator<<(::std::ostream& os, const TestingVector& vector) { os << "{ "; for (size_t i = 0; i < vector.size(); i++) { os << vector[i] << " "; } os << "}"; return os; } // This line tests that we can define tests in an unnamed namespace. namespace { TEST(GetRandomSeedFromFlagTest, HandlesZero) { const int seed = GetRandomSeedFromFlag(0); EXPECT_LE(1, seed); EXPECT_LE(seed, static_cast(kMaxRandomSeed)); } TEST(GetRandomSeedFromFlagTest, PreservesValidSeed) { EXPECT_EQ(1, GetRandomSeedFromFlag(1)); EXPECT_EQ(2, GetRandomSeedFromFlag(2)); EXPECT_EQ(kMaxRandomSeed - 1, GetRandomSeedFromFlag(kMaxRandomSeed - 1)); EXPECT_EQ(static_cast(kMaxRandomSeed), GetRandomSeedFromFlag(kMaxRandomSeed)); } TEST(GetRandomSeedFromFlagTest, NormalizesInvalidSeed) { const int seed1 = GetRandomSeedFromFlag(-1); EXPECT_LE(1, seed1); EXPECT_LE(seed1, static_cast(kMaxRandomSeed)); const int seed2 = GetRandomSeedFromFlag(kMaxRandomSeed + 1); EXPECT_LE(1, seed2); EXPECT_LE(seed2, static_cast(kMaxRandomSeed)); } TEST(GetNextRandomSeedTest, WorksForValidInput) { EXPECT_EQ(2, GetNextRandomSeed(1)); EXPECT_EQ(3, GetNextRandomSeed(2)); EXPECT_EQ(static_cast(kMaxRandomSeed), GetNextRandomSeed(kMaxRandomSeed - 1)); EXPECT_EQ(1, GetNextRandomSeed(kMaxRandomSeed)); // We deliberately don't test GetNextRandomSeed() with invalid // inputs, as that requires death tests, which are expensive. This // is fine as GetNextRandomSeed() is internal and has a // straightforward definition. } static void ClearCurrentTestPartResults() { TestResultAccessor::ClearTestPartResults( GetUnitTestImpl()->current_test_result()); } // Tests GetTypeId. TEST(GetTypeIdTest, ReturnsSameValueForSameType) { EXPECT_EQ(GetTypeId(), GetTypeId()); EXPECT_EQ(GetTypeId(), GetTypeId()); } class SubClassOfTest : public Test {}; class AnotherSubClassOfTest : public Test {}; TEST(GetTypeIdTest, ReturnsDifferentValuesForDifferentTypes) { EXPECT_NE(GetTypeId(), GetTypeId()); EXPECT_NE(GetTypeId(), GetTypeId()); EXPECT_NE(GetTypeId(), GetTestTypeId()); EXPECT_NE(GetTypeId(), GetTestTypeId()); EXPECT_NE(GetTypeId(), GetTestTypeId()); EXPECT_NE(GetTypeId(), GetTypeId()); } // Verifies that GetTestTypeId() returns the same value, no matter it // is called from inside Google Test or outside of it. TEST(GetTestTypeIdTest, ReturnsTheSameValueInsideOrOutsideOfGoogleTest) { EXPECT_EQ(kTestTypeIdInGoogleTest, GetTestTypeId()); } // Tests FormatTimeInMillisAsSeconds(). 
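// A minimal stand-alone sketch (not the actual gtest implementation) of the
// behaviour the FormatTimeInMillisAsSeconds tests below pin down:
// milliseconds are divided by 1000 and printed with trailing zeros dropped.
// Assumes <cstdio> and <string> are available through the includes above.
static std::string FormatMillisAsSecondsSketch(long long ms) {
  char buf[32];
  // "%g" trims trailing zeros, so 3 -> "0.003", 1200 -> "1.2", 3000 -> "3",
  // matching the expectations in the tests that follow.
  snprintf(buf, sizeof(buf), "%g", static_cast<double>(ms) / 1000.0);
  return std::string(buf);
}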
TEST(FormatTimeInMillisAsSecondsTest, FormatsZero) { EXPECT_EQ("0", FormatTimeInMillisAsSeconds(0)); } TEST(FormatTimeInMillisAsSecondsTest, FormatsPositiveNumber) { EXPECT_EQ("0.003", FormatTimeInMillisAsSeconds(3)); EXPECT_EQ("0.01", FormatTimeInMillisAsSeconds(10)); EXPECT_EQ("0.2", FormatTimeInMillisAsSeconds(200)); EXPECT_EQ("1.2", FormatTimeInMillisAsSeconds(1200)); EXPECT_EQ("3", FormatTimeInMillisAsSeconds(3000)); } TEST(FormatTimeInMillisAsSecondsTest, FormatsNegativeNumber) { EXPECT_EQ("-0.003", FormatTimeInMillisAsSeconds(-3)); EXPECT_EQ("-0.01", FormatTimeInMillisAsSeconds(-10)); EXPECT_EQ("-0.2", FormatTimeInMillisAsSeconds(-200)); EXPECT_EQ("-1.2", FormatTimeInMillisAsSeconds(-1200)); EXPECT_EQ("-3", FormatTimeInMillisAsSeconds(-3000)); } // Tests FormatEpochTimeInMillisAsIso8601(). The correctness of conversion // for particular dates below was verified in Python using // datetime.datetime.fromutctimestamp(/1000). // FormatEpochTimeInMillisAsIso8601 depends on the current timezone, so we // have to set up a particular timezone to obtain predictable results. class FormatEpochTimeInMillisAsIso8601Test : public Test { public: // On Cygwin, GCC doesn't allow unqualified integer literals to exceed // 32 bits, even when 64-bit integer types are available. We have to // force the constants to have a 64-bit type here. static const TimeInMillis kMillisPerSec = 1000; private: virtual void SetUp() { saved_tz_ = NULL; #if _MSC_VER # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4996) // Temporarily disables warning 4996 // (function or variable may be unsafe // for getenv, function is deprecated for // strdup). if (getenv("TZ")) saved_tz_ = strdup(getenv("TZ")); # pragma warning(pop) // Restores the warning state again. #else if (getenv("TZ")) saved_tz_ = strdup(getenv("TZ")); #endif // Set up the time zone for FormatEpochTimeInMillisAsIso8601 to use. We // cannot use the local time zone because the function's output depends // on the time zone. SetTimeZone("UTC+00"); } virtual void TearDown() { SetTimeZone(saved_tz_); free(const_cast(saved_tz_)); saved_tz_ = NULL; } static void SetTimeZone(const char* time_zone) { // tzset() distinguishes between the TZ variable being present and empty // and not being present, so we have to consider the case of time_zone // being NULL. #if _MSC_VER // ...Unless it's MSVC, whose standard library's _putenv doesn't // distinguish between an empty and a missing variable. const std::string env_var = std::string("TZ=") + (time_zone ? time_zone : ""); _putenv(env_var.c_str()); # pragma warning(push) // Saves the current warning state. # pragma warning(disable:4996) // Temporarily disables warning 4996 // (function is deprecated). tzset(); # pragma warning(pop) // Restores the warning state again. 
#else if (time_zone) { setenv(("TZ"), time_zone, 1); } else { unsetenv("TZ"); } tzset(); #endif } const char* saved_tz_; }; const TimeInMillis FormatEpochTimeInMillisAsIso8601Test::kMillisPerSec; TEST_F(FormatEpochTimeInMillisAsIso8601Test, PrintsTwoDigitSegments) { EXPECT_EQ("2011-10-31T18:52:42", FormatEpochTimeInMillisAsIso8601(1320087162 * kMillisPerSec)); } TEST_F(FormatEpochTimeInMillisAsIso8601Test, MillisecondsDoNotAffectResult) { EXPECT_EQ( "2011-10-31T18:52:42", FormatEpochTimeInMillisAsIso8601(1320087162 * kMillisPerSec + 234)); } TEST_F(FormatEpochTimeInMillisAsIso8601Test, PrintsLeadingZeroes) { EXPECT_EQ("2011-09-03T05:07:02", FormatEpochTimeInMillisAsIso8601(1315026422 * kMillisPerSec)); } TEST_F(FormatEpochTimeInMillisAsIso8601Test, Prints24HourTime) { EXPECT_EQ("2011-09-28T17:08:22", FormatEpochTimeInMillisAsIso8601(1317229702 * kMillisPerSec)); } TEST_F(FormatEpochTimeInMillisAsIso8601Test, PrintsEpochStart) { EXPECT_EQ("1970-01-01T00:00:00", FormatEpochTimeInMillisAsIso8601(0)); } #if GTEST_CAN_COMPARE_NULL # ifdef __BORLANDC__ // Silences warnings: "Condition is always true", "Unreachable code" # pragma option push -w-ccc -w-rch # endif // Tests that GTEST_IS_NULL_LITERAL_(x) is true when x is a null // pointer literal. TEST(NullLiteralTest, IsTrueForNullLiterals) { EXPECT_TRUE(GTEST_IS_NULL_LITERAL_(NULL)); EXPECT_TRUE(GTEST_IS_NULL_LITERAL_(0)); EXPECT_TRUE(GTEST_IS_NULL_LITERAL_(0U)); EXPECT_TRUE(GTEST_IS_NULL_LITERAL_(0L)); } // Tests that GTEST_IS_NULL_LITERAL_(x) is false when x is not a null // pointer literal. TEST(NullLiteralTest, IsFalseForNonNullLiterals) { EXPECT_FALSE(GTEST_IS_NULL_LITERAL_(1)); EXPECT_FALSE(GTEST_IS_NULL_LITERAL_(0.0)); EXPECT_FALSE(GTEST_IS_NULL_LITERAL_('a')); EXPECT_FALSE(GTEST_IS_NULL_LITERAL_(static_cast(NULL))); } # ifdef __BORLANDC__ // Restores warnings after previous "#pragma option push" suppressed them. # pragma option pop # endif #endif // GTEST_CAN_COMPARE_NULL // // Tests CodePointToUtf8(). // Tests that the NUL character L'\0' is encoded correctly. TEST(CodePointToUtf8Test, CanEncodeNul) { EXPECT_EQ("", CodePointToUtf8(L'\0')); } // Tests that ASCII characters are encoded correctly. TEST(CodePointToUtf8Test, CanEncodeAscii) { EXPECT_EQ("a", CodePointToUtf8(L'a')); EXPECT_EQ("Z", CodePointToUtf8(L'Z')); EXPECT_EQ("&", CodePointToUtf8(L'&')); EXPECT_EQ("\x7F", CodePointToUtf8(L'\x7F')); } // Tests that Unicode code-points that have 8 to 11 bits are encoded // as 110xxxxx 10xxxxxx. TEST(CodePointToUtf8Test, CanEncode8To11Bits) { // 000 1101 0011 => 110-00011 10-010011 EXPECT_EQ("\xC3\x93", CodePointToUtf8(L'\xD3')); // 101 0111 0110 => 110-10101 10-110110 // Some compilers (e.g., GCC on MinGW) cannot handle non-ASCII codepoints // in wide strings and wide chars. In order to accomodate them, we have to // introduce such character constants as integers. EXPECT_EQ("\xD5\xB6", CodePointToUtf8(static_cast(0x576))); } // Tests that Unicode code-points that have 12 to 16 bits are encoded // as 1110xxxx 10xxxxxx 10xxxxxx. TEST(CodePointToUtf8Test, CanEncode12To16Bits) { // 0000 1000 1101 0011 => 1110-0000 10-100011 10-010011 EXPECT_EQ("\xE0\xA3\x93", CodePointToUtf8(static_cast(0x8D3))); // 1100 0111 0100 1101 => 1110-1100 10-011101 10-001101 EXPECT_EQ("\xEC\x9D\x8D", CodePointToUtf8(static_cast(0xC74D))); } #if !GTEST_WIDE_STRING_USES_UTF16_ // Tests in this group require a wchar_t to hold > 16 bits, and thus // are skipped on Windows, Cygwin, and Symbian, where a wchar_t is // 16-bit wide. This code may not compile on those systems. 
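// A minimal sketch (not the actual CodePointToUtf8 implementation) of the
// 2-byte case documented above: code points needing 8 to 11 bits are packed
// as 110xxxxx 10xxxxxx.  Assumes <string> is available via the includes above.
static std::string EncodeTwoByteUtf8Sketch(unsigned int code_point) {
  // Callers must ensure 0x80 <= code_point <= 0x7FF.
  std::string out;
  out += static_cast<char>(0xC0 | (code_point >> 6));    // 110xxxxx
  out += static_cast<char>(0x80 | (code_point & 0x3F));  // 10xxxxxx
  return out;
}
// For example, EncodeTwoByteUtf8Sketch(0xD3) yields "\xC3\x93" and
// EncodeTwoByteUtf8Sketch(0x576) yields "\xD5\xB6", matching the
// CanEncode8To11Bits expectations above.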
// Tests that Unicode code-points that have 17 to 21 bits are encoded // as 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx. TEST(CodePointToUtf8Test, CanEncode17To21Bits) { // 0 0001 0000 1000 1101 0011 => 11110-000 10-010000 10-100011 10-010011 EXPECT_EQ("\xF0\x90\xA3\x93", CodePointToUtf8(L'\x108D3')); // 0 0001 0000 0100 0000 0000 => 11110-000 10-010000 10-010000 10-000000 EXPECT_EQ("\xF0\x90\x90\x80", CodePointToUtf8(L'\x10400')); // 1 0000 1000 0110 0011 0100 => 11110-100 10-001000 10-011000 10-110100 EXPECT_EQ("\xF4\x88\x98\xB4", CodePointToUtf8(L'\x108634')); } // Tests that encoding an invalid code-point generates the expected result. TEST(CodePointToUtf8Test, CanEncodeInvalidCodePoint) { EXPECT_EQ("(Invalid Unicode 0x1234ABCD)", CodePointToUtf8(L'\x1234ABCD')); } #endif // !GTEST_WIDE_STRING_USES_UTF16_ // Tests WideStringToUtf8(). // Tests that the NUL character L'\0' is encoded correctly. TEST(WideStringToUtf8Test, CanEncodeNul) { EXPECT_STREQ("", WideStringToUtf8(L"", 0).c_str()); EXPECT_STREQ("", WideStringToUtf8(L"", -1).c_str()); } // Tests that ASCII strings are encoded correctly. TEST(WideStringToUtf8Test, CanEncodeAscii) { EXPECT_STREQ("a", WideStringToUtf8(L"a", 1).c_str()); EXPECT_STREQ("ab", WideStringToUtf8(L"ab", 2).c_str()); EXPECT_STREQ("a", WideStringToUtf8(L"a", -1).c_str()); EXPECT_STREQ("ab", WideStringToUtf8(L"ab", -1).c_str()); } // Tests that Unicode code-points that have 8 to 11 bits are encoded // as 110xxxxx 10xxxxxx. TEST(WideStringToUtf8Test, CanEncode8To11Bits) { // 000 1101 0011 => 110-00011 10-010011 EXPECT_STREQ("\xC3\x93", WideStringToUtf8(L"\xD3", 1).c_str()); EXPECT_STREQ("\xC3\x93", WideStringToUtf8(L"\xD3", -1).c_str()); // 101 0111 0110 => 110-10101 10-110110 const wchar_t s[] = { 0x576, '\0' }; EXPECT_STREQ("\xD5\xB6", WideStringToUtf8(s, 1).c_str()); EXPECT_STREQ("\xD5\xB6", WideStringToUtf8(s, -1).c_str()); } // Tests that Unicode code-points that have 12 to 16 bits are encoded // as 1110xxxx 10xxxxxx 10xxxxxx. TEST(WideStringToUtf8Test, CanEncode12To16Bits) { // 0000 1000 1101 0011 => 1110-0000 10-100011 10-010011 const wchar_t s1[] = { 0x8D3, '\0' }; EXPECT_STREQ("\xE0\xA3\x93", WideStringToUtf8(s1, 1).c_str()); EXPECT_STREQ("\xE0\xA3\x93", WideStringToUtf8(s1, -1).c_str()); // 1100 0111 0100 1101 => 1110-1100 10-011101 10-001101 const wchar_t s2[] = { 0xC74D, '\0' }; EXPECT_STREQ("\xEC\x9D\x8D", WideStringToUtf8(s2, 1).c_str()); EXPECT_STREQ("\xEC\x9D\x8D", WideStringToUtf8(s2, -1).c_str()); } // Tests that the conversion stops when the function encounters \0 character. TEST(WideStringToUtf8Test, StopsOnNulCharacter) { EXPECT_STREQ("ABC", WideStringToUtf8(L"ABC\0XYZ", 100).c_str()); } // Tests that the conversion stops when the function reaches the limit // specified by the 'length' parameter. TEST(WideStringToUtf8Test, StopsWhenLengthLimitReached) { EXPECT_STREQ("ABC", WideStringToUtf8(L"ABCDEF", 3).c_str()); } #if !GTEST_WIDE_STRING_USES_UTF16_ // Tests that Unicode code-points that have 17 to 21 bits are encoded // as 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx. This code may not compile // on the systems using UTF-16 encoding. 
TEST(WideStringToUtf8Test, CanEncode17To21Bits) { // 0 0001 0000 1000 1101 0011 => 11110-000 10-010000 10-100011 10-010011 EXPECT_STREQ("\xF0\x90\xA3\x93", WideStringToUtf8(L"\x108D3", 1).c_str()); EXPECT_STREQ("\xF0\x90\xA3\x93", WideStringToUtf8(L"\x108D3", -1).c_str()); // 1 0000 1000 0110 0011 0100 => 11110-100 10-001000 10-011000 10-110100 EXPECT_STREQ("\xF4\x88\x98\xB4", WideStringToUtf8(L"\x108634", 1).c_str()); EXPECT_STREQ("\xF4\x88\x98\xB4", WideStringToUtf8(L"\x108634", -1).c_str()); } // Tests that encoding an invalid code-point generates the expected result. TEST(WideStringToUtf8Test, CanEncodeInvalidCodePoint) { EXPECT_STREQ("(Invalid Unicode 0xABCDFF)", WideStringToUtf8(L"\xABCDFF", -1).c_str()); } #else // !GTEST_WIDE_STRING_USES_UTF16_ // Tests that surrogate pairs are encoded correctly on the systems using // UTF-16 encoding in the wide strings. TEST(WideStringToUtf8Test, CanEncodeValidUtf16SUrrogatePairs) { const wchar_t s[] = { 0xD801, 0xDC00, '\0' }; EXPECT_STREQ("\xF0\x90\x90\x80", WideStringToUtf8(s, -1).c_str()); } // Tests that encoding an invalid UTF-16 surrogate pair // generates the expected result. TEST(WideStringToUtf8Test, CanEncodeInvalidUtf16SurrogatePair) { // Leading surrogate is at the end of the string. const wchar_t s1[] = { 0xD800, '\0' }; EXPECT_STREQ("\xED\xA0\x80", WideStringToUtf8(s1, -1).c_str()); // Leading surrogate is not followed by the trailing surrogate. const wchar_t s2[] = { 0xD800, 'M', '\0' }; EXPECT_STREQ("\xED\xA0\x80M", WideStringToUtf8(s2, -1).c_str()); // Trailing surrogate appearas without a leading surrogate. const wchar_t s3[] = { 0xDC00, 'P', 'Q', 'R', '\0' }; EXPECT_STREQ("\xED\xB0\x80PQR", WideStringToUtf8(s3, -1).c_str()); } #endif // !GTEST_WIDE_STRING_USES_UTF16_ // Tests that codepoint concatenation works correctly. #if !GTEST_WIDE_STRING_USES_UTF16_ TEST(WideStringToUtf8Test, ConcatenatesCodepointsCorrectly) { const wchar_t s[] = { 0x108634, 0xC74D, '\n', 0x576, 0x8D3, 0x108634, '\0'}; EXPECT_STREQ( "\xF4\x88\x98\xB4" "\xEC\x9D\x8D" "\n" "\xD5\xB6" "\xE0\xA3\x93" "\xF4\x88\x98\xB4", WideStringToUtf8(s, -1).c_str()); } #else TEST(WideStringToUtf8Test, ConcatenatesCodepointsCorrectly) { const wchar_t s[] = { 0xC74D, '\n', 0x576, 0x8D3, '\0'}; EXPECT_STREQ( "\xEC\x9D\x8D" "\n" "\xD5\xB6" "\xE0\xA3\x93", WideStringToUtf8(s, -1).c_str()); } #endif // !GTEST_WIDE_STRING_USES_UTF16_ // Tests the Random class. 
TEST(RandomDeathTest, GeneratesCrashesOnInvalidRange) { testing::internal::Random random(42); EXPECT_DEATH_IF_SUPPORTED( random.Generate(0), "Cannot generate a number in the range \\[0, 0\\)"); EXPECT_DEATH_IF_SUPPORTED( random.Generate(testing::internal::Random::kMaxRange + 1), "Generation of a number in \\[0, 2147483649\\) was requested, " "but this can only generate numbers in \\[0, 2147483648\\)"); } TEST(RandomTest, GeneratesNumbersWithinRange) { const UInt32 kRange = 10000; testing::internal::Random random(12345); for (int i = 0; i < 10; i++) { EXPECT_LT(random.Generate(kRange), kRange) << " for iteration " << i; } testing::internal::Random random2(testing::internal::Random::kMaxRange); for (int i = 0; i < 10; i++) { EXPECT_LT(random2.Generate(kRange), kRange) << " for iteration " << i; } } TEST(RandomTest, RepeatsWhenReseeded) { const int kSeed = 123; const int kArraySize = 10; const UInt32 kRange = 10000; UInt32 values[kArraySize]; testing::internal::Random random(kSeed); for (int i = 0; i < kArraySize; i++) { values[i] = random.Generate(kRange); } random.Reseed(kSeed); for (int i = 0; i < kArraySize; i++) { EXPECT_EQ(values[i], random.Generate(kRange)) << " for iteration " << i; } } // Tests STL container utilities. // Tests CountIf(). static bool IsPositive(int n) { return n > 0; } TEST(ContainerUtilityTest, CountIf) { std::vector v; EXPECT_EQ(0, CountIf(v, IsPositive)); // Works for an empty container. v.push_back(-1); v.push_back(0); EXPECT_EQ(0, CountIf(v, IsPositive)); // Works when no value satisfies. v.push_back(2); v.push_back(-10); v.push_back(10); EXPECT_EQ(2, CountIf(v, IsPositive)); } // Tests ForEach(). static int g_sum = 0; static void Accumulate(int n) { g_sum += n; } TEST(ContainerUtilityTest, ForEach) { std::vector v; g_sum = 0; ForEach(v, Accumulate); EXPECT_EQ(0, g_sum); // Works for an empty container; g_sum = 0; v.push_back(1); ForEach(v, Accumulate); EXPECT_EQ(1, g_sum); // Works for a container with one element. g_sum = 0; v.push_back(20); v.push_back(300); ForEach(v, Accumulate); EXPECT_EQ(321, g_sum); } // Tests GetElementOr(). 
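// GetElementOr(), exercised by the test that follows, returns the i-th
// element of a container when the index is in range and a caller-supplied
// default otherwise.  A simplified stand-alone sketch of that contract
// (the real testing::internal::GetElementOr may differ in detail):
template <typename E>
E GetElementOrSketch(const std::vector<E>& v, int i, E default_value) {
  return (i < 0 || i >= static_cast<int>(v.size())) ? default_value : v[i];
}
// e.g. GetElementOrSketch(std::vector<char>(), 0, 'x') == 'x', and with
// {'a', 'b'} index 0 gives 'a' while index 2 falls back to 'x', as the
// test below asserts.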
TEST(ContainerUtilityTest, GetElementOr) { std::vector a; EXPECT_EQ('x', GetElementOr(a, 0, 'x')); a.push_back('a'); a.push_back('b'); EXPECT_EQ('a', GetElementOr(a, 0, 'x')); EXPECT_EQ('b', GetElementOr(a, 1, 'x')); EXPECT_EQ('x', GetElementOr(a, -2, 'x')); EXPECT_EQ('x', GetElementOr(a, 2, 'x')); } TEST(ContainerUtilityDeathTest, ShuffleRange) { std::vector a; a.push_back(0); a.push_back(1); a.push_back(2); testing::internal::Random random(1); EXPECT_DEATH_IF_SUPPORTED( ShuffleRange(&random, -1, 1, &a), "Invalid shuffle range start -1: must be in range \\[0, 3\\]"); EXPECT_DEATH_IF_SUPPORTED( ShuffleRange(&random, 4, 4, &a), "Invalid shuffle range start 4: must be in range \\[0, 3\\]"); EXPECT_DEATH_IF_SUPPORTED( ShuffleRange(&random, 3, 2, &a), "Invalid shuffle range finish 2: must be in range \\[3, 3\\]"); EXPECT_DEATH_IF_SUPPORTED( ShuffleRange(&random, 3, 4, &a), "Invalid shuffle range finish 4: must be in range \\[3, 3\\]"); } class VectorShuffleTest : public Test { protected: static const int kVectorSize = 20; VectorShuffleTest() : random_(1) { for (int i = 0; i < kVectorSize; i++) { vector_.push_back(i); } } static bool VectorIsCorrupt(const TestingVector& vector) { if (kVectorSize != static_cast(vector.size())) { return true; } bool found_in_vector[kVectorSize] = { false }; for (size_t i = 0; i < vector.size(); i++) { const int e = vector[i]; if (e < 0 || e >= kVectorSize || found_in_vector[e]) { return true; } found_in_vector[e] = true; } // Vector size is correct, elements' range is correct, no // duplicate elements. Therefore no corruption has occurred. return false; } static bool VectorIsNotCorrupt(const TestingVector& vector) { return !VectorIsCorrupt(vector); } static bool RangeIsShuffled(const TestingVector& vector, int begin, int end) { for (int i = begin; i < end; i++) { if (i != vector[i]) { return true; } } return false; } static bool RangeIsUnshuffled( const TestingVector& vector, int begin, int end) { return !RangeIsShuffled(vector, begin, end); } static bool VectorIsShuffled(const TestingVector& vector) { return RangeIsShuffled(vector, 0, static_cast(vector.size())); } static bool VectorIsUnshuffled(const TestingVector& vector) { return !VectorIsShuffled(vector); } testing::internal::Random random_; TestingVector vector_; }; // class VectorShuffleTest const int VectorShuffleTest::kVectorSize; TEST_F(VectorShuffleTest, HandlesEmptyRange) { // Tests an empty range at the beginning... ShuffleRange(&random_, 0, 0, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); // ...in the middle... ShuffleRange(&random_, kVectorSize/2, kVectorSize/2, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); // ...at the end... ShuffleRange(&random_, kVectorSize - 1, kVectorSize - 1, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); // ...and past the end. ShuffleRange(&random_, kVectorSize, kVectorSize, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); } TEST_F(VectorShuffleTest, HandlesRangeOfSizeOne) { // Tests a size one range at the beginning... ShuffleRange(&random_, 0, 1, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); // ...in the middle... ShuffleRange(&random_, kVectorSize/2, kVectorSize/2 + 1, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); // ...and at the end. 
ShuffleRange(&random_, kVectorSize - 1, kVectorSize, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsUnshuffled, vector_); } // Because we use our own random number generator and a fixed seed, // we can guarantee that the following "random" tests will succeed. TEST_F(VectorShuffleTest, ShufflesEntireVector) { Shuffle(&random_, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); EXPECT_FALSE(VectorIsUnshuffled(vector_)) << vector_; // Tests the first and last elements in particular to ensure that // there are no off-by-one problems in our shuffle algorithm. EXPECT_NE(0, vector_[0]); EXPECT_NE(kVectorSize - 1, vector_[kVectorSize - 1]); } TEST_F(VectorShuffleTest, ShufflesStartOfVector) { const int kRangeSize = kVectorSize/2; ShuffleRange(&random_, 0, kRangeSize, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); EXPECT_PRED3(RangeIsShuffled, vector_, 0, kRangeSize); EXPECT_PRED3(RangeIsUnshuffled, vector_, kRangeSize, kVectorSize); } TEST_F(VectorShuffleTest, ShufflesEndOfVector) { const int kRangeSize = kVectorSize / 2; ShuffleRange(&random_, kRangeSize, kVectorSize, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); EXPECT_PRED3(RangeIsUnshuffled, vector_, 0, kRangeSize); EXPECT_PRED3(RangeIsShuffled, vector_, kRangeSize, kVectorSize); } TEST_F(VectorShuffleTest, ShufflesMiddleOfVector) { int kRangeSize = kVectorSize/3; ShuffleRange(&random_, kRangeSize, 2*kRangeSize, &vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector_); EXPECT_PRED3(RangeIsUnshuffled, vector_, 0, kRangeSize); EXPECT_PRED3(RangeIsShuffled, vector_, kRangeSize, 2*kRangeSize); EXPECT_PRED3(RangeIsUnshuffled, vector_, 2*kRangeSize, kVectorSize); } TEST_F(VectorShuffleTest, ShufflesRepeatably) { TestingVector vector2; for (int i = 0; i < kVectorSize; i++) { vector2.push_back(i); } random_.Reseed(1234); Shuffle(&random_, &vector_); random_.Reseed(1234); Shuffle(&random_, &vector2); ASSERT_PRED1(VectorIsNotCorrupt, vector_); ASSERT_PRED1(VectorIsNotCorrupt, vector2); for (int i = 0; i < kVectorSize; i++) { EXPECT_EQ(vector_[i], vector2[i]) << " where i is " << i; } } // Tests the size of the AssertHelper class. TEST(AssertHelperTest, AssertHelperIsSmall) { // To avoid breaking clients that use lots of assertions in one // function, we cannot grow the size of AssertHelper. EXPECT_LE(sizeof(testing::internal::AssertHelper), sizeof(void*)); } // Tests String::EndsWithCaseInsensitive(). TEST(StringTest, EndsWithCaseInsensitive) { EXPECT_TRUE(String::EndsWithCaseInsensitive("foobar", "BAR")); EXPECT_TRUE(String::EndsWithCaseInsensitive("foobaR", "bar")); EXPECT_TRUE(String::EndsWithCaseInsensitive("foobar", "")); EXPECT_TRUE(String::EndsWithCaseInsensitive("", "")); EXPECT_FALSE(String::EndsWithCaseInsensitive("Foobar", "foo")); EXPECT_FALSE(String::EndsWithCaseInsensitive("foobar", "Foo")); EXPECT_FALSE(String::EndsWithCaseInsensitive("", "foo")); } // C++Builder's preprocessor is buggy; it fails to expand macros that // appear in macro parameters after wide char literals. Provide an alias // for NULL as a workaround. 
static const wchar_t* const kNull = NULL; // Tests String::CaseInsensitiveWideCStringEquals TEST(StringTest, CaseInsensitiveWideCStringEquals) { EXPECT_TRUE(String::CaseInsensitiveWideCStringEquals(NULL, NULL)); EXPECT_FALSE(String::CaseInsensitiveWideCStringEquals(kNull, L"")); EXPECT_FALSE(String::CaseInsensitiveWideCStringEquals(L"", kNull)); EXPECT_FALSE(String::CaseInsensitiveWideCStringEquals(kNull, L"foobar")); EXPECT_FALSE(String::CaseInsensitiveWideCStringEquals(L"foobar", kNull)); EXPECT_TRUE(String::CaseInsensitiveWideCStringEquals(L"foobar", L"foobar")); EXPECT_TRUE(String::CaseInsensitiveWideCStringEquals(L"foobar", L"FOOBAR")); EXPECT_TRUE(String::CaseInsensitiveWideCStringEquals(L"FOOBAR", L"foobar")); } #if GTEST_OS_WINDOWS // Tests String::ShowWideCString(). TEST(StringTest, ShowWideCString) { EXPECT_STREQ("(null)", String::ShowWideCString(NULL).c_str()); EXPECT_STREQ("", String::ShowWideCString(L"").c_str()); EXPECT_STREQ("foo", String::ShowWideCString(L"foo").c_str()); } # if GTEST_OS_WINDOWS_MOBILE TEST(StringTest, AnsiAndUtf16Null) { EXPECT_EQ(NULL, String::AnsiToUtf16(NULL)); EXPECT_EQ(NULL, String::Utf16ToAnsi(NULL)); } TEST(StringTest, AnsiAndUtf16ConvertBasic) { const char* ansi = String::Utf16ToAnsi(L"str"); EXPECT_STREQ("str", ansi); delete [] ansi; const WCHAR* utf16 = String::AnsiToUtf16("str"); EXPECT_EQ(0, wcsncmp(L"str", utf16, 3)); delete [] utf16; } TEST(StringTest, AnsiAndUtf16ConvertPathChars) { const char* ansi = String::Utf16ToAnsi(L".:\\ \"*?"); EXPECT_STREQ(".:\\ \"*?", ansi); delete [] ansi; const WCHAR* utf16 = String::AnsiToUtf16(".:\\ \"*?"); EXPECT_EQ(0, wcsncmp(L".:\\ \"*?", utf16, 3)); delete [] utf16; } # endif // GTEST_OS_WINDOWS_MOBILE #endif // GTEST_OS_WINDOWS // Tests TestProperty construction. TEST(TestPropertyTest, StringValue) { TestProperty property("key", "1"); EXPECT_STREQ("key", property.key()); EXPECT_STREQ("1", property.value()); } // Tests TestProperty replacing a value. TEST(TestPropertyTest, ReplaceStringValue) { TestProperty property("key", "1"); EXPECT_STREQ("1", property.value()); property.SetValue("2"); EXPECT_STREQ("2", property.value()); } // AddFatalFailure() and AddNonfatalFailure() must be stand-alone // functions (i.e. their definitions cannot be inlined at the call // sites), or C++Builder won't compile the code. static void AddFatalFailure() { FAIL() << "Expected fatal failure."; } static void AddNonfatalFailure() { ADD_FAILURE() << "Expected non-fatal failure."; } class ScopedFakeTestPartResultReporterTest : public Test { public: // Must be public and not protected due to a bug in g++ 3.4.2. enum FailureMode { FATAL_FAILURE, NONFATAL_FAILURE }; static void AddFailure(FailureMode failure) { if (failure == FATAL_FAILURE) { AddFatalFailure(); } else { AddNonfatalFailure(); } } }; // Tests that ScopedFakeTestPartResultReporter intercepts test // failures. TEST_F(ScopedFakeTestPartResultReporterTest, InterceptsTestFailures) { TestPartResultArray results; { ScopedFakeTestPartResultReporter reporter( ScopedFakeTestPartResultReporter::INTERCEPT_ONLY_CURRENT_THREAD, &results); AddFailure(NONFATAL_FAILURE); AddFailure(FATAL_FAILURE); } EXPECT_EQ(2, results.size()); EXPECT_TRUE(results.GetTestPartResult(0).nonfatally_failed()); EXPECT_TRUE(results.GetTestPartResult(1).fatally_failed()); } TEST_F(ScopedFakeTestPartResultReporterTest, DeprecatedConstructor) { TestPartResultArray results; { // Tests, that the deprecated constructor still works. 
ScopedFakeTestPartResultReporter reporter(&results); AddFailure(NONFATAL_FAILURE); } EXPECT_EQ(1, results.size()); } #if GTEST_IS_THREADSAFE class ScopedFakeTestPartResultReporterWithThreadsTest : public ScopedFakeTestPartResultReporterTest { protected: static void AddFailureInOtherThread(FailureMode failure) { ThreadWithParam thread(&AddFailure, failure, NULL); thread.Join(); } }; TEST_F(ScopedFakeTestPartResultReporterWithThreadsTest, InterceptsTestFailuresInAllThreads) { TestPartResultArray results; { ScopedFakeTestPartResultReporter reporter( ScopedFakeTestPartResultReporter::INTERCEPT_ALL_THREADS, &results); AddFailure(NONFATAL_FAILURE); AddFailure(FATAL_FAILURE); AddFailureInOtherThread(NONFATAL_FAILURE); AddFailureInOtherThread(FATAL_FAILURE); } EXPECT_EQ(4, results.size()); EXPECT_TRUE(results.GetTestPartResult(0).nonfatally_failed()); EXPECT_TRUE(results.GetTestPartResult(1).fatally_failed()); EXPECT_TRUE(results.GetTestPartResult(2).nonfatally_failed()); EXPECT_TRUE(results.GetTestPartResult(3).fatally_failed()); } #endif // GTEST_IS_THREADSAFE // Tests EXPECT_FATAL_FAILURE{,ON_ALL_THREADS}. Makes sure that they // work even if the failure is generated in a called function rather than // the current context. typedef ScopedFakeTestPartResultReporterTest ExpectFatalFailureTest; TEST_F(ExpectFatalFailureTest, CatchesFatalFaliure) { EXPECT_FATAL_FAILURE(AddFatalFailure(), "Expected fatal failure."); } #if GTEST_HAS_GLOBAL_STRING TEST_F(ExpectFatalFailureTest, AcceptsStringObject) { EXPECT_FATAL_FAILURE(AddFatalFailure(), ::string("Expected fatal failure.")); } #endif TEST_F(ExpectFatalFailureTest, AcceptsStdStringObject) { EXPECT_FATAL_FAILURE(AddFatalFailure(), ::std::string("Expected fatal failure.")); } TEST_F(ExpectFatalFailureTest, CatchesFatalFailureOnAllThreads) { // We have another test below to verify that the macro catches fatal // failures generated on another thread. EXPECT_FATAL_FAILURE_ON_ALL_THREADS(AddFatalFailure(), "Expected fatal failure."); } #ifdef __BORLANDC__ // Silences warnings: "Condition is always true" # pragma option push -w-ccc #endif // Tests that EXPECT_FATAL_FAILURE() can be used in a non-void // function even when the statement in it contains ASSERT_*. int NonVoidFunction() { EXPECT_FATAL_FAILURE(ASSERT_TRUE(false), ""); EXPECT_FATAL_FAILURE_ON_ALL_THREADS(FAIL(), ""); return 0; } TEST_F(ExpectFatalFailureTest, CanBeUsedInNonVoidFunction) { NonVoidFunction(); } // Tests that EXPECT_FATAL_FAILURE(statement, ...) doesn't abort the // current function even though 'statement' generates a fatal failure. void DoesNotAbortHelper(bool* aborted) { EXPECT_FATAL_FAILURE(ASSERT_TRUE(false), ""); EXPECT_FATAL_FAILURE_ON_ALL_THREADS(FAIL(), ""); *aborted = false; } #ifdef __BORLANDC__ // Restores warnings after previous "#pragma option push" suppressed them. # pragma option pop #endif TEST_F(ExpectFatalFailureTest, DoesNotAbort) { bool aborted = true; DoesNotAbortHelper(&aborted); EXPECT_FALSE(aborted); } // Tests that the EXPECT_FATAL_FAILURE{,_ON_ALL_THREADS} accepts a // statement that contains a macro which expands to code containing an // unprotected comma. static int global_var = 0; #define GTEST_USE_UNPROTECTED_COMMA_ global_var++, global_var++ TEST_F(ExpectFatalFailureTest, AcceptsMacroThatExpandsToUnprotectedComma) { #ifndef __BORLANDC__ // ICE's in C++Builder. 
EXPECT_FATAL_FAILURE({ GTEST_USE_UNPROTECTED_COMMA_; AddFatalFailure(); }, ""); #endif EXPECT_FATAL_FAILURE_ON_ALL_THREADS({ GTEST_USE_UNPROTECTED_COMMA_; AddFatalFailure(); }, ""); } // Tests EXPECT_NONFATAL_FAILURE{,ON_ALL_THREADS}. typedef ScopedFakeTestPartResultReporterTest ExpectNonfatalFailureTest; TEST_F(ExpectNonfatalFailureTest, CatchesNonfatalFailure) { EXPECT_NONFATAL_FAILURE(AddNonfatalFailure(), "Expected non-fatal failure."); } #if GTEST_HAS_GLOBAL_STRING TEST_F(ExpectNonfatalFailureTest, AcceptsStringObject) { EXPECT_NONFATAL_FAILURE(AddNonfatalFailure(), ::string("Expected non-fatal failure.")); } #endif TEST_F(ExpectNonfatalFailureTest, AcceptsStdStringObject) { EXPECT_NONFATAL_FAILURE(AddNonfatalFailure(), ::std::string("Expected non-fatal failure.")); } TEST_F(ExpectNonfatalFailureTest, CatchesNonfatalFailureOnAllThreads) { // We have another test below to verify that the macro catches // non-fatal failures generated on another thread. EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS(AddNonfatalFailure(), "Expected non-fatal failure."); } // Tests that the EXPECT_NONFATAL_FAILURE{,_ON_ALL_THREADS} accepts a // statement that contains a macro which expands to code containing an // unprotected comma. TEST_F(ExpectNonfatalFailureTest, AcceptsMacroThatExpandsToUnprotectedComma) { EXPECT_NONFATAL_FAILURE({ GTEST_USE_UNPROTECTED_COMMA_; AddNonfatalFailure(); }, ""); EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS({ GTEST_USE_UNPROTECTED_COMMA_; AddNonfatalFailure(); }, ""); } #if GTEST_IS_THREADSAFE typedef ScopedFakeTestPartResultReporterWithThreadsTest ExpectFailureWithThreadsTest; TEST_F(ExpectFailureWithThreadsTest, ExpectFatalFailureOnAllThreads) { EXPECT_FATAL_FAILURE_ON_ALL_THREADS(AddFailureInOtherThread(FATAL_FAILURE), "Expected fatal failure."); } TEST_F(ExpectFailureWithThreadsTest, ExpectNonFatalFailureOnAllThreads) { EXPECT_NONFATAL_FAILURE_ON_ALL_THREADS( AddFailureInOtherThread(NONFATAL_FAILURE), "Expected non-fatal failure."); } #endif // GTEST_IS_THREADSAFE // Tests the TestProperty class. TEST(TestPropertyTest, ConstructorWorks) { const TestProperty property("key", "value"); EXPECT_STREQ("key", property.key()); EXPECT_STREQ("value", property.value()); } TEST(TestPropertyTest, SetValue) { TestProperty property("key", "value_1"); EXPECT_STREQ("key", property.key()); property.SetValue("value_2"); EXPECT_STREQ("key", property.key()); EXPECT_STREQ("value_2", property.value()); } // Tests the TestResult class // The test fixture for testing TestResult. class TestResultTest : public Test { protected: typedef std::vector TPRVector; // We make use of 2 TestPartResult objects, TestPartResult * pr1, * pr2; // ... and 3 TestResult objects. TestResult * r0, * r1, * r2; virtual void SetUp() { // pr1 is for success. pr1 = new TestPartResult(TestPartResult::kSuccess, "foo/bar.cc", 10, "Success!"); // pr2 is for fatal failure. pr2 = new TestPartResult(TestPartResult::kFatalFailure, "foo/bar.cc", -1, // This line number means "unknown" "Failure!"); // Creates the TestResult objects. r0 = new TestResult(); r1 = new TestResult(); r2 = new TestResult(); // In order to test TestResult, we need to modify its internal // state, in particular the TestPartResult vector it holds. // test_part_results() returns a const reference to this vector. // We cast it to a non-const object s.t. it can be modified (yes, // this is a hack). 
TPRVector* results1 = const_cast( &TestResultAccessor::test_part_results(*r1)); TPRVector* results2 = const_cast( &TestResultAccessor::test_part_results(*r2)); // r0 is an empty TestResult. // r1 contains a single SUCCESS TestPartResult. results1->push_back(*pr1); // r2 contains a SUCCESS, and a FAILURE. results2->push_back(*pr1); results2->push_back(*pr2); } virtual void TearDown() { delete pr1; delete pr2; delete r0; delete r1; delete r2; } // Helper that compares two two TestPartResults. static void CompareTestPartResult(const TestPartResult& expected, const TestPartResult& actual) { EXPECT_EQ(expected.type(), actual.type()); EXPECT_STREQ(expected.file_name(), actual.file_name()); EXPECT_EQ(expected.line_number(), actual.line_number()); EXPECT_STREQ(expected.summary(), actual.summary()); EXPECT_STREQ(expected.message(), actual.message()); EXPECT_EQ(expected.passed(), actual.passed()); EXPECT_EQ(expected.failed(), actual.failed()); EXPECT_EQ(expected.nonfatally_failed(), actual.nonfatally_failed()); EXPECT_EQ(expected.fatally_failed(), actual.fatally_failed()); } }; // Tests TestResult::total_part_count(). TEST_F(TestResultTest, total_part_count) { ASSERT_EQ(0, r0->total_part_count()); ASSERT_EQ(1, r1->total_part_count()); ASSERT_EQ(2, r2->total_part_count()); } // Tests TestResult::Passed(). TEST_F(TestResultTest, Passed) { ASSERT_TRUE(r0->Passed()); ASSERT_TRUE(r1->Passed()); ASSERT_FALSE(r2->Passed()); } // Tests TestResult::Failed(). TEST_F(TestResultTest, Failed) { ASSERT_FALSE(r0->Failed()); ASSERT_FALSE(r1->Failed()); ASSERT_TRUE(r2->Failed()); } // Tests TestResult::GetTestPartResult(). typedef TestResultTest TestResultDeathTest; TEST_F(TestResultDeathTest, GetTestPartResult) { CompareTestPartResult(*pr1, r2->GetTestPartResult(0)); CompareTestPartResult(*pr2, r2->GetTestPartResult(1)); EXPECT_DEATH_IF_SUPPORTED(r2->GetTestPartResult(2), ""); EXPECT_DEATH_IF_SUPPORTED(r2->GetTestPartResult(-1), ""); } // Tests TestResult has no properties when none are added. TEST(TestResultPropertyTest, NoPropertiesFoundWhenNoneAreAdded) { TestResult test_result; ASSERT_EQ(0, test_result.test_property_count()); } // Tests TestResult has the expected property when added. TEST(TestResultPropertyTest, OnePropertyFoundWhenAdded) { TestResult test_result; TestProperty property("key_1", "1"); TestResultAccessor::RecordProperty(&test_result, "testcase", property); ASSERT_EQ(1, test_result.test_property_count()); const TestProperty& actual_property = test_result.GetTestProperty(0); EXPECT_STREQ("key_1", actual_property.key()); EXPECT_STREQ("1", actual_property.value()); } // Tests TestResult has multiple properties when added. TEST(TestResultPropertyTest, MultiplePropertiesFoundWhenAdded) { TestResult test_result; TestProperty property_1("key_1", "1"); TestProperty property_2("key_2", "2"); TestResultAccessor::RecordProperty(&test_result, "testcase", property_1); TestResultAccessor::RecordProperty(&test_result, "testcase", property_2); ASSERT_EQ(2, test_result.test_property_count()); const TestProperty& actual_property_1 = test_result.GetTestProperty(0); EXPECT_STREQ("key_1", actual_property_1.key()); EXPECT_STREQ("1", actual_property_1.value()); const TestProperty& actual_property_2 = test_result.GetTestProperty(1); EXPECT_STREQ("key_2", actual_property_2.key()); EXPECT_STREQ("2", actual_property_2.value()); } // Tests TestResult::RecordProperty() overrides values for duplicate keys. 
TEST(TestResultPropertyTest, OverridesValuesForDuplicateKeys) { TestResult test_result; TestProperty property_1_1("key_1", "1"); TestProperty property_2_1("key_2", "2"); TestProperty property_1_2("key_1", "12"); TestProperty property_2_2("key_2", "22"); TestResultAccessor::RecordProperty(&test_result, "testcase", property_1_1); TestResultAccessor::RecordProperty(&test_result, "testcase", property_2_1); TestResultAccessor::RecordProperty(&test_result, "testcase", property_1_2); TestResultAccessor::RecordProperty(&test_result, "testcase", property_2_2); ASSERT_EQ(2, test_result.test_property_count()); const TestProperty& actual_property_1 = test_result.GetTestProperty(0); EXPECT_STREQ("key_1", actual_property_1.key()); EXPECT_STREQ("12", actual_property_1.value()); const TestProperty& actual_property_2 = test_result.GetTestProperty(1); EXPECT_STREQ("key_2", actual_property_2.key()); EXPECT_STREQ("22", actual_property_2.value()); } // Tests TestResult::GetTestProperty(). TEST(TestResultPropertyTest, GetTestProperty) { TestResult test_result; TestProperty property_1("key_1", "1"); TestProperty property_2("key_2", "2"); TestProperty property_3("key_3", "3"); TestResultAccessor::RecordProperty(&test_result, "testcase", property_1); TestResultAccessor::RecordProperty(&test_result, "testcase", property_2); TestResultAccessor::RecordProperty(&test_result, "testcase", property_3); const TestProperty& fetched_property_1 = test_result.GetTestProperty(0); const TestProperty& fetched_property_2 = test_result.GetTestProperty(1); const TestProperty& fetched_property_3 = test_result.GetTestProperty(2); EXPECT_STREQ("key_1", fetched_property_1.key()); EXPECT_STREQ("1", fetched_property_1.value()); EXPECT_STREQ("key_2", fetched_property_2.key()); EXPECT_STREQ("2", fetched_property_2.value()); EXPECT_STREQ("key_3", fetched_property_3.key()); EXPECT_STREQ("3", fetched_property_3.value()); EXPECT_DEATH_IF_SUPPORTED(test_result.GetTestProperty(3), ""); EXPECT_DEATH_IF_SUPPORTED(test_result.GetTestProperty(-1), ""); } // Tests that GTestFlagSaver works on Windows and Mac. class GTestFlagSaverTest : public Test { protected: // Saves the Google Test flags such that we can restore them later, and // then sets them to their default values. This will be called // before the first test in this test case is run. static void SetUpTestCase() { saver_ = new GTestFlagSaver; GTEST_FLAG(also_run_disabled_tests) = false; GTEST_FLAG(break_on_failure) = false; GTEST_FLAG(catch_exceptions) = false; GTEST_FLAG(death_test_use_fork) = false; GTEST_FLAG(color) = "auto"; GTEST_FLAG(filter) = ""; GTEST_FLAG(list_tests) = false; GTEST_FLAG(output) = ""; GTEST_FLAG(print_time) = true; GTEST_FLAG(random_seed) = 0; GTEST_FLAG(repeat) = 1; GTEST_FLAG(shuffle) = false; GTEST_FLAG(stack_trace_depth) = kMaxStackTraceDepth; GTEST_FLAG(stream_result_to) = ""; GTEST_FLAG(throw_on_failure) = false; } // Restores the Google Test flags that the tests have modified. This will // be called after the last test in this test case is run. static void TearDownTestCase() { delete saver_; saver_ = NULL; } // Verifies that the Google Test flags have their default values, and then // modifies each of them. 
  void VerifyAndModifyFlags() {
    EXPECT_FALSE(GTEST_FLAG(also_run_disabled_tests));
    EXPECT_FALSE(GTEST_FLAG(break_on_failure));
    EXPECT_FALSE(GTEST_FLAG(catch_exceptions));
    EXPECT_STREQ("auto", GTEST_FLAG(color).c_str());
    EXPECT_FALSE(GTEST_FLAG(death_test_use_fork));
    EXPECT_STREQ("", GTEST_FLAG(filter).c_str());
    EXPECT_FALSE(GTEST_FLAG(list_tests));
    EXPECT_STREQ("", GTEST_FLAG(output).c_str());
    EXPECT_TRUE(GTEST_FLAG(print_time));
    EXPECT_EQ(0, GTEST_FLAG(random_seed));
    EXPECT_EQ(1, GTEST_FLAG(repeat));
    EXPECT_FALSE(GTEST_FLAG(shuffle));
    EXPECT_EQ(kMaxStackTraceDepth, GTEST_FLAG(stack_trace_depth));
    EXPECT_STREQ("", GTEST_FLAG(stream_result_to).c_str());
    EXPECT_FALSE(GTEST_FLAG(throw_on_failure));

    GTEST_FLAG(also_run_disabled_tests) = true;
    GTEST_FLAG(break_on_failure) = true;
    GTEST_FLAG(catch_exceptions) = true;
    GTEST_FLAG(color) = "no";
    GTEST_FLAG(death_test_use_fork) = true;
    GTEST_FLAG(filter) = "abc";
    GTEST_FLAG(list_tests) = true;
    GTEST_FLAG(output) = "xml:foo.xml";
    GTEST_FLAG(print_time) = false;
    GTEST_FLAG(random_seed) = 1;
    GTEST_FLAG(repeat) = 100;
    GTEST_FLAG(shuffle) = true;
    GTEST_FLAG(stack_trace_depth) = 1;
    GTEST_FLAG(stream_result_to) = "localhost:1234";
    GTEST_FLAG(throw_on_failure) = true;
  }

 private:
  // For saving Google Test flags during this test case.
  static GTestFlagSaver* saver_;
};

GTestFlagSaver* GTestFlagSaverTest::saver_ = NULL;

// Google Test doesn't guarantee the order of tests.  The following two
// tests are designed to work regardless of their order.

// Modifies the Google Test flags in the test body.
TEST_F(GTestFlagSaverTest, ModifyGTestFlags) {
  VerifyAndModifyFlags();
}

// Verifies that the Google Test flags in the body of the previous test were
// restored to their original values.
TEST_F(GTestFlagSaverTest, VerifyGTestFlags) {
  VerifyAndModifyFlags();
}

// Sets an environment variable with the given name to the given
// value.  If the value argument is "", unsets the environment
// variable.  The caller must ensure that both arguments are not NULL.
static void SetEnv(const char* name, const char* value) {
#if GTEST_OS_WINDOWS_MOBILE
  // Environment variables are not supported on Windows CE.
  return;
#elif defined(__BORLANDC__) || defined(__SunOS_5_8) || defined(__SunOS_5_9)
  // C++Builder's putenv only stores a pointer to its parameter; we have to
  // ensure that the string remains valid as long as it might be needed.
  // We use an std::map to do so.
  static std::map<std::string, std::string*> added_env;

  // Because putenv stores a pointer to the string buffer, we can't delete the
  // previous string (if present) until after it's replaced.
  std::string *prev_env = NULL;
  if (added_env.find(name) != added_env.end()) {
    prev_env = added_env[name];
  }
  added_env[name] = new std::string(
      (Message() << name << "=" << value).GetString());

  // The standard signature of putenv accepts a 'char*' argument.  Other
  // implementations, like C++Builder's, accept a 'const char*'.
  // We cast away the 'const' since that would work for both variants.
  putenv(const_cast<char*>(added_env[name]->c_str()));
  delete prev_env;
#elif GTEST_OS_WINDOWS  // If we are on Windows proper.
  _putenv((Message() << name << "=" << value).GetString().c_str());
#else
  if (*value == '\0') {
    unsetenv(name);
  } else {
    setenv(name, value, 1);
  }
#endif  // GTEST_OS_WINDOWS_MOBILE
}

#if !GTEST_OS_WINDOWS_MOBILE
// Environment variables are not supported on Windows CE.

using testing::internal::Int32FromGTestEnv;

// Tests Int32FromGTestEnv().

// Tests that Int32FromGTestEnv() returns the default value when the
// environment variable is not set.
TEST(Int32FromGTestEnvTest, ReturnsDefaultWhenVariableIsNotSet) { SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", ""); EXPECT_EQ(10, Int32FromGTestEnv("temp", 10)); } // Tests that Int32FromGTestEnv() returns the default value when the // environment variable overflows as an Int32. TEST(Int32FromGTestEnvTest, ReturnsDefaultWhenValueOverflows) { printf("(expecting 2 warnings)\n"); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "12345678987654321"); EXPECT_EQ(20, Int32FromGTestEnv("temp", 20)); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "-12345678987654321"); EXPECT_EQ(30, Int32FromGTestEnv("temp", 30)); } // Tests that Int32FromGTestEnv() returns the default value when the // environment variable does not represent a valid decimal integer. TEST(Int32FromGTestEnvTest, ReturnsDefaultWhenValueIsInvalid) { printf("(expecting 2 warnings)\n"); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "A1"); EXPECT_EQ(40, Int32FromGTestEnv("temp", 40)); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "12X"); EXPECT_EQ(50, Int32FromGTestEnv("temp", 50)); } // Tests that Int32FromGTestEnv() parses and returns the value of the // environment variable when it represents a valid decimal integer in // the range of an Int32. TEST(Int32FromGTestEnvTest, ParsesAndReturnsValidValue) { SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "123"); EXPECT_EQ(123, Int32FromGTestEnv("temp", 0)); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "TEMP", "-321"); EXPECT_EQ(-321, Int32FromGTestEnv("temp", 0)); } #endif // !GTEST_OS_WINDOWS_MOBILE // Tests ParseInt32Flag(). // Tests that ParseInt32Flag() returns false and doesn't change the // output value when the flag has wrong format TEST(ParseInt32FlagTest, ReturnsFalseForInvalidFlag) { Int32 value = 123; EXPECT_FALSE(ParseInt32Flag("--a=100", "b", &value)); EXPECT_EQ(123, value); EXPECT_FALSE(ParseInt32Flag("a=100", "a", &value)); EXPECT_EQ(123, value); } // Tests that ParseInt32Flag() returns false and doesn't change the // output value when the flag overflows as an Int32. TEST(ParseInt32FlagTest, ReturnsDefaultWhenValueOverflows) { printf("(expecting 2 warnings)\n"); Int32 value = 123; EXPECT_FALSE(ParseInt32Flag("--abc=12345678987654321", "abc", &value)); EXPECT_EQ(123, value); EXPECT_FALSE(ParseInt32Flag("--abc=-12345678987654321", "abc", &value)); EXPECT_EQ(123, value); } // Tests that ParseInt32Flag() returns false and doesn't change the // output value when the flag does not represent a valid decimal // integer. TEST(ParseInt32FlagTest, ReturnsDefaultWhenValueIsInvalid) { printf("(expecting 2 warnings)\n"); Int32 value = 123; EXPECT_FALSE(ParseInt32Flag("--abc=A1", "abc", &value)); EXPECT_EQ(123, value); EXPECT_FALSE(ParseInt32Flag("--abc=12X", "abc", &value)); EXPECT_EQ(123, value); } // Tests that ParseInt32Flag() parses the value of the flag and // returns true when the flag represents a valid decimal integer in // the range of an Int32. TEST(ParseInt32FlagTest, ParsesAndReturnsValidValue) { Int32 value = 123; EXPECT_TRUE(ParseInt32Flag("--" GTEST_FLAG_PREFIX_ "abc=456", "abc", &value)); EXPECT_EQ(456, value); EXPECT_TRUE(ParseInt32Flag("--" GTEST_FLAG_PREFIX_ "abc=-789", "abc", &value)); EXPECT_EQ(-789, value); } // Tests that Int32FromEnvOrDie() parses the value of the var or // returns the correct default. // Environment variables are not supported on Windows CE. 
#if !GTEST_OS_WINDOWS_MOBILE TEST(Int32FromEnvOrDieTest, ParsesAndReturnsValidValue) { EXPECT_EQ(333, Int32FromEnvOrDie(GTEST_FLAG_PREFIX_UPPER_ "UnsetVar", 333)); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "UnsetVar", "123"); EXPECT_EQ(123, Int32FromEnvOrDie(GTEST_FLAG_PREFIX_UPPER_ "UnsetVar", 333)); SetEnv(GTEST_FLAG_PREFIX_UPPER_ "UnsetVar", "-123"); EXPECT_EQ(-123, Int32FromEnvOrDie(GTEST_FLAG_PREFIX_UPPER_ "UnsetVar", 333)); } #endif // !GTEST_OS_WINDOWS_MOBILE // Tests that Int32FromEnvOrDie() aborts with an error message // if the variable is not an Int32. TEST(Int32FromEnvOrDieDeathTest, AbortsOnFailure) { SetEnv(GTEST_FLAG_PREFIX_UPPER_ "VAR", "xxx"); EXPECT_DEATH_IF_SUPPORTED( Int32FromEnvOrDie(GTEST_FLAG_PREFIX_UPPER_ "VAR", 123), ".*"); } // Tests that Int32FromEnvOrDie() aborts with an error message // if the variable cannot be represnted by an Int32. TEST(Int32FromEnvOrDieDeathTest, AbortsOnInt32Overflow) { SetEnv(GTEST_FLAG_PREFIX_UPPER_ "VAR", "1234567891234567891234"); EXPECT_DEATH_IF_SUPPORTED( Int32FromEnvOrDie(GTEST_FLAG_PREFIX_UPPER_ "VAR", 123), ".*"); } // Tests that ShouldRunTestOnShard() selects all tests // where there is 1 shard. TEST(ShouldRunTestOnShardTest, IsPartitionWhenThereIsOneShard) { EXPECT_TRUE(ShouldRunTestOnShard(1, 0, 0)); EXPECT_TRUE(ShouldRunTestOnShard(1, 0, 1)); EXPECT_TRUE(ShouldRunTestOnShard(1, 0, 2)); EXPECT_TRUE(ShouldRunTestOnShard(1, 0, 3)); EXPECT_TRUE(ShouldRunTestOnShard(1, 0, 4)); } class ShouldShardTest : public testing::Test { protected: virtual void SetUp() { index_var_ = GTEST_FLAG_PREFIX_UPPER_ "INDEX"; total_var_ = GTEST_FLAG_PREFIX_UPPER_ "TOTAL"; } virtual void TearDown() { SetEnv(index_var_, ""); SetEnv(total_var_, ""); } const char* index_var_; const char* total_var_; }; // Tests that sharding is disabled if neither of the environment variables // are set. TEST_F(ShouldShardTest, ReturnsFalseWhenNeitherEnvVarIsSet) { SetEnv(index_var_, ""); SetEnv(total_var_, ""); EXPECT_FALSE(ShouldShard(total_var_, index_var_, false)); EXPECT_FALSE(ShouldShard(total_var_, index_var_, true)); } // Tests that sharding is not enabled if total_shards == 1. TEST_F(ShouldShardTest, ReturnsFalseWhenTotalShardIsOne) { SetEnv(index_var_, "0"); SetEnv(total_var_, "1"); EXPECT_FALSE(ShouldShard(total_var_, index_var_, false)); EXPECT_FALSE(ShouldShard(total_var_, index_var_, true)); } // Tests that sharding is enabled if total_shards > 1 and // we are not in a death test subprocess. // Environment variables are not supported on Windows CE. #if !GTEST_OS_WINDOWS_MOBILE TEST_F(ShouldShardTest, WorksWhenShardEnvVarsAreValid) { SetEnv(index_var_, "4"); SetEnv(total_var_, "22"); EXPECT_TRUE(ShouldShard(total_var_, index_var_, false)); EXPECT_FALSE(ShouldShard(total_var_, index_var_, true)); SetEnv(index_var_, "8"); SetEnv(total_var_, "9"); EXPECT_TRUE(ShouldShard(total_var_, index_var_, false)); EXPECT_FALSE(ShouldShard(total_var_, index_var_, true)); SetEnv(index_var_, "0"); SetEnv(total_var_, "9"); EXPECT_TRUE(ShouldShard(total_var_, index_var_, false)); EXPECT_FALSE(ShouldShard(total_var_, index_var_, true)); } #endif // !GTEST_OS_WINDOWS_MOBILE // Tests that we exit in error if the sharding values are not valid. 
typedef ShouldShardTest ShouldShardDeathTest; TEST_F(ShouldShardDeathTest, AbortsWhenShardingEnvVarsAreInvalid) { SetEnv(index_var_, "4"); SetEnv(total_var_, "4"); EXPECT_DEATH_IF_SUPPORTED(ShouldShard(total_var_, index_var_, false), ".*"); SetEnv(index_var_, "4"); SetEnv(total_var_, "-2"); EXPECT_DEATH_IF_SUPPORTED(ShouldShard(total_var_, index_var_, false), ".*"); SetEnv(index_var_, "5"); SetEnv(total_var_, ""); EXPECT_DEATH_IF_SUPPORTED(ShouldShard(total_var_, index_var_, false), ".*"); SetEnv(index_var_, ""); SetEnv(total_var_, "5"); EXPECT_DEATH_IF_SUPPORTED(ShouldShard(total_var_, index_var_, false), ".*"); } // Tests that ShouldRunTestOnShard is a partition when 5 // shards are used. TEST(ShouldRunTestOnShardTest, IsPartitionWhenThereAreFiveShards) { // Choose an arbitrary number of tests and shards. const int num_tests = 17; const int num_shards = 5; // Check partitioning: each test should be on exactly 1 shard. for (int test_id = 0; test_id < num_tests; test_id++) { int prev_selected_shard_index = -1; for (int shard_index = 0; shard_index < num_shards; shard_index++) { if (ShouldRunTestOnShard(num_shards, shard_index, test_id)) { if (prev_selected_shard_index < 0) { prev_selected_shard_index = shard_index; } else { ADD_FAILURE() << "Shard " << prev_selected_shard_index << " and " << shard_index << " are both selected to run test " << test_id; } } } } // Check balance: This is not required by the sharding protocol, but is a // desirable property for performance. for (int shard_index = 0; shard_index < num_shards; shard_index++) { int num_tests_on_shard = 0; for (int test_id = 0; test_id < num_tests; test_id++) { num_tests_on_shard += ShouldRunTestOnShard(num_shards, shard_index, test_id); } EXPECT_GE(num_tests_on_shard, num_tests / num_shards); } } // For the same reason we are not explicitly testing everything in the // Test class, there are no separate tests for the following classes // (except for some trivial cases): // // TestCase, UnitTest, UnitTestResultPrinter. // // Similarly, there are no separate tests for the following macros: // // TEST, TEST_F, RUN_ALL_TESTS TEST(UnitTestTest, CanGetOriginalWorkingDir) { ASSERT_TRUE(UnitTest::GetInstance()->original_working_dir() != NULL); EXPECT_STRNE(UnitTest::GetInstance()->original_working_dir(), ""); } TEST(UnitTestTest, ReturnsPlausibleTimestamp) { EXPECT_LT(0, UnitTest::GetInstance()->start_timestamp()); EXPECT_LE(UnitTest::GetInstance()->start_timestamp(), GetTimeInMillis()); } // When a property using a reserved key is supplied to this function, it // tests that a non-fatal failure is added, a fatal failure is not added, // and that the property is not recorded. 
void ExpectNonFatalFailureRecordingPropertyWithReservedKey( const TestResult& test_result, const char* key) { EXPECT_NONFATAL_FAILURE(Test::RecordProperty(key, "1"), "Reserved key"); ASSERT_EQ(0, test_result.test_property_count()) << "Property for key '" << key << "' recorded unexpectedly."; } void ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( const char* key) { const TestInfo* test_info = UnitTest::GetInstance()->current_test_info(); ASSERT_TRUE(test_info != NULL); ExpectNonFatalFailureRecordingPropertyWithReservedKey(*test_info->result(), key); } void ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( const char* key) { const TestCase* test_case = UnitTest::GetInstance()->current_test_case(); ASSERT_TRUE(test_case != NULL); ExpectNonFatalFailureRecordingPropertyWithReservedKey( test_case->ad_hoc_test_result(), key); } void ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( const char* key) { ExpectNonFatalFailureRecordingPropertyWithReservedKey( UnitTest::GetInstance()->ad_hoc_test_result(), key); } // Tests that property recording functions in UnitTest outside of tests // functions correcly. Creating a separate instance of UnitTest ensures it // is in a state similar to the UnitTest's singleton's between tests. class UnitTestRecordPropertyTest : public testing::internal::UnitTestRecordPropertyTestHelper { public: static void SetUpTestCase() { ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "disabled"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "errors"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "failures"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "name"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "tests"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTestCase( "time"); Test::RecordProperty("test_case_key_1", "1"); const TestCase* test_case = UnitTest::GetInstance()->current_test_case(); ASSERT_TRUE(test_case != NULL); ASSERT_EQ(1, test_case->ad_hoc_test_result().test_property_count()); EXPECT_STREQ("test_case_key_1", test_case->ad_hoc_test_result().GetTestProperty(0).key()); EXPECT_STREQ("1", test_case->ad_hoc_test_result().GetTestProperty(0).value()); } }; // Tests TestResult has the expected property when added. TEST_F(UnitTestRecordPropertyTest, OnePropertyFoundWhenAdded) { UnitTestRecordProperty("key_1", "1"); ASSERT_EQ(1, unit_test_.ad_hoc_test_result().test_property_count()); EXPECT_STREQ("key_1", unit_test_.ad_hoc_test_result().GetTestProperty(0).key()); EXPECT_STREQ("1", unit_test_.ad_hoc_test_result().GetTestProperty(0).value()); } // Tests TestResult has multiple properties when added. TEST_F(UnitTestRecordPropertyTest, MultiplePropertiesFoundWhenAdded) { UnitTestRecordProperty("key_1", "1"); UnitTestRecordProperty("key_2", "2"); ASSERT_EQ(2, unit_test_.ad_hoc_test_result().test_property_count()); EXPECT_STREQ("key_1", unit_test_.ad_hoc_test_result().GetTestProperty(0).key()); EXPECT_STREQ("1", unit_test_.ad_hoc_test_result().GetTestProperty(0).value()); EXPECT_STREQ("key_2", unit_test_.ad_hoc_test_result().GetTestProperty(1).key()); EXPECT_STREQ("2", unit_test_.ad_hoc_test_result().GetTestProperty(1).value()); } // Tests TestResult::RecordProperty() overrides values for duplicate keys. 
TEST_F(UnitTestRecordPropertyTest, OverridesValuesForDuplicateKeys) { UnitTestRecordProperty("key_1", "1"); UnitTestRecordProperty("key_2", "2"); UnitTestRecordProperty("key_1", "12"); UnitTestRecordProperty("key_2", "22"); ASSERT_EQ(2, unit_test_.ad_hoc_test_result().test_property_count()); EXPECT_STREQ("key_1", unit_test_.ad_hoc_test_result().GetTestProperty(0).key()); EXPECT_STREQ("12", unit_test_.ad_hoc_test_result().GetTestProperty(0).value()); EXPECT_STREQ("key_2", unit_test_.ad_hoc_test_result().GetTestProperty(1).key()); EXPECT_STREQ("22", unit_test_.ad_hoc_test_result().GetTestProperty(1).value()); } TEST_F(UnitTestRecordPropertyTest, AddFailureInsideTestsWhenUsingTestCaseReservedKeys) { ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "name"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "value_param"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "type_param"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "status"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "time"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyForCurrentTest( "classname"); } TEST_F(UnitTestRecordPropertyTest, AddRecordWithReservedKeysGeneratesCorrectPropertyList) { EXPECT_NONFATAL_FAILURE( Test::RecordProperty("name", "1"), "'classname', 'name', 'status', 'time', 'type_param', and 'value_param'" " are reserved"); } class UnitTestRecordPropertyTestEnvironment : public Environment { public: virtual void TearDown() { ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "tests"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "failures"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "disabled"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "errors"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "name"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "timestamp"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "time"); ExpectNonFatalFailureRecordingPropertyWithReservedKeyOutsideOfTestCase( "random_seed"); } }; // This will test property recording outside of any test or test case. static Environment* record_property_env = AddGlobalTestEnvironment(new UnitTestRecordPropertyTestEnvironment); // This group of tests is for predicate assertions (ASSERT_PRED*, etc) // of various arities. They do not attempt to be exhaustive. Rather, // view them as smoke tests that can be easily reviewed and verified. // A more complete set of tests for predicate assertions can be found // in gtest_pred_impl_unittest.cc. // First, some predicates and predicate-formatters needed by the tests. // Returns true iff the argument is an even number. bool IsEven(int n) { return (n % 2) == 0; } // A functor that returns true iff the argument is an even number. struct IsEvenFunctor { bool operator()(int n) { return IsEven(n); } }; // A predicate-formatter function that asserts the argument is an even // number. AssertionResult AssertIsEven(const char* expr, int n) { if (IsEven(n)) { return AssertionSuccess(); } Message msg; msg << expr << " evaluates to " << n << ", which is not even."; return AssertionFailure(msg); } // A predicate function that returns AssertionResult for use in // EXPECT/ASSERT_TRUE/FALSE. 
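// (Note: returning AssertionResult rather than bool lets EXPECT_TRUE/FALSE
// include the streamed explanation, e.g. "3 is odd", in the failure text;
// the predicates below exercise that behavior.)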
AssertionResult ResultIsEven(int n) { if (IsEven(n)) return AssertionSuccess() << n << " is even"; else return AssertionFailure() << n << " is odd"; } // A predicate function that returns AssertionResult but gives no // explanation why it succeeds. Needed for testing that // EXPECT/ASSERT_FALSE handles such functions correctly. AssertionResult ResultIsEvenNoExplanation(int n) { if (IsEven(n)) return AssertionSuccess(); else return AssertionFailure() << n << " is odd"; } // A predicate-formatter functor that asserts the argument is an even // number. struct AssertIsEvenFunctor { AssertionResult operator()(const char* expr, int n) { return AssertIsEven(expr, n); } }; // Returns true iff the sum of the arguments is an even number. bool SumIsEven2(int n1, int n2) { return IsEven(n1 + n2); } // A functor that returns true iff the sum of the arguments is an even // number. struct SumIsEven3Functor { bool operator()(int n1, int n2, int n3) { return IsEven(n1 + n2 + n3); } }; // A predicate-formatter function that asserts the sum of the // arguments is an even number. AssertionResult AssertSumIsEven4( const char* e1, const char* e2, const char* e3, const char* e4, int n1, int n2, int n3, int n4) { const int sum = n1 + n2 + n3 + n4; if (IsEven(sum)) { return AssertionSuccess(); } Message msg; msg << e1 << " + " << e2 << " + " << e3 << " + " << e4 << " (" << n1 << " + " << n2 << " + " << n3 << " + " << n4 << ") evaluates to " << sum << ", which is not even."; return AssertionFailure(msg); } // A predicate-formatter functor that asserts the sum of the arguments // is an even number. struct AssertSumIsEven5Functor { AssertionResult operator()( const char* e1, const char* e2, const char* e3, const char* e4, const char* e5, int n1, int n2, int n3, int n4, int n5) { const int sum = n1 + n2 + n3 + n4 + n5; if (IsEven(sum)) { return AssertionSuccess(); } Message msg; msg << e1 << " + " << e2 << " + " << e3 << " + " << e4 << " + " << e5 << " (" << n1 << " + " << n2 << " + " << n3 << " + " << n4 << " + " << n5 << ") evaluates to " << sum << ", which is not even."; return AssertionFailure(msg); } }; // Tests unary predicate assertions. // Tests unary predicate assertions that don't use a custom formatter. TEST(Pred1Test, WithoutFormat) { // Success cases. EXPECT_PRED1(IsEvenFunctor(), 2) << "This failure is UNEXPECTED!"; ASSERT_PRED1(IsEven, 4); // Failure cases. EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED1(IsEven, 5) << "This failure is expected."; }, "This failure is expected."); EXPECT_FATAL_FAILURE(ASSERT_PRED1(IsEvenFunctor(), 5), "evaluates to false"); } // Tests unary predicate assertions that use a custom formatter. TEST(Pred1Test, WithFormat) { // Success cases. EXPECT_PRED_FORMAT1(AssertIsEven, 2); ASSERT_PRED_FORMAT1(AssertIsEvenFunctor(), 4) << "This failure is UNEXPECTED!"; // Failure cases. const int n = 5; EXPECT_NONFATAL_FAILURE(EXPECT_PRED_FORMAT1(AssertIsEvenFunctor(), n), "n evaluates to 5, which is not even."); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(AssertIsEven, 5) << "This failure is expected."; }, "This failure is expected."); } // Tests that unary predicate assertions evaluates their arguments // exactly once. TEST(Pred1Test, SingleEvaluationOnFailure) { // A success case. static int n = 0; EXPECT_PRED1(IsEven, n++); EXPECT_EQ(1, n) << "The argument is not evaluated exactly once."; // A failure case. 
EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT1(AssertIsEvenFunctor(), n++) << "This failure is expected."; }, "This failure is expected."); EXPECT_EQ(2, n) << "The argument is not evaluated exactly once."; } // Tests predicate assertions whose arity is >= 2. // Tests predicate assertions that don't use a custom formatter. TEST(PredTest, WithoutFormat) { // Success cases. ASSERT_PRED2(SumIsEven2, 2, 4) << "This failure is UNEXPECTED!"; EXPECT_PRED3(SumIsEven3Functor(), 4, 6, 8); // Failure cases. const int n1 = 1; const int n2 = 2; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED2(SumIsEven2, n1, n2) << "This failure is expected."; }, "This failure is expected."); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED3(SumIsEven3Functor(), 1, 2, 4); }, "evaluates to false"); } // Tests predicate assertions that use a custom formatter. TEST(PredTest, WithFormat) { // Success cases. ASSERT_PRED_FORMAT4(AssertSumIsEven4, 4, 6, 8, 10) << "This failure is UNEXPECTED!"; EXPECT_PRED_FORMAT5(AssertSumIsEven5Functor(), 2, 4, 6, 8, 10); // Failure cases. const int n1 = 1; const int n2 = 2; const int n3 = 4; const int n4 = 6; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(AssertSumIsEven4, n1, n2, n3, n4); }, "evaluates to 13, which is not even."); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT5(AssertSumIsEven5Functor(), 1, 2, 4, 6, 8) << "This failure is expected."; }, "This failure is expected."); } // Tests that predicate assertions evaluates their arguments // exactly once. TEST(PredTest, SingleEvaluationOnFailure) { // A success case. int n1 = 0; int n2 = 0; EXPECT_PRED2(SumIsEven2, n1++, n2++); EXPECT_EQ(1, n1) << "Argument 1 is not evaluated exactly once."; EXPECT_EQ(1, n2) << "Argument 2 is not evaluated exactly once."; // Another success case. n1 = n2 = 0; int n3 = 0; int n4 = 0; int n5 = 0; ASSERT_PRED_FORMAT5(AssertSumIsEven5Functor(), n1++, n2++, n3++, n4++, n5++) << "This failure is UNEXPECTED!"; EXPECT_EQ(1, n1) << "Argument 1 is not evaluated exactly once."; EXPECT_EQ(1, n2) << "Argument 2 is not evaluated exactly once."; EXPECT_EQ(1, n3) << "Argument 3 is not evaluated exactly once."; EXPECT_EQ(1, n4) << "Argument 4 is not evaluated exactly once."; EXPECT_EQ(1, n5) << "Argument 5 is not evaluated exactly once."; // A failure case. n1 = n2 = n3 = 0; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED3(SumIsEven3Functor(), ++n1, n2++, n3++) << "This failure is expected."; }, "This failure is expected."); EXPECT_EQ(1, n1) << "Argument 1 is not evaluated exactly once."; EXPECT_EQ(1, n2) << "Argument 2 is not evaluated exactly once."; EXPECT_EQ(1, n3) << "Argument 3 is not evaluated exactly once."; // Another failure case. n1 = n2 = n3 = n4 = 0; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT4(AssertSumIsEven4, ++n1, n2++, n3++, n4++); }, "evaluates to 1, which is not even."); EXPECT_EQ(1, n1) << "Argument 1 is not evaluated exactly once."; EXPECT_EQ(1, n2) << "Argument 2 is not evaluated exactly once."; EXPECT_EQ(1, n3) << "Argument 3 is not evaluated exactly once."; EXPECT_EQ(1, n4) << "Argument 4 is not evaluated exactly once."; } // Some helper functions for testing using overloaded/template // functions with ASSERT_PREDn and EXPECT_PREDn. bool IsPositive(double x) { return x > 0; } template bool IsNegative(T x) { return x < 0; } template bool GreaterThan(T1 x1, T2 x2) { return x1 > x2; } // Tests that overloaded functions can be used in *_PRED* as long as // their types are explicitly specified. 
TEST(PredicateAssertionTest, AcceptsOverloadedFunction) { // C++Builder requires C-style casts rather than static_cast. EXPECT_PRED1((bool (*)(int))(IsPositive), 5); // NOLINT ASSERT_PRED1((bool (*)(double))(IsPositive), 6.0); // NOLINT } // Tests that template functions can be used in *_PRED* as long as // their types are explicitly specified. TEST(PredicateAssertionTest, AcceptsTemplateFunction) { EXPECT_PRED1(IsNegative, -5); // Makes sure that we can handle templates with more than one // parameter. ASSERT_PRED2((GreaterThan), 5, 0); } // Some helper functions for testing using overloaded/template // functions with ASSERT_PRED_FORMATn and EXPECT_PRED_FORMATn. AssertionResult IsPositiveFormat(const char* /* expr */, int n) { return n > 0 ? AssertionSuccess() : AssertionFailure(Message() << "Failure"); } AssertionResult IsPositiveFormat(const char* /* expr */, double x) { return x > 0 ? AssertionSuccess() : AssertionFailure(Message() << "Failure"); } template AssertionResult IsNegativeFormat(const char* /* expr */, T x) { return x < 0 ? AssertionSuccess() : AssertionFailure(Message() << "Failure"); } template AssertionResult EqualsFormat(const char* /* expr1 */, const char* /* expr2 */, const T1& x1, const T2& x2) { return x1 == x2 ? AssertionSuccess() : AssertionFailure(Message() << "Failure"); } // Tests that overloaded functions can be used in *_PRED_FORMAT* // without explicitly specifying their types. TEST(PredicateFormatAssertionTest, AcceptsOverloadedFunction) { EXPECT_PRED_FORMAT1(IsPositiveFormat, 5); ASSERT_PRED_FORMAT1(IsPositiveFormat, 6.0); } // Tests that template functions can be used in *_PRED_FORMAT* without // explicitly specifying their types. TEST(PredicateFormatAssertionTest, AcceptsTemplateFunction) { EXPECT_PRED_FORMAT1(IsNegativeFormat, -5); ASSERT_PRED_FORMAT2(EqualsFormat, 3, 3); } // Tests string assertions. // Tests ASSERT_STREQ with non-NULL arguments. TEST(StringAssertionTest, ASSERT_STREQ) { const char * const p1 = "good"; ASSERT_STREQ(p1, p1); // Let p2 have the same content as p1, but be at a different address. const char p2[] = "good"; ASSERT_STREQ(p1, p2); EXPECT_FATAL_FAILURE(ASSERT_STREQ("bad", "good"), "Expected: \"bad\""); } // Tests ASSERT_STREQ with NULL arguments. TEST(StringAssertionTest, ASSERT_STREQ_Null) { ASSERT_STREQ(static_cast(NULL), NULL); EXPECT_FATAL_FAILURE(ASSERT_STREQ(NULL, "non-null"), "non-null"); } // Tests ASSERT_STREQ with NULL arguments. TEST(StringAssertionTest, ASSERT_STREQ_Null2) { EXPECT_FATAL_FAILURE(ASSERT_STREQ("non-null", NULL), "non-null"); } // Tests ASSERT_STRNE. TEST(StringAssertionTest, ASSERT_STRNE) { ASSERT_STRNE("hi", "Hi"); ASSERT_STRNE("Hi", NULL); ASSERT_STRNE(NULL, "Hi"); ASSERT_STRNE("", NULL); ASSERT_STRNE(NULL, ""); ASSERT_STRNE("", "Hi"); ASSERT_STRNE("Hi", ""); EXPECT_FATAL_FAILURE(ASSERT_STRNE("Hi", "Hi"), "\"Hi\" vs \"Hi\""); } // Tests ASSERT_STRCASEEQ. TEST(StringAssertionTest, ASSERT_STRCASEEQ) { ASSERT_STRCASEEQ("hi", "Hi"); ASSERT_STRCASEEQ(static_cast(NULL), NULL); ASSERT_STRCASEEQ("", ""); EXPECT_FATAL_FAILURE(ASSERT_STRCASEEQ("Hi", "hi2"), "(ignoring case)"); } // Tests ASSERT_STRCASENE. TEST(StringAssertionTest, ASSERT_STRCASENE) { ASSERT_STRCASENE("hi1", "Hi2"); ASSERT_STRCASENE("Hi", NULL); ASSERT_STRCASENE(NULL, "Hi"); ASSERT_STRCASENE("", NULL); ASSERT_STRCASENE(NULL, ""); ASSERT_STRCASENE("", "Hi"); ASSERT_STRCASENE("Hi", ""); EXPECT_FATAL_FAILURE(ASSERT_STRCASENE("Hi", "hi"), "(ignoring case)"); } // Tests *_STREQ on wide strings. 
TEST(StringAssertionTest, STREQ_Wide) { // NULL strings. ASSERT_STREQ(static_cast(NULL), NULL); // Empty strings. ASSERT_STREQ(L"", L""); // Non-null vs NULL. EXPECT_NONFATAL_FAILURE(EXPECT_STREQ(L"non-null", NULL), "non-null"); // Equal strings. EXPECT_STREQ(L"Hi", L"Hi"); // Unequal strings. EXPECT_NONFATAL_FAILURE(EXPECT_STREQ(L"abc", L"Abc"), "Abc"); // Strings containing wide characters. EXPECT_NONFATAL_FAILURE(EXPECT_STREQ(L"abc\x8119", L"abc\x8120"), "abc"); // The streaming variation. EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_STREQ(L"abc\x8119", L"abc\x8121") << "Expected failure"; }, "Expected failure"); } // Tests *_STRNE on wide strings. TEST(StringAssertionTest, STRNE_Wide) { // NULL strings. EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_STRNE(static_cast(NULL), NULL); }, ""); // Empty strings. EXPECT_NONFATAL_FAILURE(EXPECT_STRNE(L"", L""), "L\"\""); // Non-null vs NULL. ASSERT_STRNE(L"non-null", NULL); // Equal strings. EXPECT_NONFATAL_FAILURE(EXPECT_STRNE(L"Hi", L"Hi"), "L\"Hi\""); // Unequal strings. EXPECT_STRNE(L"abc", L"Abc"); // Strings containing wide characters. EXPECT_NONFATAL_FAILURE(EXPECT_STRNE(L"abc\x8119", L"abc\x8119"), "abc"); // The streaming variation. ASSERT_STRNE(L"abc\x8119", L"abc\x8120") << "This shouldn't happen"; } // Tests for ::testing::IsSubstring(). // Tests that IsSubstring() returns the correct result when the input // argument type is const char*. TEST(IsSubstringTest, ReturnsCorrectResultForCString) { EXPECT_FALSE(IsSubstring("", "", NULL, "a")); EXPECT_FALSE(IsSubstring("", "", "b", NULL)); EXPECT_FALSE(IsSubstring("", "", "needle", "haystack")); EXPECT_TRUE(IsSubstring("", "", static_cast(NULL), NULL)); EXPECT_TRUE(IsSubstring("", "", "needle", "two needles")); } // Tests that IsSubstring() returns the correct result when the input // argument type is const wchar_t*. TEST(IsSubstringTest, ReturnsCorrectResultForWideCString) { EXPECT_FALSE(IsSubstring("", "", kNull, L"a")); EXPECT_FALSE(IsSubstring("", "", L"b", kNull)); EXPECT_FALSE(IsSubstring("", "", L"needle", L"haystack")); EXPECT_TRUE(IsSubstring("", "", static_cast(NULL), NULL)); EXPECT_TRUE(IsSubstring("", "", L"needle", L"two needles")); } // Tests that IsSubstring() generates the correct message when the input // argument type is const char*. TEST(IsSubstringTest, GeneratesCorrectMessageForCString) { EXPECT_STREQ("Value of: needle_expr\n" " Actual: \"needle\"\n" "Expected: a substring of haystack_expr\n" "Which is: \"haystack\"", IsSubstring("needle_expr", "haystack_expr", "needle", "haystack").failure_message()); } // Tests that IsSubstring returns the correct result when the input // argument type is ::std::string. TEST(IsSubstringTest, ReturnsCorrectResultsForStdString) { EXPECT_TRUE(IsSubstring("", "", std::string("hello"), "ahellob")); EXPECT_FALSE(IsSubstring("", "", "hello", std::string("world"))); } #if GTEST_HAS_STD_WSTRING // Tests that IsSubstring returns the correct result when the input // argument type is ::std::wstring. TEST(IsSubstringTest, ReturnsCorrectResultForStdWstring) { EXPECT_TRUE(IsSubstring("", "", ::std::wstring(L"needle"), L"two needles")); EXPECT_FALSE(IsSubstring("", "", L"needle", ::std::wstring(L"haystack"))); } // Tests that IsSubstring() generates the correct message when the input // argument type is ::std::wstring. 
TEST(IsSubstringTest, GeneratesCorrectMessageForWstring) { EXPECT_STREQ("Value of: needle_expr\n" " Actual: L\"needle\"\n" "Expected: a substring of haystack_expr\n" "Which is: L\"haystack\"", IsSubstring( "needle_expr", "haystack_expr", ::std::wstring(L"needle"), L"haystack").failure_message()); } #endif // GTEST_HAS_STD_WSTRING // Tests for ::testing::IsNotSubstring(). // Tests that IsNotSubstring() returns the correct result when the input // argument type is const char*. TEST(IsNotSubstringTest, ReturnsCorrectResultForCString) { EXPECT_TRUE(IsNotSubstring("", "", "needle", "haystack")); EXPECT_FALSE(IsNotSubstring("", "", "needle", "two needles")); } // Tests that IsNotSubstring() returns the correct result when the input // argument type is const wchar_t*. TEST(IsNotSubstringTest, ReturnsCorrectResultForWideCString) { EXPECT_TRUE(IsNotSubstring("", "", L"needle", L"haystack")); EXPECT_FALSE(IsNotSubstring("", "", L"needle", L"two needles")); } // Tests that IsNotSubstring() generates the correct message when the input // argument type is const wchar_t*. TEST(IsNotSubstringTest, GeneratesCorrectMessageForWideCString) { EXPECT_STREQ("Value of: needle_expr\n" " Actual: L\"needle\"\n" "Expected: not a substring of haystack_expr\n" "Which is: L\"two needles\"", IsNotSubstring( "needle_expr", "haystack_expr", L"needle", L"two needles").failure_message()); } // Tests that IsNotSubstring returns the correct result when the input // argument type is ::std::string. TEST(IsNotSubstringTest, ReturnsCorrectResultsForStdString) { EXPECT_FALSE(IsNotSubstring("", "", std::string("hello"), "ahellob")); EXPECT_TRUE(IsNotSubstring("", "", "hello", std::string("world"))); } // Tests that IsNotSubstring() generates the correct message when the input // argument type is ::std::string. TEST(IsNotSubstringTest, GeneratesCorrectMessageForStdString) { EXPECT_STREQ("Value of: needle_expr\n" " Actual: \"needle\"\n" "Expected: not a substring of haystack_expr\n" "Which is: \"two needles\"", IsNotSubstring( "needle_expr", "haystack_expr", ::std::string("needle"), "two needles").failure_message()); } #if GTEST_HAS_STD_WSTRING // Tests that IsNotSubstring returns the correct result when the input // argument type is ::std::wstring. TEST(IsNotSubstringTest, ReturnsCorrectResultForStdWstring) { EXPECT_FALSE( IsNotSubstring("", "", ::std::wstring(L"needle"), L"two needles")); EXPECT_TRUE(IsNotSubstring("", "", L"needle", ::std::wstring(L"haystack"))); } #endif // GTEST_HAS_STD_WSTRING // Tests floating-point assertions. template class FloatingPointTest : public Test { protected: // Pre-calculated numbers to be used by the tests. struct TestValues { RawType close_to_positive_zero; RawType close_to_negative_zero; RawType further_from_negative_zero; RawType close_to_one; RawType further_from_one; RawType infinity; RawType close_to_infinity; RawType further_from_infinity; RawType nan1; RawType nan2; }; typedef typename testing::internal::FloatingPoint Floating; typedef typename Floating::Bits Bits; virtual void SetUp() { const size_t max_ulps = Floating::kMaxUlps; // The bits that represent 0.0. const Bits zero_bits = Floating(0).bits(); // Makes some numbers close to 0.0. values_.close_to_positive_zero = Floating::ReinterpretBits( zero_bits + max_ulps/2); values_.close_to_negative_zero = -Floating::ReinterpretBits( zero_bits + max_ulps - max_ulps/2); values_.further_from_negative_zero = -Floating::ReinterpretBits( zero_bits + max_ulps + 1 - max_ulps/2); // The bits that represent 1.0. 
const Bits one_bits = Floating(1).bits(); // Makes some numbers close to 1.0. values_.close_to_one = Floating::ReinterpretBits(one_bits + max_ulps); values_.further_from_one = Floating::ReinterpretBits( one_bits + max_ulps + 1); // +infinity. values_.infinity = Floating::Infinity(); // The bits that represent +infinity. const Bits infinity_bits = Floating(values_.infinity).bits(); // Makes some numbers close to infinity. values_.close_to_infinity = Floating::ReinterpretBits( infinity_bits - max_ulps); values_.further_from_infinity = Floating::ReinterpretBits( infinity_bits - max_ulps - 1); // Makes some NAN's. Sets the most significant bit of the fraction so that // our NaN's are quiet; trying to process a signaling NaN would raise an // exception if our environment enables floating point exceptions. values_.nan1 = Floating::ReinterpretBits(Floating::kExponentBitMask | (static_cast(1) << (Floating::kFractionBitCount - 1)) | 1); values_.nan2 = Floating::ReinterpretBits(Floating::kExponentBitMask | (static_cast(1) << (Floating::kFractionBitCount - 1)) | 200); } void TestSize() { EXPECT_EQ(sizeof(RawType), sizeof(Bits)); } static TestValues values_; }; template typename FloatingPointTest::TestValues FloatingPointTest::values_; // Instantiates FloatingPointTest for testing *_FLOAT_EQ. typedef FloatingPointTest FloatTest; // Tests that the size of Float::Bits matches the size of float. TEST_F(FloatTest, Size) { TestSize(); } // Tests comparing with +0 and -0. TEST_F(FloatTest, Zeros) { EXPECT_FLOAT_EQ(0.0, -0.0); EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(-0.0, 1.0), "1.0"); EXPECT_FATAL_FAILURE(ASSERT_FLOAT_EQ(0.0, 1.5), "1.5"); } // Tests comparing numbers close to 0. // // This ensures that *_FLOAT_EQ handles the sign correctly and no // overflow occurs when comparing numbers whose absolute value is very // small. TEST_F(FloatTest, AlmostZeros) { // In C++Builder, names within local classes (such as used by // EXPECT_FATAL_FAILURE) cannot be resolved against static members of the // scoping class. Use a static local alias as a workaround. // We use the assignment syntax since some compilers, like Sun Studio, // don't allow initializing references using construction syntax // (parentheses). static const FloatTest::TestValues& v = this->values_; EXPECT_FLOAT_EQ(0.0, v.close_to_positive_zero); EXPECT_FLOAT_EQ(-0.0, v.close_to_negative_zero); EXPECT_FLOAT_EQ(v.close_to_positive_zero, v.close_to_negative_zero); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_FLOAT_EQ(v.close_to_positive_zero, v.further_from_negative_zero); }, "v.further_from_negative_zero"); } // Tests comparing numbers close to each other. TEST_F(FloatTest, SmallDiff) { EXPECT_FLOAT_EQ(1.0, values_.close_to_one); EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(1.0, values_.further_from_one), "values_.further_from_one"); } // Tests comparing numbers far apart. TEST_F(FloatTest, LargeDiff) { EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(2.5, 3.0), "3.0"); } // Tests comparing with infinity. // // This ensures that no overflow occurs when comparing numbers whose // absolute value is very large. TEST_F(FloatTest, Infinity) { EXPECT_FLOAT_EQ(values_.infinity, values_.close_to_infinity); EXPECT_FLOAT_EQ(-values_.infinity, -values_.close_to_infinity); #if !GTEST_OS_SYMBIAN // Nokia's STLport crashes if we try to output infinity or NaN. EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(values_.infinity, -values_.infinity), "-values_.infinity"); // This is interesting as the representations of infinity and nan1 // are only 1 DLP apart. 
EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(values_.infinity, values_.nan1), "values_.nan1"); #endif // !GTEST_OS_SYMBIAN } // Tests that comparing with NAN always returns false. TEST_F(FloatTest, NaN) { #if !GTEST_OS_SYMBIAN // Nokia's STLport crashes if we try to output infinity or NaN. // In C++Builder, names within local classes (such as used by // EXPECT_FATAL_FAILURE) cannot be resolved against static members of the // scoping class. Use a static local alias as a workaround. // We use the assignment syntax since some compilers, like Sun Studio, // don't allow initializing references using construction syntax // (parentheses). static const FloatTest::TestValues& v = this->values_; EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(v.nan1, v.nan1), "v.nan1"); EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(v.nan1, v.nan2), "v.nan2"); EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(1.0, v.nan1), "v.nan1"); EXPECT_FATAL_FAILURE(ASSERT_FLOAT_EQ(v.nan1, v.infinity), "v.infinity"); #endif // !GTEST_OS_SYMBIAN } // Tests that *_FLOAT_EQ are reflexive. TEST_F(FloatTest, Reflexive) { EXPECT_FLOAT_EQ(0.0, 0.0); EXPECT_FLOAT_EQ(1.0, 1.0); ASSERT_FLOAT_EQ(values_.infinity, values_.infinity); } // Tests that *_FLOAT_EQ are commutative. TEST_F(FloatTest, Commutative) { // We already tested EXPECT_FLOAT_EQ(1.0, values_.close_to_one). EXPECT_FLOAT_EQ(values_.close_to_one, 1.0); // We already tested EXPECT_FLOAT_EQ(1.0, values_.further_from_one). EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(values_.further_from_one, 1.0), "1.0"); } // Tests EXPECT_NEAR. TEST_F(FloatTest, EXPECT_NEAR) { EXPECT_NEAR(-1.0f, -1.1f, 0.2f); EXPECT_NEAR(2.0f, 3.0f, 1.0f); EXPECT_NONFATAL_FAILURE(EXPECT_NEAR(1.0f,1.5f, 0.25f), // NOLINT "The difference between 1.0f and 1.5f is 0.5, " "which exceeds 0.25f"); // To work around a bug in gcc 2.95.0, there is intentionally no // space after the first comma in the previous line. } // Tests ASSERT_NEAR. TEST_F(FloatTest, ASSERT_NEAR) { ASSERT_NEAR(-1.0f, -1.1f, 0.2f); ASSERT_NEAR(2.0f, 3.0f, 1.0f); EXPECT_FATAL_FAILURE(ASSERT_NEAR(1.0f,1.5f, 0.25f), // NOLINT "The difference between 1.0f and 1.5f is 0.5, " "which exceeds 0.25f"); // To work around a bug in gcc 2.95.0, there is intentionally no // space after the first comma in the previous line. } // Tests the cases where FloatLE() should succeed. TEST_F(FloatTest, FloatLESucceeds) { EXPECT_PRED_FORMAT2(FloatLE, 1.0f, 2.0f); // When val1 < val2, ASSERT_PRED_FORMAT2(FloatLE, 1.0f, 1.0f); // val1 == val2, // or when val1 is greater than, but almost equals to, val2. EXPECT_PRED_FORMAT2(FloatLE, values_.close_to_positive_zero, 0.0f); } // Tests the cases where FloatLE() should fail. TEST_F(FloatTest, FloatLEFails) { // When val1 is greater than val2 by a large margin, EXPECT_NONFATAL_FAILURE(EXPECT_PRED_FORMAT2(FloatLE, 2.0f, 1.0f), "(2.0f) <= (1.0f)"); // or by a small yet non-negligible margin, EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(FloatLE, values_.further_from_one, 1.0f); }, "(values_.further_from_one) <= (1.0f)"); #if !GTEST_OS_SYMBIAN && !defined(__BORLANDC__) // Nokia's STLport crashes if we try to output infinity or NaN. // C++Builder gives bad results for ordered comparisons involving NaNs // due to compiler bugs. 
EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(FloatLE, values_.nan1, values_.infinity); }, "(values_.nan1) <= (values_.infinity)"); EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(FloatLE, -values_.infinity, values_.nan1); }, "(-values_.infinity) <= (values_.nan1)"); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(FloatLE, values_.nan1, values_.nan1); }, "(values_.nan1) <= (values_.nan1)"); #endif // !GTEST_OS_SYMBIAN && !defined(__BORLANDC__) } // Instantiates FloatingPointTest for testing *_DOUBLE_EQ. typedef FloatingPointTest DoubleTest; // Tests that the size of Double::Bits matches the size of double. TEST_F(DoubleTest, Size) { TestSize(); } // Tests comparing with +0 and -0. TEST_F(DoubleTest, Zeros) { EXPECT_DOUBLE_EQ(0.0, -0.0); EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(-0.0, 1.0), "1.0"); EXPECT_FATAL_FAILURE(ASSERT_DOUBLE_EQ(0.0, 1.0), "1.0"); } // Tests comparing numbers close to 0. // // This ensures that *_DOUBLE_EQ handles the sign correctly and no // overflow occurs when comparing numbers whose absolute value is very // small. TEST_F(DoubleTest, AlmostZeros) { // In C++Builder, names within local classes (such as used by // EXPECT_FATAL_FAILURE) cannot be resolved against static members of the // scoping class. Use a static local alias as a workaround. // We use the assignment syntax since some compilers, like Sun Studio, // don't allow initializing references using construction syntax // (parentheses). static const DoubleTest::TestValues& v = this->values_; EXPECT_DOUBLE_EQ(0.0, v.close_to_positive_zero); EXPECT_DOUBLE_EQ(-0.0, v.close_to_negative_zero); EXPECT_DOUBLE_EQ(v.close_to_positive_zero, v.close_to_negative_zero); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_DOUBLE_EQ(v.close_to_positive_zero, v.further_from_negative_zero); }, "v.further_from_negative_zero"); } // Tests comparing numbers close to each other. TEST_F(DoubleTest, SmallDiff) { EXPECT_DOUBLE_EQ(1.0, values_.close_to_one); EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(1.0, values_.further_from_one), "values_.further_from_one"); } // Tests comparing numbers far apart. TEST_F(DoubleTest, LargeDiff) { EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(2.0, 3.0), "3.0"); } // Tests comparing with infinity. // // This ensures that no overflow occurs when comparing numbers whose // absolute value is very large. TEST_F(DoubleTest, Infinity) { EXPECT_DOUBLE_EQ(values_.infinity, values_.close_to_infinity); EXPECT_DOUBLE_EQ(-values_.infinity, -values_.close_to_infinity); #if !GTEST_OS_SYMBIAN // Nokia's STLport crashes if we try to output infinity or NaN. EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(values_.infinity, -values_.infinity), "-values_.infinity"); // This is interesting as the representations of infinity_ and nan1_ // are only 1 DLP apart. EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(values_.infinity, values_.nan1), "values_.nan1"); #endif // !GTEST_OS_SYMBIAN } // Tests that comparing with NAN always returns false. TEST_F(DoubleTest, NaN) { #if !GTEST_OS_SYMBIAN // In C++Builder, names within local classes (such as used by // EXPECT_FATAL_FAILURE) cannot be resolved against static members of the // scoping class. Use a static local alias as a workaround. // We use the assignment syntax since some compilers, like Sun Studio, // don't allow initializing references using construction syntax // (parentheses). static const DoubleTest::TestValues& v = this->values_; // Nokia's STLport crashes if we try to output infinity or NaN. 
EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(v.nan1, v.nan1), "v.nan1"); EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(v.nan1, v.nan2), "v.nan2"); EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(1.0, v.nan1), "v.nan1"); EXPECT_FATAL_FAILURE(ASSERT_DOUBLE_EQ(v.nan1, v.infinity), "v.infinity"); #endif // !GTEST_OS_SYMBIAN } // Tests that *_DOUBLE_EQ are reflexive. TEST_F(DoubleTest, Reflexive) { EXPECT_DOUBLE_EQ(0.0, 0.0); EXPECT_DOUBLE_EQ(1.0, 1.0); #if !GTEST_OS_SYMBIAN // Nokia's STLport crashes if we try to output infinity or NaN. ASSERT_DOUBLE_EQ(values_.infinity, values_.infinity); #endif // !GTEST_OS_SYMBIAN } // Tests that *_DOUBLE_EQ are commutative. TEST_F(DoubleTest, Commutative) { // We already tested EXPECT_DOUBLE_EQ(1.0, values_.close_to_one). EXPECT_DOUBLE_EQ(values_.close_to_one, 1.0); // We already tested EXPECT_DOUBLE_EQ(1.0, values_.further_from_one). EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(values_.further_from_one, 1.0), "1.0"); } // Tests EXPECT_NEAR. TEST_F(DoubleTest, EXPECT_NEAR) { EXPECT_NEAR(-1.0, -1.1, 0.2); EXPECT_NEAR(2.0, 3.0, 1.0); EXPECT_NONFATAL_FAILURE(EXPECT_NEAR(1.0, 1.5, 0.25), // NOLINT "The difference between 1.0 and 1.5 is 0.5, " "which exceeds 0.25"); // To work around a bug in gcc 2.95.0, there is intentionally no // space after the first comma in the previous statement. } // Tests ASSERT_NEAR. TEST_F(DoubleTest, ASSERT_NEAR) { ASSERT_NEAR(-1.0, -1.1, 0.2); ASSERT_NEAR(2.0, 3.0, 1.0); EXPECT_FATAL_FAILURE(ASSERT_NEAR(1.0, 1.5, 0.25), // NOLINT "The difference between 1.0 and 1.5 is 0.5, " "which exceeds 0.25"); // To work around a bug in gcc 2.95.0, there is intentionally no // space after the first comma in the previous statement. } // Tests the cases where DoubleLE() should succeed. TEST_F(DoubleTest, DoubleLESucceeds) { EXPECT_PRED_FORMAT2(DoubleLE, 1.0, 2.0); // When val1 < val2, ASSERT_PRED_FORMAT2(DoubleLE, 1.0, 1.0); // val1 == val2, // or when val1 is greater than, but almost equals to, val2. EXPECT_PRED_FORMAT2(DoubleLE, values_.close_to_positive_zero, 0.0); } // Tests the cases where DoubleLE() should fail. TEST_F(DoubleTest, DoubleLEFails) { // When val1 is greater than val2 by a large margin, EXPECT_NONFATAL_FAILURE(EXPECT_PRED_FORMAT2(DoubleLE, 2.0, 1.0), "(2.0) <= (1.0)"); // or by a small yet non-negligible margin, EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(DoubleLE, values_.further_from_one, 1.0); }, "(values_.further_from_one) <= (1.0)"); #if !GTEST_OS_SYMBIAN && !defined(__BORLANDC__) // Nokia's STLport crashes if we try to output infinity or NaN. // C++Builder gives bad results for ordered comparisons involving NaNs // due to compiler bugs. EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(DoubleLE, values_.nan1, values_.infinity); }, "(values_.nan1) <= (values_.infinity)"); EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_PRED_FORMAT2(DoubleLE, -values_.infinity, values_.nan1); }, " (-values_.infinity) <= (values_.nan1)"); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_PRED_FORMAT2(DoubleLE, values_.nan1, values_.nan1); }, "(values_.nan1) <= (values_.nan1)"); #endif // !GTEST_OS_SYMBIAN && !defined(__BORLANDC__) } // Verifies that a test or test case whose name starts with DISABLED_ is // not run. // A test whose name starts with DISABLED_. // Should not run. TEST(DisabledTest, DISABLED_TestShouldNotRun) { FAIL() << "Unexpected failure: Disabled test should not be run."; } // A test whose name does not start with DISABLED_. // Should run. 
TEST(DisabledTest, NotDISABLED_TestShouldRun) {
  EXPECT_EQ(1, 1);
}

// A test case whose name starts with DISABLED_.
// Should not run.
TEST(DISABLED_TestCase, TestShouldNotRun) {
  FAIL() << "Unexpected failure: Test in disabled test case should not be run.";
}

// A test case and test whose names start with DISABLED_.
// Should not run.
TEST(DISABLED_TestCase, DISABLED_TestShouldNotRun) {
  FAIL() << "Unexpected failure: Test in disabled test case should not be run.";
}

// Check that when all tests in a test case are disabled, SetupTestCase() and
// TearDownTestCase() are not called.
class DisabledTestsTest : public Test {
 protected:
  static void SetUpTestCase() {
    FAIL() << "Unexpected failure: All tests disabled in test case. "
              "SetupTestCase() should not be called.";
  }

  static void TearDownTestCase() {
    FAIL() << "Unexpected failure: All tests disabled in test case. "
              "TearDownTestCase() should not be called.";
  }
};

TEST_F(DisabledTestsTest, DISABLED_TestShouldNotRun_1) {
  FAIL() << "Unexpected failure: Disabled test should not be run.";
}

TEST_F(DisabledTestsTest, DISABLED_TestShouldNotRun_2) {
  FAIL() << "Unexpected failure: Disabled test should not be run.";
}

// Tests that disabled typed tests aren't run.

#if GTEST_HAS_TYPED_TEST

template <typename T>
class TypedTest : public Test {
};

typedef testing::Types<int, double> NumericTypes;
TYPED_TEST_CASE(TypedTest, NumericTypes);

TYPED_TEST(TypedTest, DISABLED_ShouldNotRun) {
  FAIL() << "Unexpected failure: Disabled typed test should not run.";
}

template <typename T>
class DISABLED_TypedTest : public Test {
};

TYPED_TEST_CASE(DISABLED_TypedTest, NumericTypes);

TYPED_TEST(DISABLED_TypedTest, ShouldNotRun) {
  FAIL() << "Unexpected failure: Disabled typed test should not run.";
}

#endif  // GTEST_HAS_TYPED_TEST

// Tests that disabled type-parameterized tests aren't run.

#if GTEST_HAS_TYPED_TEST_P

template <typename T>
class TypedTestP : public Test {
};

TYPED_TEST_CASE_P(TypedTestP);

TYPED_TEST_P(TypedTestP, DISABLED_ShouldNotRun) {
  FAIL() << "Unexpected failure: "
         << "Disabled type-parameterized test should not run.";
}

REGISTER_TYPED_TEST_CASE_P(TypedTestP, DISABLED_ShouldNotRun);

INSTANTIATE_TYPED_TEST_CASE_P(My, TypedTestP, NumericTypes);

template <typename T>
class DISABLED_TypedTestP : public Test {
};

TYPED_TEST_CASE_P(DISABLED_TypedTestP);

TYPED_TEST_P(DISABLED_TypedTestP, ShouldNotRun) {
  FAIL() << "Unexpected failure: "
         << "Disabled type-parameterized test should not run.";
}

REGISTER_TYPED_TEST_CASE_P(DISABLED_TypedTestP, ShouldNotRun);

INSTANTIATE_TYPED_TEST_CASE_P(My, DISABLED_TypedTestP, NumericTypes);

#endif  // GTEST_HAS_TYPED_TEST_P

// Tests that assertion macros evaluate their arguments exactly once.

class SingleEvaluationTest : public Test {
 public:  // Must be public and not protected due to a bug in g++ 3.4.2.
  // This helper function is needed by the FailedASSERT_STREQ test
  // below.  It's public to work around C++Builder's bug with scoping local
  // classes.
  static void CompareAndIncrementCharPtrs() {
    ASSERT_STREQ(p1_++, p2_++);
  }

  // This helper function is needed by the FailedASSERT_NE test below.  It's
  // public to work around C++Builder's bug with scoping local classes.
static void CompareAndIncrementInts() { ASSERT_NE(a_++, b_++); } protected: SingleEvaluationTest() { p1_ = s1_; p2_ = s2_; a_ = 0; b_ = 0; } static const char* const s1_; static const char* const s2_; static const char* p1_; static const char* p2_; static int a_; static int b_; }; const char* const SingleEvaluationTest::s1_ = "01234"; const char* const SingleEvaluationTest::s2_ = "abcde"; const char* SingleEvaluationTest::p1_; const char* SingleEvaluationTest::p2_; int SingleEvaluationTest::a_; int SingleEvaluationTest::b_; // Tests that when ASSERT_STREQ fails, it evaluates its arguments // exactly once. TEST_F(SingleEvaluationTest, FailedASSERT_STREQ) { EXPECT_FATAL_FAILURE(SingleEvaluationTest::CompareAndIncrementCharPtrs(), "p2_++"); EXPECT_EQ(s1_ + 1, p1_); EXPECT_EQ(s2_ + 1, p2_); } // Tests that string assertion arguments are evaluated exactly once. TEST_F(SingleEvaluationTest, ASSERT_STR) { // successful EXPECT_STRNE EXPECT_STRNE(p1_++, p2_++); EXPECT_EQ(s1_ + 1, p1_); EXPECT_EQ(s2_ + 1, p2_); // failed EXPECT_STRCASEEQ EXPECT_NONFATAL_FAILURE(EXPECT_STRCASEEQ(p1_++, p2_++), "ignoring case"); EXPECT_EQ(s1_ + 2, p1_); EXPECT_EQ(s2_ + 2, p2_); } // Tests that when ASSERT_NE fails, it evaluates its arguments exactly // once. TEST_F(SingleEvaluationTest, FailedASSERT_NE) { EXPECT_FATAL_FAILURE(SingleEvaluationTest::CompareAndIncrementInts(), "(a_++) != (b_++)"); EXPECT_EQ(1, a_); EXPECT_EQ(1, b_); } // Tests that assertion arguments are evaluated exactly once. TEST_F(SingleEvaluationTest, OtherCases) { // successful EXPECT_TRUE EXPECT_TRUE(0 == a_++); // NOLINT EXPECT_EQ(1, a_); // failed EXPECT_TRUE EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(-1 == a_++), "-1 == a_++"); EXPECT_EQ(2, a_); // successful EXPECT_GT EXPECT_GT(a_++, b_++); EXPECT_EQ(3, a_); EXPECT_EQ(1, b_); // failed EXPECT_LT EXPECT_NONFATAL_FAILURE(EXPECT_LT(a_++, b_++), "(a_++) < (b_++)"); EXPECT_EQ(4, a_); EXPECT_EQ(2, b_); // successful ASSERT_TRUE ASSERT_TRUE(0 < a_++); // NOLINT EXPECT_EQ(5, a_); // successful ASSERT_GT ASSERT_GT(a_++, b_++); EXPECT_EQ(6, a_); EXPECT_EQ(3, b_); } #if GTEST_HAS_EXCEPTIONS void ThrowAnInteger() { throw 1; } // Tests that assertion arguments are evaluated exactly once. TEST_F(SingleEvaluationTest, ExceptionTests) { // successful EXPECT_THROW EXPECT_THROW({ // NOLINT a_++; ThrowAnInteger(); }, int); EXPECT_EQ(1, a_); // failed EXPECT_THROW, throws different EXPECT_NONFATAL_FAILURE(EXPECT_THROW({ // NOLINT a_++; ThrowAnInteger(); }, bool), "throws a different type"); EXPECT_EQ(2, a_); // failed EXPECT_THROW, throws nothing EXPECT_NONFATAL_FAILURE(EXPECT_THROW(a_++, bool), "throws nothing"); EXPECT_EQ(3, a_); // successful EXPECT_NO_THROW EXPECT_NO_THROW(a_++); EXPECT_EQ(4, a_); // failed EXPECT_NO_THROW EXPECT_NONFATAL_FAILURE(EXPECT_NO_THROW({ // NOLINT a_++; ThrowAnInteger(); }), "it throws"); EXPECT_EQ(5, a_); // successful EXPECT_ANY_THROW EXPECT_ANY_THROW({ // NOLINT a_++; ThrowAnInteger(); }); EXPECT_EQ(6, a_); // failed EXPECT_ANY_THROW EXPECT_NONFATAL_FAILURE(EXPECT_ANY_THROW(a_++), "it doesn't"); EXPECT_EQ(7, a_); } #endif // GTEST_HAS_EXCEPTIONS // Tests {ASSERT|EXPECT}_NO_FATAL_FAILURE. 
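// (Note: ASSERT_NO_FATAL_FAILURE(statement) itself fails fatally when the
// statement produces a fatal failure, while EXPECT_NO_FATAL_FAILURE reports
// a non-fatal failure and lets the current function continue; both forms are
// exercised by the fixture below.)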
class NoFatalFailureTest : public Test {
 protected:
  void Succeeds() {}
  void FailsNonFatal() {
    ADD_FAILURE() << "some non-fatal failure";
  }
  void Fails() {
    FAIL() << "some fatal failure";
  }

  void DoAssertNoFatalFailureOnFails() {
    ASSERT_NO_FATAL_FAILURE(Fails());
    ADD_FAILURE() << "should not reach here.";
  }

  void DoExpectNoFatalFailureOnFails() {
    EXPECT_NO_FATAL_FAILURE(Fails());
    ADD_FAILURE() << "other failure";
  }
};

TEST_F(NoFatalFailureTest, NoFailure) {
  EXPECT_NO_FATAL_FAILURE(Succeeds());
  ASSERT_NO_FATAL_FAILURE(Succeeds());
}

TEST_F(NoFatalFailureTest, NonFatalIsNoFailure) {
  EXPECT_NONFATAL_FAILURE(
      EXPECT_NO_FATAL_FAILURE(FailsNonFatal()),
      "some non-fatal failure");
  EXPECT_NONFATAL_FAILURE(
      ASSERT_NO_FATAL_FAILURE(FailsNonFatal()),
      "some non-fatal failure");
}

TEST_F(NoFatalFailureTest, AssertNoFatalFailureOnFatalFailure) {
  TestPartResultArray gtest_failures;
  {
    ScopedFakeTestPartResultReporter gtest_reporter(&gtest_failures);
    DoAssertNoFatalFailureOnFails();
  }
  ASSERT_EQ(2, gtest_failures.size());
  EXPECT_EQ(TestPartResult::kFatalFailure,
            gtest_failures.GetTestPartResult(0).type());
  EXPECT_EQ(TestPartResult::kFatalFailure,
            gtest_failures.GetTestPartResult(1).type());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "some fatal failure",
                      gtest_failures.GetTestPartResult(0).message());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "it does",
                      gtest_failures.GetTestPartResult(1).message());
}

TEST_F(NoFatalFailureTest, ExpectNoFatalFailureOnFatalFailure) {
  TestPartResultArray gtest_failures;
  {
    ScopedFakeTestPartResultReporter gtest_reporter(&gtest_failures);
    DoExpectNoFatalFailureOnFails();
  }
  ASSERT_EQ(3, gtest_failures.size());
  EXPECT_EQ(TestPartResult::kFatalFailure,
            gtest_failures.GetTestPartResult(0).type());
  EXPECT_EQ(TestPartResult::kNonFatalFailure,
            gtest_failures.GetTestPartResult(1).type());
  EXPECT_EQ(TestPartResult::kNonFatalFailure,
            gtest_failures.GetTestPartResult(2).type());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "some fatal failure",
                      gtest_failures.GetTestPartResult(0).message());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "it does",
                      gtest_failures.GetTestPartResult(1).message());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "other failure",
                      gtest_failures.GetTestPartResult(2).message());
}

TEST_F(NoFatalFailureTest, MessageIsStreamable) {
  TestPartResultArray gtest_failures;
  {
    ScopedFakeTestPartResultReporter gtest_reporter(&gtest_failures);
    EXPECT_NO_FATAL_FAILURE(FAIL() << "foo") << "my message";
  }
  ASSERT_EQ(2, gtest_failures.size());
  EXPECT_EQ(TestPartResult::kNonFatalFailure,
            gtest_failures.GetTestPartResult(0).type());
  EXPECT_EQ(TestPartResult::kNonFatalFailure,
            gtest_failures.GetTestPartResult(1).type());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "foo",
                      gtest_failures.GetTestPartResult(0).message());
  EXPECT_PRED_FORMAT2(testing::IsSubstring, "my message",
                      gtest_failures.GetTestPartResult(1).message());
}

// Tests non-string assertions.

// Tests EqFailure(), used for implementing *EQ* assertions.
TEST(AssertionTest, EqFailure) { const std::string foo_val("5"), bar_val("6"); const std::string msg1( EqFailure("foo", "bar", foo_val, bar_val, false) .failure_message()); EXPECT_STREQ( "Value of: bar\n" " Actual: 6\n" "Expected: foo\n" "Which is: 5", msg1.c_str()); const std::string msg2( EqFailure("foo", "6", foo_val, bar_val, false) .failure_message()); EXPECT_STREQ( "Value of: 6\n" "Expected: foo\n" "Which is: 5", msg2.c_str()); const std::string msg3( EqFailure("5", "bar", foo_val, bar_val, false) .failure_message()); EXPECT_STREQ( "Value of: bar\n" " Actual: 6\n" "Expected: 5", msg3.c_str()); const std::string msg4( EqFailure("5", "6", foo_val, bar_val, false).failure_message()); EXPECT_STREQ( "Value of: 6\n" "Expected: 5", msg4.c_str()); const std::string msg5( EqFailure("foo", "bar", std::string("\"x\""), std::string("\"y\""), true).failure_message()); EXPECT_STREQ( "Value of: bar\n" " Actual: \"y\"\n" "Expected: foo (ignoring case)\n" "Which is: \"x\"", msg5.c_str()); } // Tests AppendUserMessage(), used for implementing the *EQ* macros. TEST(AssertionTest, AppendUserMessage) { const std::string foo("foo"); Message msg; EXPECT_STREQ("foo", AppendUserMessage(foo, msg).c_str()); msg << "bar"; EXPECT_STREQ("foo\nbar", AppendUserMessage(foo, msg).c_str()); } #ifdef __BORLANDC__ // Silences warnings: "Condition is always true", "Unreachable code" # pragma option push -w-ccc -w-rch #endif // Tests ASSERT_TRUE. TEST(AssertionTest, ASSERT_TRUE) { ASSERT_TRUE(2 > 1); // NOLINT EXPECT_FATAL_FAILURE(ASSERT_TRUE(2 < 1), "2 < 1"); } // Tests ASSERT_TRUE(predicate) for predicates returning AssertionResult. TEST(AssertionTest, AssertTrueWithAssertionResult) { ASSERT_TRUE(ResultIsEven(2)); #ifndef __BORLANDC__ // ICE's in C++Builder. EXPECT_FATAL_FAILURE(ASSERT_TRUE(ResultIsEven(3)), "Value of: ResultIsEven(3)\n" " Actual: false (3 is odd)\n" "Expected: true"); #endif ASSERT_TRUE(ResultIsEvenNoExplanation(2)); EXPECT_FATAL_FAILURE(ASSERT_TRUE(ResultIsEvenNoExplanation(3)), "Value of: ResultIsEvenNoExplanation(3)\n" " Actual: false (3 is odd)\n" "Expected: true"); } // Tests ASSERT_FALSE. TEST(AssertionTest, ASSERT_FALSE) { ASSERT_FALSE(2 < 1); // NOLINT EXPECT_FATAL_FAILURE(ASSERT_FALSE(2 > 1), "Value of: 2 > 1\n" " Actual: true\n" "Expected: false"); } // Tests ASSERT_FALSE(predicate) for predicates returning AssertionResult. TEST(AssertionTest, AssertFalseWithAssertionResult) { ASSERT_FALSE(ResultIsEven(3)); #ifndef __BORLANDC__ // ICE's in C++Builder. EXPECT_FATAL_FAILURE(ASSERT_FALSE(ResultIsEven(2)), "Value of: ResultIsEven(2)\n" " Actual: true (2 is even)\n" "Expected: false"); #endif ASSERT_FALSE(ResultIsEvenNoExplanation(3)); EXPECT_FATAL_FAILURE(ASSERT_FALSE(ResultIsEvenNoExplanation(2)), "Value of: ResultIsEvenNoExplanation(2)\n" " Actual: true\n" "Expected: false"); } #ifdef __BORLANDC__ // Restores warnings after previous "#pragma option push" supressed them # pragma option pop #endif // Tests using ASSERT_EQ on double values. The purpose is to make // sure that the specialization we did for integer and anonymous enums // isn't used for double arguments. TEST(ExpectTest, ASSERT_EQ_Double) { // A success. ASSERT_EQ(5.6, 5.6); // A failure. EXPECT_FATAL_FAILURE(ASSERT_EQ(5.1, 5.2), "5.1"); } // Tests ASSERT_EQ. TEST(AssertionTest, ASSERT_EQ) { ASSERT_EQ(5, 2 + 3); EXPECT_FATAL_FAILURE(ASSERT_EQ(5, 2*3), "Value of: 2*3\n" " Actual: 6\n" "Expected: 5"); } // Tests ASSERT_EQ(NULL, pointer). #if GTEST_CAN_COMPARE_NULL TEST(AssertionTest, ASSERT_EQ_NULL) { // A success. 
const char* p = NULL; // Some older GCC versions may issue a spurious waring in this or the next // assertion statement. This warning should not be suppressed with // static_cast since the test verifies the ability to use bare NULL as the // expected parameter to the macro. ASSERT_EQ(NULL, p); // A failure. static int n = 0; EXPECT_FATAL_FAILURE(ASSERT_EQ(NULL, &n), "Value of: &n\n"); } #endif // GTEST_CAN_COMPARE_NULL // Tests ASSERT_EQ(0, non_pointer). Since the literal 0 can be // treated as a null pointer by the compiler, we need to make sure // that ASSERT_EQ(0, non_pointer) isn't interpreted by Google Test as // ASSERT_EQ(static_cast(NULL), non_pointer). TEST(ExpectTest, ASSERT_EQ_0) { int n = 0; // A success. ASSERT_EQ(0, n); // A failure. EXPECT_FATAL_FAILURE(ASSERT_EQ(0, 5.6), "Expected: 0"); } // Tests ASSERT_NE. TEST(AssertionTest, ASSERT_NE) { ASSERT_NE(6, 7); EXPECT_FATAL_FAILURE(ASSERT_NE('a', 'a'), "Expected: ('a') != ('a'), " "actual: 'a' (97, 0x61) vs 'a' (97, 0x61)"); } // Tests ASSERT_LE. TEST(AssertionTest, ASSERT_LE) { ASSERT_LE(2, 3); ASSERT_LE(2, 2); EXPECT_FATAL_FAILURE(ASSERT_LE(2, 0), "Expected: (2) <= (0), actual: 2 vs 0"); } // Tests ASSERT_LT. TEST(AssertionTest, ASSERT_LT) { ASSERT_LT(2, 3); EXPECT_FATAL_FAILURE(ASSERT_LT(2, 2), "Expected: (2) < (2), actual: 2 vs 2"); } // Tests ASSERT_GE. TEST(AssertionTest, ASSERT_GE) { ASSERT_GE(2, 1); ASSERT_GE(2, 2); EXPECT_FATAL_FAILURE(ASSERT_GE(2, 3), "Expected: (2) >= (3), actual: 2 vs 3"); } // Tests ASSERT_GT. TEST(AssertionTest, ASSERT_GT) { ASSERT_GT(2, 1); EXPECT_FATAL_FAILURE(ASSERT_GT(2, 2), "Expected: (2) > (2), actual: 2 vs 2"); } #if GTEST_HAS_EXCEPTIONS void ThrowNothing() {} // Tests ASSERT_THROW. TEST(AssertionTest, ASSERT_THROW) { ASSERT_THROW(ThrowAnInteger(), int); # ifndef __BORLANDC__ // ICE's in C++Builder 2007 and 2009. EXPECT_FATAL_FAILURE( ASSERT_THROW(ThrowAnInteger(), bool), "Expected: ThrowAnInteger() throws an exception of type bool.\n" " Actual: it throws a different type."); # endif EXPECT_FATAL_FAILURE( ASSERT_THROW(ThrowNothing(), bool), "Expected: ThrowNothing() throws an exception of type bool.\n" " Actual: it throws nothing."); } // Tests ASSERT_NO_THROW. TEST(AssertionTest, ASSERT_NO_THROW) { ASSERT_NO_THROW(ThrowNothing()); EXPECT_FATAL_FAILURE(ASSERT_NO_THROW(ThrowAnInteger()), "Expected: ThrowAnInteger() doesn't throw an exception." "\n Actual: it throws."); } // Tests ASSERT_ANY_THROW. TEST(AssertionTest, ASSERT_ANY_THROW) { ASSERT_ANY_THROW(ThrowAnInteger()); EXPECT_FATAL_FAILURE( ASSERT_ANY_THROW(ThrowNothing()), "Expected: ThrowNothing() throws an exception.\n" " Actual: it doesn't."); } #endif // GTEST_HAS_EXCEPTIONS // Makes sure we deal with the precedence of <<. This test should // compile. TEST(AssertionTest, AssertPrecedence) { ASSERT_EQ(1 < 2, true); bool false_value = false; ASSERT_EQ(true && false_value, false); } // A subroutine used by the following test. void TestEq1(int x) { ASSERT_EQ(1, x); } // Tests calling a test subroutine that's not part of a fixture. TEST(AssertionTest, NonFixtureSubroutine) { EXPECT_FATAL_FAILURE(TestEq1(2), "Value of: x"); } // An uncopyable class. class Uncopyable { public: explicit Uncopyable(int a_value) : value_(a_value) {} int value() const { return value_; } bool operator==(const Uncopyable& rhs) const { return value() == rhs.value(); } private: // This constructor deliberately has no implementation, as we don't // want this class to be copyable. 
Uncopyable(const Uncopyable&); // NOLINT int value_; }; ::std::ostream& operator<<(::std::ostream& os, const Uncopyable& value) { return os << value.value(); } bool IsPositiveUncopyable(const Uncopyable& x) { return x.value() > 0; } // A subroutine used by the following test. void TestAssertNonPositive() { Uncopyable y(-1); ASSERT_PRED1(IsPositiveUncopyable, y); } // A subroutine used by the following test. void TestAssertEqualsUncopyable() { Uncopyable x(5); Uncopyable y(-1); ASSERT_EQ(x, y); } // Tests that uncopyable objects can be used in assertions. TEST(AssertionTest, AssertWorksWithUncopyableObject) { Uncopyable x(5); ASSERT_PRED1(IsPositiveUncopyable, x); ASSERT_EQ(x, x); EXPECT_FATAL_FAILURE(TestAssertNonPositive(), "IsPositiveUncopyable(y) evaluates to false, where\ny evaluates to -1"); EXPECT_FATAL_FAILURE(TestAssertEqualsUncopyable(), "Value of: y\n Actual: -1\nExpected: x\nWhich is: 5"); } // Tests that uncopyable objects can be used in expects. TEST(AssertionTest, ExpectWorksWithUncopyableObject) { Uncopyable x(5); EXPECT_PRED1(IsPositiveUncopyable, x); Uncopyable y(-1); EXPECT_NONFATAL_FAILURE(EXPECT_PRED1(IsPositiveUncopyable, y), "IsPositiveUncopyable(y) evaluates to false, where\ny evaluates to -1"); EXPECT_EQ(x, x); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(x, y), "Value of: y\n Actual: -1\nExpected: x\nWhich is: 5"); } enum NamedEnum { kE1 = 0, kE2 = 1 }; TEST(AssertionTest, NamedEnum) { EXPECT_EQ(kE1, kE1); EXPECT_LT(kE1, kE2); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(kE1, kE2), "Which is: 0"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(kE1, kE2), "Actual: 1"); } // The version of gcc used in XCode 2.2 has a bug and doesn't allow // anonymous enums in assertions. Therefore the following test is not // done on Mac. // Sun Studio and HP aCC also reject this code. #if !GTEST_OS_MAC && !defined(__SUNPRO_CC) && !defined(__HP_aCC) // Tests using assertions with anonymous enums. enum { kCaseA = -1, # if GTEST_OS_LINUX // We want to test the case where the size of the anonymous enum is // larger than sizeof(int), to make sure our implementation of the // assertions doesn't truncate the enums. However, MSVC // (incorrectly) doesn't allow an enum value to exceed the range of // an int, so this has to be conditionally compiled. // // On Linux, kCaseB and kCaseA have the same value when truncated to // int size. We want to test whether this will confuse the // assertions. kCaseB = testing::internal::kMaxBiggestInt, # else kCaseB = INT_MAX, # endif // GTEST_OS_LINUX kCaseC = 42 }; TEST(AssertionTest, AnonymousEnum) { # if GTEST_OS_LINUX EXPECT_EQ(static_cast(kCaseA), static_cast(kCaseB)); # endif // GTEST_OS_LINUX EXPECT_EQ(kCaseA, kCaseA); EXPECT_NE(kCaseA, kCaseB); EXPECT_LT(kCaseA, kCaseB); EXPECT_LE(kCaseA, kCaseB); EXPECT_GT(kCaseB, kCaseA); EXPECT_GE(kCaseA, kCaseA); EXPECT_NONFATAL_FAILURE(EXPECT_GE(kCaseA, kCaseB), "(kCaseA) >= (kCaseB)"); EXPECT_NONFATAL_FAILURE(EXPECT_GE(kCaseA, kCaseC), "-1 vs 42"); ASSERT_EQ(kCaseA, kCaseA); ASSERT_NE(kCaseA, kCaseB); ASSERT_LT(kCaseA, kCaseB); ASSERT_LE(kCaseA, kCaseB); ASSERT_GT(kCaseB, kCaseA); ASSERT_GE(kCaseA, kCaseA); # ifndef __BORLANDC__ // ICE's in C++Builder. 
EXPECT_FATAL_FAILURE(ASSERT_EQ(kCaseA, kCaseB), "Value of: kCaseB"); EXPECT_FATAL_FAILURE(ASSERT_EQ(kCaseA, kCaseC), "Actual: 42"); # endif EXPECT_FATAL_FAILURE(ASSERT_EQ(kCaseA, kCaseC), "Which is: -1"); } #endif // !GTEST_OS_MAC && !defined(__SUNPRO_CC) #if GTEST_OS_WINDOWS static HRESULT UnexpectedHRESULTFailure() { return E_UNEXPECTED; } static HRESULT OkHRESULTSuccess() { return S_OK; } static HRESULT FalseHRESULTSuccess() { return S_FALSE; } // HRESULT assertion tests test both zero and non-zero // success codes as well as failure message for each. // // Windows CE doesn't support message texts. TEST(HRESULTAssertionTest, EXPECT_HRESULT_SUCCEEDED) { EXPECT_HRESULT_SUCCEEDED(S_OK); EXPECT_HRESULT_SUCCEEDED(S_FALSE); EXPECT_NONFATAL_FAILURE(EXPECT_HRESULT_SUCCEEDED(UnexpectedHRESULTFailure()), "Expected: (UnexpectedHRESULTFailure()) succeeds.\n" " Actual: 0x8000FFFF"); } TEST(HRESULTAssertionTest, ASSERT_HRESULT_SUCCEEDED) { ASSERT_HRESULT_SUCCEEDED(S_OK); ASSERT_HRESULT_SUCCEEDED(S_FALSE); EXPECT_FATAL_FAILURE(ASSERT_HRESULT_SUCCEEDED(UnexpectedHRESULTFailure()), "Expected: (UnexpectedHRESULTFailure()) succeeds.\n" " Actual: 0x8000FFFF"); } TEST(HRESULTAssertionTest, EXPECT_HRESULT_FAILED) { EXPECT_HRESULT_FAILED(E_UNEXPECTED); EXPECT_NONFATAL_FAILURE(EXPECT_HRESULT_FAILED(OkHRESULTSuccess()), "Expected: (OkHRESULTSuccess()) fails.\n" " Actual: 0x0"); EXPECT_NONFATAL_FAILURE(EXPECT_HRESULT_FAILED(FalseHRESULTSuccess()), "Expected: (FalseHRESULTSuccess()) fails.\n" " Actual: 0x1"); } TEST(HRESULTAssertionTest, ASSERT_HRESULT_FAILED) { ASSERT_HRESULT_FAILED(E_UNEXPECTED); # ifndef __BORLANDC__ // ICE's in C++Builder 2007 and 2009. EXPECT_FATAL_FAILURE(ASSERT_HRESULT_FAILED(OkHRESULTSuccess()), "Expected: (OkHRESULTSuccess()) fails.\n" " Actual: 0x0"); # endif EXPECT_FATAL_FAILURE(ASSERT_HRESULT_FAILED(FalseHRESULTSuccess()), "Expected: (FalseHRESULTSuccess()) fails.\n" " Actual: 0x1"); } // Tests that streaming to the HRESULT macros works. TEST(HRESULTAssertionTest, Streaming) { EXPECT_HRESULT_SUCCEEDED(S_OK) << "unexpected failure"; ASSERT_HRESULT_SUCCEEDED(S_OK) << "unexpected failure"; EXPECT_HRESULT_FAILED(E_UNEXPECTED) << "unexpected failure"; ASSERT_HRESULT_FAILED(E_UNEXPECTED) << "unexpected failure"; EXPECT_NONFATAL_FAILURE( EXPECT_HRESULT_SUCCEEDED(E_UNEXPECTED) << "expected failure", "expected failure"); # ifndef __BORLANDC__ // ICE's in C++Builder 2007 and 2009. EXPECT_FATAL_FAILURE( ASSERT_HRESULT_SUCCEEDED(E_UNEXPECTED) << "expected failure", "expected failure"); # endif EXPECT_NONFATAL_FAILURE( EXPECT_HRESULT_FAILED(S_OK) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE( ASSERT_HRESULT_FAILED(S_OK) << "expected failure", "expected failure"); } #endif // GTEST_OS_WINDOWS #ifdef __BORLANDC__ // Silences warnings: "Condition is always true", "Unreachable code" # pragma option push -w-ccc -w-rch #endif // Tests that the assertion macros behave like single statements. TEST(AssertionSyntaxTest, BasicAssertionsBehavesLikeSingleStatement) { if (AlwaysFalse()) ASSERT_TRUE(false) << "This should never be executed; " "It's a compilation test only."; if (AlwaysTrue()) EXPECT_FALSE(false); else ; // NOLINT if (AlwaysFalse()) ASSERT_LT(1, 3); if (AlwaysFalse()) ; // NOLINT else EXPECT_GT(3, 2) << ""; } #if GTEST_HAS_EXCEPTIONS // Tests that the compiler will not complain about unreachable code in the // EXPECT_THROW/EXPECT_ANY_THROW/EXPECT_NO_THROW macros. 
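// --- Editorial sketch (not part of the original gtest suite) ---------------
// A compact reminder of the three exception assertions exercised below,
// reusing the ThrowAnInteger()/ThrowNothing() helpers defined earlier in this
// file: EXPECT_THROW checks the exception type, EXPECT_ANY_THROW only that
// something is thrown, and EXPECT_NO_THROW that nothing is thrown.
TEST(ExceptionAssertionSketchTest, ThreeFlavors) {
  EXPECT_THROW(ThrowAnInteger(), int);   // must throw exactly this type
  EXPECT_ANY_THROW(ThrowAnInteger());    // any exception type is fine
  EXPECT_NO_THROW(ThrowNothing());       // must not throw at all
}
// ----------------------------------------------------------------------------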
TEST(ExpectThrowTest, DoesNotGenerateUnreachableCodeWarning) { int n = 0; EXPECT_THROW(throw 1, int); EXPECT_NONFATAL_FAILURE(EXPECT_THROW(n++, int), ""); EXPECT_NONFATAL_FAILURE(EXPECT_THROW(throw 1, const char*), ""); EXPECT_NO_THROW(n++); EXPECT_NONFATAL_FAILURE(EXPECT_NO_THROW(throw 1), ""); EXPECT_ANY_THROW(throw 1); EXPECT_NONFATAL_FAILURE(EXPECT_ANY_THROW(n++), ""); } TEST(AssertionSyntaxTest, ExceptionAssertionsBehavesLikeSingleStatement) { if (AlwaysFalse()) EXPECT_THROW(ThrowNothing(), bool); if (AlwaysTrue()) EXPECT_THROW(ThrowAnInteger(), int); else ; // NOLINT if (AlwaysFalse()) EXPECT_NO_THROW(ThrowAnInteger()); if (AlwaysTrue()) EXPECT_NO_THROW(ThrowNothing()); else ; // NOLINT if (AlwaysFalse()) EXPECT_ANY_THROW(ThrowNothing()); if (AlwaysTrue()) EXPECT_ANY_THROW(ThrowAnInteger()); else ; // NOLINT } #endif // GTEST_HAS_EXCEPTIONS TEST(AssertionSyntaxTest, NoFatalFailureAssertionsBehavesLikeSingleStatement) { if (AlwaysFalse()) EXPECT_NO_FATAL_FAILURE(FAIL()) << "This should never be executed. " << "It's a compilation test only."; else ; // NOLINT if (AlwaysFalse()) ASSERT_NO_FATAL_FAILURE(FAIL()) << ""; else ; // NOLINT if (AlwaysTrue()) EXPECT_NO_FATAL_FAILURE(SUCCEED()); else ; // NOLINT if (AlwaysFalse()) ; // NOLINT else ASSERT_NO_FATAL_FAILURE(SUCCEED()); } // Tests that the assertion macros work well with switch statements. TEST(AssertionSyntaxTest, WorksWithSwitch) { switch (0) { case 1: break; default: ASSERT_TRUE(true); } switch (0) case 0: EXPECT_FALSE(false) << "EXPECT_FALSE failed in switch case"; // Binary assertions are implemented using a different code path // than the Boolean assertions. Hence we test them separately. switch (0) { case 1: default: ASSERT_EQ(1, 1) << "ASSERT_EQ failed in default switch handler"; } switch (0) case 0: EXPECT_NE(1, 2); } #if GTEST_HAS_EXCEPTIONS void ThrowAString() { throw "std::string"; } // Test that the exception assertion macros compile and work with const // type qualifier. TEST(AssertionSyntaxTest, WorksWithConst) { ASSERT_THROW(ThrowAString(), const char*); EXPECT_THROW(ThrowAString(), const char*); } #endif // GTEST_HAS_EXCEPTIONS } // namespace namespace testing { // Tests that Google Test tracks SUCCEED*. TEST(SuccessfulAssertionTest, SUCCEED) { SUCCEED(); SUCCEED() << "OK"; EXPECT_EQ(2, GetUnitTestImpl()->current_test_result()->total_part_count()); } // Tests that Google Test doesn't track successful EXPECT_*. TEST(SuccessfulAssertionTest, EXPECT) { EXPECT_TRUE(true); EXPECT_EQ(0, GetUnitTestImpl()->current_test_result()->total_part_count()); } // Tests that Google Test doesn't track successful EXPECT_STR*. TEST(SuccessfulAssertionTest, EXPECT_STR) { EXPECT_STREQ("", ""); EXPECT_EQ(0, GetUnitTestImpl()->current_test_result()->total_part_count()); } // Tests that Google Test doesn't track successful ASSERT_*. TEST(SuccessfulAssertionTest, ASSERT) { ASSERT_TRUE(true); EXPECT_EQ(0, GetUnitTestImpl()->current_test_result()->total_part_count()); } // Tests that Google Test doesn't track successful ASSERT_STR*. TEST(SuccessfulAssertionTest, ASSERT_STR) { ASSERT_STREQ("", ""); EXPECT_EQ(0, GetUnitTestImpl()->current_test_result()->total_part_count()); } } // namespace testing namespace { // Tests the message streaming variation of assertions. 
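// --- Editorial sketch (not part of the original gtest suite) ---------------
// The message streamed after << is only evaluated and recorded when the
// assertion actually fails, which is what the AssertionWithMessageTest cases
// below rely on.
TEST(StreamedMessageSketchTest, MessageAppearsOnlyOnFailure) {
  EXPECT_TRUE(true) << "never evaluated, never shown";
  EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(false) << "shown on failure",
                          "shown on failure");
}
// ----------------------------------------------------------------------------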
TEST(AssertionWithMessageTest, EXPECT) { EXPECT_EQ(1, 1) << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_NE(1, 1) << "Expected failure #1.", "Expected failure #1"); EXPECT_LE(1, 2) << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_LT(1, 0) << "Expected failure #2.", "Expected failure #2."); EXPECT_GE(1, 0) << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_GT(1, 2) << "Expected failure #3.", "Expected failure #3."); EXPECT_STREQ("1", "1") << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_STRNE("1", "1") << "Expected failure #4.", "Expected failure #4."); EXPECT_STRCASEEQ("a", "A") << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_STRCASENE("a", "A") << "Expected failure #5.", "Expected failure #5."); EXPECT_FLOAT_EQ(1, 1) << "This should succeed."; EXPECT_NONFATAL_FAILURE(EXPECT_DOUBLE_EQ(1, 1.2) << "Expected failure #6.", "Expected failure #6."); EXPECT_NEAR(1, 1.1, 0.2) << "This should succeed."; } TEST(AssertionWithMessageTest, ASSERT) { ASSERT_EQ(1, 1) << "This should succeed."; ASSERT_NE(1, 2) << "This should succeed."; ASSERT_LE(1, 2) << "This should succeed."; ASSERT_LT(1, 2) << "This should succeed."; ASSERT_GE(1, 0) << "This should succeed."; EXPECT_FATAL_FAILURE(ASSERT_GT(1, 2) << "Expected failure.", "Expected failure."); } TEST(AssertionWithMessageTest, ASSERT_STR) { ASSERT_STREQ("1", "1") << "This should succeed."; ASSERT_STRNE("1", "2") << "This should succeed."; ASSERT_STRCASEEQ("a", "A") << "This should succeed."; EXPECT_FATAL_FAILURE(ASSERT_STRCASENE("a", "A") << "Expected failure.", "Expected failure."); } TEST(AssertionWithMessageTest, ASSERT_FLOATING) { ASSERT_FLOAT_EQ(1, 1) << "This should succeed."; ASSERT_DOUBLE_EQ(1, 1) << "This should succeed."; EXPECT_FATAL_FAILURE(ASSERT_NEAR(1,1.2, 0.1) << "Expect failure.", // NOLINT "Expect failure."); // To work around a bug in gcc 2.95.0, there is intentionally no // space after the first comma in the previous statement. } // Tests using ASSERT_FALSE with a streamed message. TEST(AssertionWithMessageTest, ASSERT_FALSE) { ASSERT_FALSE(false) << "This shouldn't fail."; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_FALSE(true) << "Expected failure: " << 2 << " > " << 1 << " evaluates to " << true; }, "Expected failure"); } // Tests using FAIL with a streamed message. TEST(AssertionWithMessageTest, FAIL) { EXPECT_FATAL_FAILURE(FAIL() << 0, "0"); } // Tests using SUCCEED with a streamed message. TEST(AssertionWithMessageTest, SUCCEED) { SUCCEED() << "Success == " << 1; } // Tests using ASSERT_TRUE with a streamed message. TEST(AssertionWithMessageTest, ASSERT_TRUE) { ASSERT_TRUE(true) << "This should succeed."; ASSERT_TRUE(true) << true; EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_TRUE(false) << static_cast(NULL) << static_cast(NULL); }, "(null)(null)"); } #if GTEST_OS_WINDOWS // Tests using wide strings in assertion messages. TEST(AssertionWithMessageTest, WideStringMessage) { EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_TRUE(false) << L"This failure is expected.\x8119"; }, "This failure is expected."); EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_EQ(1, 2) << "This failure is " << L"expected too.\x8120"; }, "This failure is expected too."); } #endif // GTEST_OS_WINDOWS // Tests EXPECT_TRUE. 
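// --- Editorial sketch (not part of the original gtest suite) ---------------
// A predicate that returns testing::AssertionResult lets a failing
// EXPECT_TRUE explain *why* it failed, mirroring the ResultIsEven tests
// nearby.  IsPositiveSketch is a helper introduced here for illustration.
::testing::AssertionResult IsPositiveSketch(int n) {
  if (n > 0) return ::testing::AssertionSuccess();
  return ::testing::AssertionFailure() << n << " is not positive";
}
TEST(AssertionResultSketchTest, FailureCarriesAnExplanation) {
  EXPECT_TRUE(IsPositiveSketch(5));
  EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(IsPositiveSketch(-3)),
                          "-3 is not positive");
}
// ----------------------------------------------------------------------------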
TEST(ExpectTest, EXPECT_TRUE) { EXPECT_TRUE(true) << "Intentional success"; EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(false) << "Intentional failure #1.", "Intentional failure #1."); EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(false) << "Intentional failure #2.", "Intentional failure #2."); EXPECT_TRUE(2 > 1); // NOLINT EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(2 < 1), "Value of: 2 < 1\n" " Actual: false\n" "Expected: true"); EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(2 > 3), "2 > 3"); } // Tests EXPECT_TRUE(predicate) for predicates returning AssertionResult. TEST(ExpectTest, ExpectTrueWithAssertionResult) { EXPECT_TRUE(ResultIsEven(2)); EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(ResultIsEven(3)), "Value of: ResultIsEven(3)\n" " Actual: false (3 is odd)\n" "Expected: true"); EXPECT_TRUE(ResultIsEvenNoExplanation(2)); EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(ResultIsEvenNoExplanation(3)), "Value of: ResultIsEvenNoExplanation(3)\n" " Actual: false (3 is odd)\n" "Expected: true"); } // Tests EXPECT_FALSE with a streamed message. TEST(ExpectTest, EXPECT_FALSE) { EXPECT_FALSE(2 < 1); // NOLINT EXPECT_FALSE(false) << "Intentional success"; EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(true) << "Intentional failure #1.", "Intentional failure #1."); EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(true) << "Intentional failure #2.", "Intentional failure #2."); EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(2 > 1), "Value of: 2 > 1\n" " Actual: true\n" "Expected: false"); EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(2 < 3), "2 < 3"); } // Tests EXPECT_FALSE(predicate) for predicates returning AssertionResult. TEST(ExpectTest, ExpectFalseWithAssertionResult) { EXPECT_FALSE(ResultIsEven(3)); EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(ResultIsEven(2)), "Value of: ResultIsEven(2)\n" " Actual: true (2 is even)\n" "Expected: false"); EXPECT_FALSE(ResultIsEvenNoExplanation(3)); EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(ResultIsEvenNoExplanation(2)), "Value of: ResultIsEvenNoExplanation(2)\n" " Actual: true\n" "Expected: false"); } #ifdef __BORLANDC__ // Restores warnings after previous "#pragma option push" supressed them # pragma option pop #endif // Tests EXPECT_EQ. TEST(ExpectTest, EXPECT_EQ) { EXPECT_EQ(5, 2 + 3); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(5, 2*3), "Value of: 2*3\n" " Actual: 6\n" "Expected: 5"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(5, 2 - 3), "2 - 3"); } // Tests using EXPECT_EQ on double values. The purpose is to make // sure that the specialization we did for integer and anonymous enums // isn't used for double arguments. TEST(ExpectTest, EXPECT_EQ_Double) { // A success. EXPECT_EQ(5.6, 5.6); // A failure. EXPECT_NONFATAL_FAILURE(EXPECT_EQ(5.1, 5.2), "5.1"); } #if GTEST_CAN_COMPARE_NULL // Tests EXPECT_EQ(NULL, pointer). TEST(ExpectTest, EXPECT_EQ_NULL) { // A success. const char* p = NULL; // Some older GCC versions may issue a spurious warning in this or the next // assertion statement. This warning should not be suppressed with // static_cast since the test verifies the ability to use bare NULL as the // expected parameter to the macro. EXPECT_EQ(NULL, p); // A failure. int n = 0; EXPECT_NONFATAL_FAILURE(EXPECT_EQ(NULL, &n), "Value of: &n\n"); } #endif // GTEST_CAN_COMPARE_NULL // Tests EXPECT_EQ(0, non_pointer). Since the literal 0 can be // treated as a null pointer by the compiler, we need to make sure // that EXPECT_EQ(0, non_pointer) isn't interpreted by Google Test as // EXPECT_EQ(static_cast(NULL), non_pointer). TEST(ExpectTest, EXPECT_EQ_0) { int n = 0; // A success. EXPECT_EQ(0, n); // A failure. 
EXPECT_NONFATAL_FAILURE(EXPECT_EQ(0, 5.6), "Expected: 0"); } // Tests EXPECT_NE. TEST(ExpectTest, EXPECT_NE) { EXPECT_NE(6, 7); EXPECT_NONFATAL_FAILURE(EXPECT_NE('a', 'a'), "Expected: ('a') != ('a'), " "actual: 'a' (97, 0x61) vs 'a' (97, 0x61)"); EXPECT_NONFATAL_FAILURE(EXPECT_NE(2, 2), "2"); char* const p0 = NULL; EXPECT_NONFATAL_FAILURE(EXPECT_NE(p0, p0), "p0"); // Only way to get the Nokia compiler to compile the cast // is to have a separate void* variable first. Putting // the two casts on the same line doesn't work, neither does // a direct C-style to char*. void* pv1 = (void*)0x1234; // NOLINT char* const p1 = reinterpret_cast(pv1); EXPECT_NONFATAL_FAILURE(EXPECT_NE(p1, p1), "p1"); } // Tests EXPECT_LE. TEST(ExpectTest, EXPECT_LE) { EXPECT_LE(2, 3); EXPECT_LE(2, 2); EXPECT_NONFATAL_FAILURE(EXPECT_LE(2, 0), "Expected: (2) <= (0), actual: 2 vs 0"); EXPECT_NONFATAL_FAILURE(EXPECT_LE(1.1, 0.9), "(1.1) <= (0.9)"); } // Tests EXPECT_LT. TEST(ExpectTest, EXPECT_LT) { EXPECT_LT(2, 3); EXPECT_NONFATAL_FAILURE(EXPECT_LT(2, 2), "Expected: (2) < (2), actual: 2 vs 2"); EXPECT_NONFATAL_FAILURE(EXPECT_LT(2, 1), "(2) < (1)"); } // Tests EXPECT_GE. TEST(ExpectTest, EXPECT_GE) { EXPECT_GE(2, 1); EXPECT_GE(2, 2); EXPECT_NONFATAL_FAILURE(EXPECT_GE(2, 3), "Expected: (2) >= (3), actual: 2 vs 3"); EXPECT_NONFATAL_FAILURE(EXPECT_GE(0.9, 1.1), "(0.9) >= (1.1)"); } // Tests EXPECT_GT. TEST(ExpectTest, EXPECT_GT) { EXPECT_GT(2, 1); EXPECT_NONFATAL_FAILURE(EXPECT_GT(2, 2), "Expected: (2) > (2), actual: 2 vs 2"); EXPECT_NONFATAL_FAILURE(EXPECT_GT(2, 3), "(2) > (3)"); } #if GTEST_HAS_EXCEPTIONS // Tests EXPECT_THROW. TEST(ExpectTest, EXPECT_THROW) { EXPECT_THROW(ThrowAnInteger(), int); EXPECT_NONFATAL_FAILURE(EXPECT_THROW(ThrowAnInteger(), bool), "Expected: ThrowAnInteger() throws an exception of " "type bool.\n Actual: it throws a different type."); EXPECT_NONFATAL_FAILURE( EXPECT_THROW(ThrowNothing(), bool), "Expected: ThrowNothing() throws an exception of type bool.\n" " Actual: it throws nothing."); } // Tests EXPECT_NO_THROW. TEST(ExpectTest, EXPECT_NO_THROW) { EXPECT_NO_THROW(ThrowNothing()); EXPECT_NONFATAL_FAILURE(EXPECT_NO_THROW(ThrowAnInteger()), "Expected: ThrowAnInteger() doesn't throw an " "exception.\n Actual: it throws."); } // Tests EXPECT_ANY_THROW. TEST(ExpectTest, EXPECT_ANY_THROW) { EXPECT_ANY_THROW(ThrowAnInteger()); EXPECT_NONFATAL_FAILURE( EXPECT_ANY_THROW(ThrowNothing()), "Expected: ThrowNothing() throws an exception.\n" " Actual: it doesn't."); } #endif // GTEST_HAS_EXCEPTIONS // Make sure we deal with the precedence of <<. TEST(ExpectTest, ExpectPrecedence) { EXPECT_EQ(1 < 2, true); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(true, true && false), "Value of: true && false"); } // Tests the StreamableToString() function. // Tests using StreamableToString() on a scalar. TEST(StreamableToStringTest, Scalar) { EXPECT_STREQ("5", StreamableToString(5).c_str()); } // Tests using StreamableToString() on a non-char pointer. TEST(StreamableToStringTest, Pointer) { int n = 0; int* p = &n; EXPECT_STRNE("(null)", StreamableToString(p).c_str()); } // Tests using StreamableToString() on a NULL non-char pointer. TEST(StreamableToStringTest, NullPointer) { int* p = NULL; EXPECT_STREQ("(null)", StreamableToString(p).c_str()); } // Tests using StreamableToString() on a C string. TEST(StreamableToStringTest, CString) { EXPECT_STREQ("Foo", StreamableToString("Foo").c_str()); } // Tests using StreamableToString() on a NULL C string. 
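// --- Editorial sketch (not part of the original gtest suite) ---------------
// StreamableToString() funnels a value through testing::Message, so anything
// with an operator<< becomes a std::string; bools come out as "true"/"false".
TEST(StreamableToStringSketchTest, BoolAndInt) {
  EXPECT_STREQ("true", StreamableToString(true).c_str());
  EXPECT_STREQ("-7", StreamableToString(-7).c_str());
}
// ----------------------------------------------------------------------------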
TEST(StreamableToStringTest, NullCString) { char* p = NULL; EXPECT_STREQ("(null)", StreamableToString(p).c_str()); } // Tests using streamable values as assertion messages. // Tests using std::string as an assertion message. TEST(StreamableTest, string) { static const std::string str( "This failure message is a std::string, and is expected."); EXPECT_FATAL_FAILURE(FAIL() << str, str.c_str()); } // Tests that we can output strings containing embedded NULs. // Limited to Linux because we can only do this with std::string's. TEST(StreamableTest, stringWithEmbeddedNUL) { static const char char_array_with_nul[] = "Here's a NUL\0 and some more string"; static const std::string string_with_nul(char_array_with_nul, sizeof(char_array_with_nul) - 1); // drops the trailing NUL EXPECT_FATAL_FAILURE(FAIL() << string_with_nul, "Here's a NUL\\0 and some more string"); } // Tests that we can output a NUL char. TEST(StreamableTest, NULChar) { EXPECT_FATAL_FAILURE({ // NOLINT FAIL() << "A NUL" << '\0' << " and some more string"; }, "A NUL\\0 and some more string"); } // Tests using int as an assertion message. TEST(StreamableTest, int) { EXPECT_FATAL_FAILURE(FAIL() << 900913, "900913"); } // Tests using NULL char pointer as an assertion message. // // In MSVC, streaming a NULL char * causes access violation. Google Test // implemented a workaround (substituting "(null)" for NULL). This // tests whether the workaround works. TEST(StreamableTest, NullCharPtr) { EXPECT_FATAL_FAILURE(FAIL() << static_cast(NULL), "(null)"); } // Tests that basic IO manipulators (endl, ends, and flush) can be // streamed to testing::Message. TEST(StreamableTest, BasicIoManip) { EXPECT_FATAL_FAILURE({ // NOLINT FAIL() << "Line 1." << std::endl << "A NUL char " << std::ends << std::flush << " in line 2."; }, "Line 1.\nA NUL char \\0 in line 2."); } // Tests the macros that haven't been covered so far. void AddFailureHelper(bool* aborted) { *aborted = true; ADD_FAILURE() << "Intentional failure."; *aborted = false; } // Tests ADD_FAILURE. TEST(MacroTest, ADD_FAILURE) { bool aborted = true; EXPECT_NONFATAL_FAILURE(AddFailureHelper(&aborted), "Intentional failure."); EXPECT_FALSE(aborted); } // Tests ADD_FAILURE_AT. TEST(MacroTest, ADD_FAILURE_AT) { // Verifies that ADD_FAILURE_AT does generate a nonfatal failure and // the failure message contains the user-streamed part. EXPECT_NONFATAL_FAILURE(ADD_FAILURE_AT("foo.cc", 42) << "Wrong!", "Wrong!"); // Verifies that the user-streamed part is optional. EXPECT_NONFATAL_FAILURE(ADD_FAILURE_AT("foo.cc", 42), "Failed"); // Unfortunately, we cannot verify that the failure message contains // the right file path and line number the same way, as // EXPECT_NONFATAL_FAILURE() doesn't get to see the file path and // line number. Instead, we do that in gtest_output_test_.cc. } // Tests FAIL. TEST(MacroTest, FAIL) { EXPECT_FATAL_FAILURE(FAIL(), "Failed"); EXPECT_FATAL_FAILURE(FAIL() << "Intentional failure.", "Intentional failure."); } // Tests SUCCEED TEST(MacroTest, SUCCEED) { SUCCEED(); SUCCEED() << "Explicit success."; } // Tests for EXPECT_EQ() and ASSERT_EQ(). // // These tests fail *intentionally*, s.t. the failure messages can be // generated and tested. // // We have different tests for different argument types. // Tests using bool values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, Bool) { EXPECT_EQ(true, true); EXPECT_FATAL_FAILURE({ bool false_value = false; ASSERT_EQ(false_value, true); }, "Value of: true"); } // Tests using int values in {EXPECT|ASSERT}_EQ. 
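// --- Editorial sketch (not part of the original gtest suite) ---------------
// In this gtest version the first *_EQ argument is reported as "Expected" and
// the second as "Actual", which is why the failure texts checked in the
// EqAssertionTest cases below always name the literal first.
TEST(EqArgumentOrderSketchTest, FirstArgumentIsExpected) {
  EXPECT_NONFATAL_FAILURE(EXPECT_EQ(7, 2 + 2), "Expected: 7");
}
// ----------------------------------------------------------------------------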
TEST(EqAssertionTest, Int) { ASSERT_EQ(32, 32); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(32, 33), "33"); } // Tests using time_t values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, Time_T) { EXPECT_EQ(static_cast<time_t>(0), static_cast<time_t>(0)); EXPECT_FATAL_FAILURE(ASSERT_EQ(static_cast<time_t>(0), static_cast<time_t>(1234)), "1234"); } // Tests using char values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, Char) { ASSERT_EQ('z', 'z'); const char ch = 'b'; EXPECT_NONFATAL_FAILURE(EXPECT_EQ('\0', ch), "ch"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ('a', ch), "ch"); } // Tests using wchar_t values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, WideChar) { EXPECT_EQ(L'b', L'b'); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(L'\0', L'x'), "Value of: L'x'\n" " Actual: L'x' (120, 0x78)\n" "Expected: L'\0'\n" "Which is: L'\0' (0, 0x0)"); static wchar_t wchar; wchar = L'b'; EXPECT_NONFATAL_FAILURE(EXPECT_EQ(L'a', wchar), "wchar"); wchar = 0x8119; EXPECT_FATAL_FAILURE(ASSERT_EQ(static_cast<wchar_t>(0x8120), wchar), "Value of: wchar"); } // Tests using ::std::string values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, StdString) { // Compares a const char* to an std::string that has identical // content. ASSERT_EQ("Test", ::std::string("Test")); // Compares two identical std::strings. static const ::std::string str1("A * in the middle"); static const ::std::string str2(str1); EXPECT_EQ(str1, str2); // Compares a const char* to an std::string that has different // content EXPECT_NONFATAL_FAILURE(EXPECT_EQ("Test", ::std::string("test")), "\"test\""); // Compares an std::string to a char* that has different content. char* const p1 = const_cast<char*>("foo"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(::std::string("bar"), p1), "p1"); // Compares two std::strings that have different contents, one of // which having a NUL character in the middle. This should fail. static ::std::string str3(str1); str3.at(2) = '\0'; EXPECT_FATAL_FAILURE(ASSERT_EQ(str1, str3), "Value of: str3\n" " Actual: \"A \\0 in the middle\""); } #if GTEST_HAS_STD_WSTRING // Tests using ::std::wstring values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, StdWideString) { // Compares two identical std::wstrings. const ::std::wstring wstr1(L"A * in the middle"); const ::std::wstring wstr2(wstr1); ASSERT_EQ(wstr1, wstr2); // Compares an std::wstring to a const wchar_t* that has identical // content. const wchar_t kTestX8119[] = { 'T', 'e', 's', 't', 0x8119, '\0' }; EXPECT_EQ(::std::wstring(kTestX8119), kTestX8119); // Compares an std::wstring to a const wchar_t* that has different // content. const wchar_t kTestX8120[] = { 'T', 'e', 's', 't', 0x8120, '\0' }; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_EQ(::std::wstring(kTestX8119), kTestX8120); }, "kTestX8120"); // Compares two std::wstrings that have different contents, one of // which having a NUL character in the middle. ::std::wstring wstr3(wstr1); wstr3.at(2) = L'\0'; EXPECT_NONFATAL_FAILURE(EXPECT_EQ(wstr1, wstr3), "wstr3"); // Compares a wchar_t* to an std::wstring that has different // content. EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_EQ(const_cast<wchar_t*>(L"foo"), ::std::wstring(L"bar")); }, ""); } #endif // GTEST_HAS_STD_WSTRING #if GTEST_HAS_GLOBAL_STRING // Tests using ::string values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, GlobalString) { // Compares a const char* to a ::string that has identical content. EXPECT_EQ("Test", ::string("Test")); // Compares two identical ::strings. const ::string str1("A * in the middle"); const ::string str2(str1); ASSERT_EQ(str1, str2); // Compares a ::string to a const char* that has different content.
EXPECT_NONFATAL_FAILURE(EXPECT_EQ(::string("Test"), "test"), "test"); // Compares two ::strings that have different contents, one of which // having a NUL character in the middle. ::string str3(str1); str3.at(2) = '\0'; EXPECT_NONFATAL_FAILURE(EXPECT_EQ(str1, str3), "str3"); // Compares a ::string to a char* that has different content. EXPECT_FATAL_FAILURE({ // NOLINT ASSERT_EQ(::string("bar"), const_cast<char*>("foo")); }, ""); } #endif // GTEST_HAS_GLOBAL_STRING #if GTEST_HAS_GLOBAL_WSTRING // Tests using ::wstring values in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, GlobalWideString) { // Compares two identical ::wstrings. static const ::wstring wstr1(L"A * in the middle"); static const ::wstring wstr2(wstr1); EXPECT_EQ(wstr1, wstr2); // Compares a const wchar_t* to a ::wstring that has identical content. const wchar_t kTestX8119[] = { 'T', 'e', 's', 't', 0x8119, '\0' }; ASSERT_EQ(kTestX8119, ::wstring(kTestX8119)); // Compares a const wchar_t* to a ::wstring that has different // content. const wchar_t kTestX8120[] = { 'T', 'e', 's', 't', 0x8120, '\0' }; EXPECT_NONFATAL_FAILURE({ // NOLINT EXPECT_EQ(kTestX8120, ::wstring(kTestX8119)); }, "Test\\x8119"); // Compares a wchar_t* to a ::wstring that has different content. wchar_t* const p1 = const_cast<wchar_t*>(L"foo"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p1, ::wstring(L"bar")), "bar"); // Compares two ::wstrings that have different contents, one of which // having a NUL character in the middle. static ::wstring wstr3; wstr3 = wstr1; wstr3.at(2) = L'\0'; EXPECT_FATAL_FAILURE(ASSERT_EQ(wstr1, wstr3), "wstr3"); } #endif // GTEST_HAS_GLOBAL_WSTRING // Tests using char pointers in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, CharPointer) { char* const p0 = NULL; // Only way to get the Nokia compiler to compile the cast // is to have a separate void* variable first. Putting // the two casts on the same line doesn't work, neither does // a direct C-style to char*. void* pv1 = (void*)0x1234; // NOLINT void* pv2 = (void*)0xABC0; // NOLINT char* const p1 = reinterpret_cast<char*>(pv1); char* const p2 = reinterpret_cast<char*>(pv2); ASSERT_EQ(p1, p1); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p0, p2), "Value of: p2"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p1, p2), "p2"); EXPECT_FATAL_FAILURE(ASSERT_EQ(reinterpret_cast<char*>(0x1234), reinterpret_cast<char*>(0xABC0)), "ABC0"); } // Tests using wchar_t pointers in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, WideCharPointer) { wchar_t* const p0 = NULL; // Only way to get the Nokia compiler to compile the cast // is to have a separate void* variable first. Putting // the two casts on the same line doesn't work, neither does // a direct C-style to char*. void* pv1 = (void*)0x1234; // NOLINT void* pv2 = (void*)0xABC0; // NOLINT wchar_t* const p1 = reinterpret_cast<wchar_t*>(pv1); wchar_t* const p2 = reinterpret_cast<wchar_t*>(pv2); EXPECT_EQ(p0, p0); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p0, p2), "Value of: p2"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p1, p2), "p2"); void* pv3 = (void*)0x1234; // NOLINT void* pv4 = (void*)0xABC0; // NOLINT const wchar_t* p3 = reinterpret_cast<const wchar_t*>(pv3); const wchar_t* p4 = reinterpret_cast<const wchar_t*>(pv4); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(p3, p4), "p4"); } // Tests using other types of pointers in {EXPECT|ASSERT}_EQ. TEST(EqAssertionTest, OtherPointer) { ASSERT_EQ(static_cast<const void*>(NULL), static_cast<const void*>(NULL)); EXPECT_FATAL_FAILURE(ASSERT_EQ(static_cast<const void*>(NULL), reinterpret_cast<const void*>(0x1234)), "0x1234"); } // A class that supports binary comparison operators but not streaming.
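// --- Editorial sketch (not part of the original gtest suite) ---------------
// The pointer-equality tests above compare addresses, not pointees: two
// distinct buffers with identical contents fail EXPECT_EQ yet pass
// EXPECT_STREQ, which compares the characters.
TEST(PointerVersusContentSketchTest, AddressesNotContents) {
  char buf1[] = "same";
  char buf2[] = "same";
  EXPECT_STREQ(buf1, buf2);                                // equal contents
  EXPECT_NONFATAL_FAILURE(EXPECT_EQ(buf1, buf2), "buf2");  // different addresses
}
// ----------------------------------------------------------------------------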
class UnprintableChar { public: explicit UnprintableChar(char ch) : char_(ch) {} bool operator==(const UnprintableChar& rhs) const { return char_ == rhs.char_; } bool operator!=(const UnprintableChar& rhs) const { return char_ != rhs.char_; } bool operator<(const UnprintableChar& rhs) const { return char_ < rhs.char_; } bool operator<=(const UnprintableChar& rhs) const { return char_ <= rhs.char_; } bool operator>(const UnprintableChar& rhs) const { return char_ > rhs.char_; } bool operator>=(const UnprintableChar& rhs) const { return char_ >= rhs.char_; } private: char char_; }; // Tests that ASSERT_EQ() and friends don't require the arguments to // be printable. TEST(ComparisonAssertionTest, AcceptsUnprintableArgs) { const UnprintableChar x('x'), y('y'); ASSERT_EQ(x, x); EXPECT_NE(x, y); ASSERT_LT(x, y); EXPECT_LE(x, y); ASSERT_GT(y, x); EXPECT_GE(x, x); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(x, y), "1-byte object <78>"); EXPECT_NONFATAL_FAILURE(EXPECT_EQ(x, y), "1-byte object <79>"); EXPECT_NONFATAL_FAILURE(EXPECT_LT(y, y), "1-byte object <79>"); EXPECT_NONFATAL_FAILURE(EXPECT_GT(x, y), "1-byte object <78>"); EXPECT_NONFATAL_FAILURE(EXPECT_GT(x, y), "1-byte object <79>"); // Code tested by EXPECT_FATAL_FAILURE cannot reference local // variables, so we have to write UnprintableChar('x') instead of x. #ifndef __BORLANDC__ // ICE's in C++Builder. EXPECT_FATAL_FAILURE(ASSERT_NE(UnprintableChar('x'), UnprintableChar('x')), "1-byte object <78>"); EXPECT_FATAL_FAILURE(ASSERT_LE(UnprintableChar('y'), UnprintableChar('x')), "1-byte object <78>"); #endif EXPECT_FATAL_FAILURE(ASSERT_LE(UnprintableChar('y'), UnprintableChar('x')), "1-byte object <79>"); EXPECT_FATAL_FAILURE(ASSERT_GE(UnprintableChar('x'), UnprintableChar('y')), "1-byte object <78>"); EXPECT_FATAL_FAILURE(ASSERT_GE(UnprintableChar('x'), UnprintableChar('y')), "1-byte object <79>"); } // Tests the FRIEND_TEST macro. // This class has a private member we want to test. We will test it // both in a TEST and in a TEST_F. class Foo { public: Foo() {} private: int Bar() const { return 1; } // Declares the friend tests that can access the private member // Bar(). FRIEND_TEST(FRIEND_TEST_Test, TEST); FRIEND_TEST(FRIEND_TEST_Test2, TEST_F); }; // Tests that the FRIEND_TEST declaration allows a TEST to access a // class's private members. This should compile. TEST(FRIEND_TEST_Test, TEST) { ASSERT_EQ(1, Foo().Bar()); } // The fixture needed to test using FRIEND_TEST with TEST_F. class FRIEND_TEST_Test2 : public Test { protected: Foo foo; }; // Tests that the FRIEND_TEST declaration allows a TEST_F to access a // class's private members. This should compile. TEST_F(FRIEND_TEST_Test2, TEST_F) { ASSERT_EQ(1, foo.Bar()); } // Tests the life cycle of Test objects. // The test fixture for testing the life cycle of Test objects. // // This class counts the number of live test objects that uses this // fixture. class TestLifeCycleTest : public Test { protected: // Constructor. Increments the number of test objects that uses // this fixture. TestLifeCycleTest() { count_++; } // Destructor. Decrements the number of test objects that uses this // fixture. ~TestLifeCycleTest() { count_--; } // Returns the number of live test objects that uses this fixture. int count() const { return count_; } private: static int count_; }; int TestLifeCycleTest::count_ = 0; // Tests the life cycle of test objects. TEST_F(TestLifeCycleTest, Test1) { // There should be only one test object in this test case that's // currently alive. 
ASSERT_EQ(1, count()); } // Tests the life cycle of test objects. TEST_F(TestLifeCycleTest, Test2) { // After Test1 is done and Test2 is started, there should still be // only one live test object, as the object for Test1 should've been // deleted. ASSERT_EQ(1, count()); } } // namespace // Tests that the copy constructor works when it is NOT optimized away by // the compiler. TEST(AssertionResultTest, CopyConstructorWorksWhenNotOptimied) { // Checks that the copy constructor doesn't try to dereference NULL pointers // in the source object. AssertionResult r1 = AssertionSuccess(); AssertionResult r2 = r1; // The following line is added to prevent the compiler from optimizing // away the constructor call. r1 << "abc"; AssertionResult r3 = r1; EXPECT_EQ(static_cast(r3), static_cast(r1)); EXPECT_STREQ("abc", r1.message()); } // Tests that AssertionSuccess and AssertionFailure construct // AssertionResult objects as expected. TEST(AssertionResultTest, ConstructionWorks) { AssertionResult r1 = AssertionSuccess(); EXPECT_TRUE(r1); EXPECT_STREQ("", r1.message()); AssertionResult r2 = AssertionSuccess() << "abc"; EXPECT_TRUE(r2); EXPECT_STREQ("abc", r2.message()); AssertionResult r3 = AssertionFailure(); EXPECT_FALSE(r3); EXPECT_STREQ("", r3.message()); AssertionResult r4 = AssertionFailure() << "def"; EXPECT_FALSE(r4); EXPECT_STREQ("def", r4.message()); AssertionResult r5 = AssertionFailure(Message() << "ghi"); EXPECT_FALSE(r5); EXPECT_STREQ("ghi", r5.message()); } // Tests that the negation flips the predicate result but keeps the message. TEST(AssertionResultTest, NegationWorks) { AssertionResult r1 = AssertionSuccess() << "abc"; EXPECT_FALSE(!r1); EXPECT_STREQ("abc", (!r1).message()); AssertionResult r2 = AssertionFailure() << "def"; EXPECT_TRUE(!r2); EXPECT_STREQ("def", (!r2).message()); } TEST(AssertionResultTest, StreamingWorks) { AssertionResult r = AssertionSuccess(); r << "abc" << 'd' << 0 << true; EXPECT_STREQ("abcd0true", r.message()); } TEST(AssertionResultTest, CanStreamOstreamManipulators) { AssertionResult r = AssertionSuccess(); r << "Data" << std::endl << std::flush << std::ends << "Will be visible"; EXPECT_STREQ("Data\n\\0Will be visible", r.message()); } // Tests streaming a user type whose definition and operator << are // both in the global namespace. class Base { public: explicit Base(int an_x) : x_(an_x) {} int x() const { return x_; } private: int x_; }; std::ostream& operator<<(std::ostream& os, const Base& val) { return os << val.x(); } std::ostream& operator<<(std::ostream& os, const Base* pointer) { return os << "(" << pointer->x() << ")"; } TEST(MessageTest, CanStreamUserTypeInGlobalNameSpace) { Message msg; Base a(1); msg << a << &a; // Uses ::operator<<. EXPECT_STREQ("1(1)", msg.GetString().c_str()); } // Tests streaming a user type whose definition and operator<< are // both in an unnamed namespace. namespace { class MyTypeInUnnamedNameSpace : public Base { public: explicit MyTypeInUnnamedNameSpace(int an_x): Base(an_x) {} }; std::ostream& operator<<(std::ostream& os, const MyTypeInUnnamedNameSpace& val) { return os << val.x(); } std::ostream& operator<<(std::ostream& os, const MyTypeInUnnamedNameSpace* pointer) { return os << "(" << pointer->x() << ")"; } } // namespace TEST(MessageTest, CanStreamUserTypeInUnnamedNameSpace) { Message msg; MyTypeInUnnamedNameSpace a(1); msg << a << &a; // Uses ::operator<<. EXPECT_STREQ("1(1)", msg.GetString().c_str()); } // Tests streaming a user type whose definition and operator<< are // both in a user namespace. 
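// --- Editorial sketch (not part of the original gtest suite) ---------------
// testing::Message accumulates everything streamed into it and returns the
// result from GetString(), which is the mechanism the surrounding MessageTest
// cases exercise with user-defined operator<< overloads.
TEST(MessageSketchTest, AccumulatesStreamedValues) {
  testing::Message msg;
  msg << "x=" << 3 << ", flag=" << true;
  EXPECT_STREQ("x=3, flag=true", msg.GetString().c_str());
}
// ----------------------------------------------------------------------------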
namespace namespace1 { class MyTypeInNameSpace1 : public Base { public: explicit MyTypeInNameSpace1(int an_x): Base(an_x) {} }; std::ostream& operator<<(std::ostream& os, const MyTypeInNameSpace1& val) { return os << val.x(); } std::ostream& operator<<(std::ostream& os, const MyTypeInNameSpace1* pointer) { return os << "(" << pointer->x() << ")"; } } // namespace namespace1 TEST(MessageTest, CanStreamUserTypeInUserNameSpace) { Message msg; namespace1::MyTypeInNameSpace1 a(1); msg << a << &a; // Uses namespace1::operator<<. EXPECT_STREQ("1(1)", msg.GetString().c_str()); } // Tests streaming a user type whose definition is in a user namespace // but whose operator<< is in the global namespace. namespace namespace2 { class MyTypeInNameSpace2 : public ::Base { public: explicit MyTypeInNameSpace2(int an_x): Base(an_x) {} }; } // namespace namespace2 std::ostream& operator<<(std::ostream& os, const namespace2::MyTypeInNameSpace2& val) { return os << val.x(); } std::ostream& operator<<(std::ostream& os, const namespace2::MyTypeInNameSpace2* pointer) { return os << "(" << pointer->x() << ")"; } TEST(MessageTest, CanStreamUserTypeInUserNameSpaceWithStreamOperatorInGlobal) { Message msg; namespace2::MyTypeInNameSpace2 a(1); msg << a << &a; // Uses ::operator<<. EXPECT_STREQ("1(1)", msg.GetString().c_str()); } // Tests streaming NULL pointers to testing::Message. TEST(MessageTest, NullPointers) { Message msg; char* const p1 = NULL; unsigned char* const p2 = NULL; int* p3 = NULL; double* p4 = NULL; bool* p5 = NULL; Message* p6 = NULL; msg << p1 << p2 << p3 << p4 << p5 << p6; ASSERT_STREQ("(null)(null)(null)(null)(null)(null)", msg.GetString().c_str()); } // Tests streaming wide strings to testing::Message. TEST(MessageTest, WideStrings) { // Streams a NULL of type const wchar_t*. const wchar_t* const_wstr = NULL; EXPECT_STREQ("(null)", (Message() << const_wstr).GetString().c_str()); // Streams a NULL of type wchar_t*. wchar_t* wstr = NULL; EXPECT_STREQ("(null)", (Message() << wstr).GetString().c_str()); // Streams a non-NULL of type const wchar_t*. const_wstr = L"abc\x8119"; EXPECT_STREQ("abc\xe8\x84\x99", (Message() << const_wstr).GetString().c_str()); // Streams a non-NULL of type wchar_t*. wstr = const_cast(const_wstr); EXPECT_STREQ("abc\xe8\x84\x99", (Message() << wstr).GetString().c_str()); } // This line tests that we can define tests in the testing namespace. namespace testing { // Tests the TestInfo class. class TestInfoTest : public Test { protected: static const TestInfo* GetTestInfo(const char* test_name) { const TestCase* const test_case = GetUnitTestImpl()-> GetTestCase("TestInfoTest", "", NULL, NULL); for (int i = 0; i < test_case->total_test_count(); ++i) { const TestInfo* const test_info = test_case->GetTestInfo(i); if (strcmp(test_name, test_info->name()) == 0) return test_info; } return NULL; } static const TestResult* GetTestResult( const TestInfo* test_info) { return test_info->result(); } }; // Tests TestInfo::test_case_name() and TestInfo::name(). TEST_F(TestInfoTest, Names) { const TestInfo* const test_info = GetTestInfo("Names"); ASSERT_STREQ("TestInfoTest", test_info->test_case_name()); ASSERT_STREQ("Names", test_info->name()); } // Tests TestInfo::result(). TEST_F(TestInfoTest, result) { const TestInfo* const test_info = GetTestInfo("result"); // Initially, there is no TestPartResult for this test. ASSERT_EQ(0, GetTestResult(test_info)->total_part_count()); // After the previous assertion, there is still none. 
ASSERT_EQ(0, GetTestResult(test_info)->total_part_count()); } // Tests setting up and tearing down a test case. class SetUpTestCaseTest : public Test { protected: // This will be called once before the first test in this test case // is run. static void SetUpTestCase() { printf("Setting up the test case . . .\n"); // Initializes some shared resource. In this simple example, we // just create a C string. More complex stuff can be done if // desired. shared_resource_ = "123"; // Increments the number of test cases that have been set up. counter_++; // SetUpTestCase() should be called only once. EXPECT_EQ(1, counter_); } // This will be called once after the last test in this test case is // run. static void TearDownTestCase() { printf("Tearing down the test case . . .\n"); // Decrements the number of test cases that have been set up. counter_--; // TearDownTestCase() should be called only once. EXPECT_EQ(0, counter_); // Cleans up the shared resource. shared_resource_ = NULL; } // This will be called before each test in this test case. virtual void SetUp() { // SetUpTestCase() should be called only once, so counter_ should // always be 1. EXPECT_EQ(1, counter_); } // Number of test cases that have been set up. static int counter_; // Some resource to be shared by all tests in this test case. static const char* shared_resource_; }; int SetUpTestCaseTest::counter_ = 0; const char* SetUpTestCaseTest::shared_resource_ = NULL; // A test that uses the shared resource. TEST_F(SetUpTestCaseTest, Test1) { EXPECT_STRNE(NULL, shared_resource_); } // Another test that uses the shared resource. TEST_F(SetUpTestCaseTest, Test2) { EXPECT_STREQ("123", shared_resource_); } // The InitGoogleTestTest test case tests testing::InitGoogleTest(). // The Flags struct stores a copy of all Google Test flags. struct Flags { // Constructs a Flags struct where each flag has its default value. Flags() : also_run_disabled_tests(false), break_on_failure(false), catch_exceptions(false), death_test_use_fork(false), filter(""), list_tests(false), output(""), print_time(true), random_seed(0), repeat(1), shuffle(false), stack_trace_depth(kMaxStackTraceDepth), stream_result_to(""), throw_on_failure(false) {} // Factory methods. // Creates a Flags struct where the gtest_also_run_disabled_tests flag has // the given value. static Flags AlsoRunDisabledTests(bool also_run_disabled_tests) { Flags flags; flags.also_run_disabled_tests = also_run_disabled_tests; return flags; } // Creates a Flags struct where the gtest_break_on_failure flag has // the given value. static Flags BreakOnFailure(bool break_on_failure) { Flags flags; flags.break_on_failure = break_on_failure; return flags; } // Creates a Flags struct where the gtest_catch_exceptions flag has // the given value. static Flags CatchExceptions(bool catch_exceptions) { Flags flags; flags.catch_exceptions = catch_exceptions; return flags; } // Creates a Flags struct where the gtest_death_test_use_fork flag has // the given value. static Flags DeathTestUseFork(bool death_test_use_fork) { Flags flags; flags.death_test_use_fork = death_test_use_fork; return flags; } // Creates a Flags struct where the gtest_filter flag has the given // value. static Flags Filter(const char* filter) { Flags flags; flags.filter = filter; return flags; } // Creates a Flags struct where the gtest_list_tests flag has the // given value. 
static Flags ListTests(bool list_tests) { Flags flags; flags.list_tests = list_tests; return flags; } // Creates a Flags struct where the gtest_output flag has the given // value. static Flags Output(const char* output) { Flags flags; flags.output = output; return flags; } // Creates a Flags struct where the gtest_print_time flag has the given // value. static Flags PrintTime(bool print_time) { Flags flags; flags.print_time = print_time; return flags; } // Creates a Flags struct where the gtest_random_seed flag has // the given value. static Flags RandomSeed(Int32 random_seed) { Flags flags; flags.random_seed = random_seed; return flags; } // Creates a Flags struct where the gtest_repeat flag has the given // value. static Flags Repeat(Int32 repeat) { Flags flags; flags.repeat = repeat; return flags; } // Creates a Flags struct where the gtest_shuffle flag has // the given value. static Flags Shuffle(bool shuffle) { Flags flags; flags.shuffle = shuffle; return flags; } // Creates a Flags struct where the GTEST_FLAG(stack_trace_depth) flag has // the given value. static Flags StackTraceDepth(Int32 stack_trace_depth) { Flags flags; flags.stack_trace_depth = stack_trace_depth; return flags; } // Creates a Flags struct where the GTEST_FLAG(stream_result_to) flag has // the given value. static Flags StreamResultTo(const char* stream_result_to) { Flags flags; flags.stream_result_to = stream_result_to; return flags; } // Creates a Flags struct where the gtest_throw_on_failure flag has // the given value. static Flags ThrowOnFailure(bool throw_on_failure) { Flags flags; flags.throw_on_failure = throw_on_failure; return flags; } // These fields store the flag values. bool also_run_disabled_tests; bool break_on_failure; bool catch_exceptions; bool death_test_use_fork; const char* filter; bool list_tests; const char* output; bool print_time; Int32 random_seed; Int32 repeat; bool shuffle; Int32 stack_trace_depth; const char* stream_result_to; bool throw_on_failure; }; // Fixture for testing InitGoogleTest(). class InitGoogleTestTest : public Test { protected: // Clears the flags before each test. virtual void SetUp() { GTEST_FLAG(also_run_disabled_tests) = false; GTEST_FLAG(break_on_failure) = false; GTEST_FLAG(catch_exceptions) = false; GTEST_FLAG(death_test_use_fork) = false; GTEST_FLAG(filter) = ""; GTEST_FLAG(list_tests) = false; GTEST_FLAG(output) = ""; GTEST_FLAG(print_time) = true; GTEST_FLAG(random_seed) = 0; GTEST_FLAG(repeat) = 1; GTEST_FLAG(shuffle) = false; GTEST_FLAG(stack_trace_depth) = kMaxStackTraceDepth; GTEST_FLAG(stream_result_to) = ""; GTEST_FLAG(throw_on_failure) = false; } // Asserts that two narrow or wide string arrays are equal. template <typename CharType> static void AssertStringArrayEq(size_t size1, CharType** array1, size_t size2, CharType** array2) { ASSERT_EQ(size1, size2) << " Array sizes different."; for (size_t i = 0; i != size1; i++) { ASSERT_STREQ(array1[i], array2[i]) << " where i == " << i; } } // Verifies that the flag values match the expected values.
static void CheckFlags(const Flags& expected) { EXPECT_EQ(expected.also_run_disabled_tests, GTEST_FLAG(also_run_disabled_tests)); EXPECT_EQ(expected.break_on_failure, GTEST_FLAG(break_on_failure)); EXPECT_EQ(expected.catch_exceptions, GTEST_FLAG(catch_exceptions)); EXPECT_EQ(expected.death_test_use_fork, GTEST_FLAG(death_test_use_fork)); EXPECT_STREQ(expected.filter, GTEST_FLAG(filter).c_str()); EXPECT_EQ(expected.list_tests, GTEST_FLAG(list_tests)); EXPECT_STREQ(expected.output, GTEST_FLAG(output).c_str()); EXPECT_EQ(expected.print_time, GTEST_FLAG(print_time)); EXPECT_EQ(expected.random_seed, GTEST_FLAG(random_seed)); EXPECT_EQ(expected.repeat, GTEST_FLAG(repeat)); EXPECT_EQ(expected.shuffle, GTEST_FLAG(shuffle)); EXPECT_EQ(expected.stack_trace_depth, GTEST_FLAG(stack_trace_depth)); EXPECT_STREQ(expected.stream_result_to, GTEST_FLAG(stream_result_to).c_str()); EXPECT_EQ(expected.throw_on_failure, GTEST_FLAG(throw_on_failure)); } // Parses a command line (specified by argc1 and argv1), then // verifies that the flag values are expected and that the // recognized flags are removed from the command line. template <typename CharType> static void TestParsingFlags(int argc1, const CharType** argv1, int argc2, const CharType** argv2, const Flags& expected, bool should_print_help) { const bool saved_help_flag = ::testing::internal::g_help_flag; ::testing::internal::g_help_flag = false; #if GTEST_HAS_STREAM_REDIRECTION CaptureStdout(); #endif // Parses the command line. internal::ParseGoogleTestFlagsOnly(&argc1, const_cast<CharType**>(argv1)); #if GTEST_HAS_STREAM_REDIRECTION const std::string captured_stdout = GetCapturedStdout(); #endif // Verifies the flag values. CheckFlags(expected); // Verifies that the recognized flags are removed from the command // line. AssertStringArrayEq(argc1 + 1, argv1, argc2 + 1, argv2); // ParseGoogleTestFlagsOnly should neither set g_help_flag nor print the // help message for the flags it recognizes. EXPECT_EQ(should_print_help, ::testing::internal::g_help_flag); #if GTEST_HAS_STREAM_REDIRECTION const char* const expected_help_fragment = "This program contains tests written using"; if (should_print_help) { EXPECT_PRED_FORMAT2(IsSubstring, expected_help_fragment, captured_stdout); } else { EXPECT_PRED_FORMAT2(IsNotSubstring, expected_help_fragment, captured_stdout); } #endif // GTEST_HAS_STREAM_REDIRECTION ::testing::internal::g_help_flag = saved_help_flag; } // This macro wraps TestParsingFlags s.t. the user doesn't need // to specify the array sizes. #define GTEST_TEST_PARSING_FLAGS_(argv1, argv2, expected, should_print_help) \ TestParsingFlags(sizeof(argv1)/sizeof(*argv1) - 1, argv1, \ sizeof(argv2)/sizeof(*argv2) - 1, argv2, \ expected, should_print_help) }; // Tests parsing an empty command line. TEST_F(InitGoogleTestTest, Empty) { const char* argv[] = { NULL }; const char* argv2[] = { NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags(), false); } // Tests parsing a command line that has no flag. TEST_F(InitGoogleTestTest, NoFlag) { const char* argv[] = { "foo.exe", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags(), false); } // Tests parsing a bad --gtest_filter flag. TEST_F(InitGoogleTestTest, FilterBad) { const char* argv[] = { "foo.exe", "--gtest_filter", NULL }; const char* argv2[] = { "foo.exe", "--gtest_filter", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Filter(""), true); } // Tests parsing an empty --gtest_filter flag.
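// --- Editorial sketch (not part of the original gtest suite) ---------------
// The pattern shared by the InitGoogleTestTest cases around here: argv is the
// command line before parsing, argv2 the expected leftover after the
// recognized --gtest_* flags are consumed, and a Flags factory names the
// expected flag values.  Illustrated with the shuffle flag.
TEST_F(InitGoogleTestTest, ShuffleSketch) {
  const char* argv[] = { "foo.exe", "--gtest_shuffle", NULL };
  const char* argv2[] = { "foo.exe", NULL };
  GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Shuffle(true), false);
}
// ----------------------------------------------------------------------------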
TEST_F(InitGoogleTestTest, FilterEmpty) { const char* argv[] = { "foo.exe", "--gtest_filter=", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Filter(""), false); } // Tests parsing a non-empty --gtest_filter flag. TEST_F(InitGoogleTestTest, FilterNonEmpty) { const char* argv[] = { "foo.exe", "--gtest_filter=abc", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Filter("abc"), false); } // Tests parsing --gtest_break_on_failure. TEST_F(InitGoogleTestTest, BreakOnFailureWithoutValue) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::BreakOnFailure(true), false); } // Tests parsing --gtest_break_on_failure=0. TEST_F(InitGoogleTestTest, BreakOnFailureFalse_0) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::BreakOnFailure(false), false); } // Tests parsing --gtest_break_on_failure=f. TEST_F(InitGoogleTestTest, BreakOnFailureFalse_f) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure=f", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::BreakOnFailure(false), false); } // Tests parsing --gtest_break_on_failure=F. TEST_F(InitGoogleTestTest, BreakOnFailureFalse_F) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure=F", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::BreakOnFailure(false), false); } // Tests parsing a --gtest_break_on_failure flag that has a "true" // definition. TEST_F(InitGoogleTestTest, BreakOnFailureTrue) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::BreakOnFailure(true), false); } // Tests parsing --gtest_catch_exceptions. TEST_F(InitGoogleTestTest, CatchExceptions) { const char* argv[] = { "foo.exe", "--gtest_catch_exceptions", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::CatchExceptions(true), false); } // Tests parsing --gtest_death_test_use_fork. TEST_F(InitGoogleTestTest, DeathTestUseFork) { const char* argv[] = { "foo.exe", "--gtest_death_test_use_fork", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::DeathTestUseFork(true), false); } // Tests having the same flag twice with different values. The // expected behavior is that the one coming last takes precedence. TEST_F(InitGoogleTestTest, DuplicatedFlags) { const char* argv[] = { "foo.exe", "--gtest_filter=a", "--gtest_filter=b", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Filter("b"), false); } // Tests having an unrecognized flag on the command line. TEST_F(InitGoogleTestTest, UnrecognizedFlag) { const char* argv[] = { "foo.exe", "--gtest_break_on_failure", "bar", // Unrecognized by Google Test. 
"--gtest_filter=b", NULL }; const char* argv2[] = { "foo.exe", "bar", NULL }; Flags flags; flags.break_on_failure = true; flags.filter = "b"; GTEST_TEST_PARSING_FLAGS_(argv, argv2, flags, false); } // Tests having a --gtest_list_tests flag TEST_F(InitGoogleTestTest, ListTestsFlag) { const char* argv[] = { "foo.exe", "--gtest_list_tests", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ListTests(true), false); } // Tests having a --gtest_list_tests flag with a "true" value TEST_F(InitGoogleTestTest, ListTestsTrue) { const char* argv[] = { "foo.exe", "--gtest_list_tests=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ListTests(true), false); } // Tests having a --gtest_list_tests flag with a "false" value TEST_F(InitGoogleTestTest, ListTestsFalse) { const char* argv[] = { "foo.exe", "--gtest_list_tests=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ListTests(false), false); } // Tests parsing --gtest_list_tests=f. TEST_F(InitGoogleTestTest, ListTestsFalse_f) { const char* argv[] = { "foo.exe", "--gtest_list_tests=f", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ListTests(false), false); } // Tests parsing --gtest_list_tests=F. TEST_F(InitGoogleTestTest, ListTestsFalse_F) { const char* argv[] = { "foo.exe", "--gtest_list_tests=F", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ListTests(false), false); } // Tests parsing --gtest_output (invalid). TEST_F(InitGoogleTestTest, OutputEmpty) { const char* argv[] = { "foo.exe", "--gtest_output", NULL }; const char* argv2[] = { "foo.exe", "--gtest_output", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags(), true); } // Tests parsing --gtest_output=xml TEST_F(InitGoogleTestTest, OutputXml) { const char* argv[] = { "foo.exe", "--gtest_output=xml", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Output("xml"), false); } // Tests parsing --gtest_output=xml:file TEST_F(InitGoogleTestTest, OutputXmlFile) { const char* argv[] = { "foo.exe", "--gtest_output=xml:file", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Output("xml:file"), false); } // Tests parsing --gtest_output=xml:directory/path/ TEST_F(InitGoogleTestTest, OutputXmlDirectory) { const char* argv[] = { "foo.exe", "--gtest_output=xml:directory/path/", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Output("xml:directory/path/"), false); } // Tests having a --gtest_print_time flag TEST_F(InitGoogleTestTest, PrintTimeFlag) { const char* argv[] = { "foo.exe", "--gtest_print_time", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::PrintTime(true), false); } // Tests having a --gtest_print_time flag with a "true" value TEST_F(InitGoogleTestTest, PrintTimeTrue) { const char* argv[] = { "foo.exe", "--gtest_print_time=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::PrintTime(true), false); } // Tests having a --gtest_print_time flag with a "false" value TEST_F(InitGoogleTestTest, PrintTimeFalse) { const char* argv[] = { "foo.exe", "--gtest_print_time=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::PrintTime(false), false); } // Tests parsing 
--gtest_print_time=f. TEST_F(InitGoogleTestTest, PrintTimeFalse_f) { const char* argv[] = { "foo.exe", "--gtest_print_time=f", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::PrintTime(false), false); } // Tests parsing --gtest_print_time=F. TEST_F(InitGoogleTestTest, PrintTimeFalse_F) { const char* argv[] = { "foo.exe", "--gtest_print_time=F", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::PrintTime(false), false); } // Tests parsing --gtest_random_seed=number TEST_F(InitGoogleTestTest, RandomSeed) { const char* argv[] = { "foo.exe", "--gtest_random_seed=1000", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::RandomSeed(1000), false); } // Tests parsing --gtest_repeat=number TEST_F(InitGoogleTestTest, Repeat) { const char* argv[] = { "foo.exe", "--gtest_repeat=1000", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Repeat(1000), false); } // Tests having a --gtest_also_run_disabled_tests flag TEST_F(InitGoogleTestTest, AlsoRunDisabledTestsFlag) { const char* argv[] = { "foo.exe", "--gtest_also_run_disabled_tests", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::AlsoRunDisabledTests(true), false); } // Tests having a --gtest_also_run_disabled_tests flag with a "true" value TEST_F(InitGoogleTestTest, AlsoRunDisabledTestsTrue) { const char* argv[] = { "foo.exe", "--gtest_also_run_disabled_tests=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::AlsoRunDisabledTests(true), false); } // Tests having a --gtest_also_run_disabled_tests flag with a "false" value TEST_F(InitGoogleTestTest, AlsoRunDisabledTestsFalse) { const char* argv[] = { "foo.exe", "--gtest_also_run_disabled_tests=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::AlsoRunDisabledTests(false), false); } // Tests parsing --gtest_shuffle. TEST_F(InitGoogleTestTest, ShuffleWithoutValue) { const char* argv[] = { "foo.exe", "--gtest_shuffle", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Shuffle(true), false); } // Tests parsing --gtest_shuffle=0. TEST_F(InitGoogleTestTest, ShuffleFalse_0) { const char* argv[] = { "foo.exe", "--gtest_shuffle=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Shuffle(false), false); } // Tests parsing a --gtest_shuffle flag that has a "true" // definition. TEST_F(InitGoogleTestTest, ShuffleTrue) { const char* argv[] = { "foo.exe", "--gtest_shuffle=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::Shuffle(true), false); } // Tests parsing --gtest_stack_trace_depth=number. TEST_F(InitGoogleTestTest, StackTraceDepth) { const char* argv[] = { "foo.exe", "--gtest_stack_trace_depth=5", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::StackTraceDepth(5), false); } TEST_F(InitGoogleTestTest, StreamResultTo) { const char* argv[] = { "foo.exe", "--gtest_stream_result_to=localhost:1234", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_( argv, argv2, Flags::StreamResultTo("localhost:1234"), false); } // Tests parsing --gtest_throw_on_failure. 
TEST_F(InitGoogleTestTest, ThrowOnFailureWithoutValue) { const char* argv[] = { "foo.exe", "--gtest_throw_on_failure", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ThrowOnFailure(true), false); } // Tests parsing --gtest_throw_on_failure=0. TEST_F(InitGoogleTestTest, ThrowOnFailureFalse_0) { const char* argv[] = { "foo.exe", "--gtest_throw_on_failure=0", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ThrowOnFailure(false), false); } // Tests parsing a --gtest_throw_on_failure flag that has a "true" // definition. TEST_F(InitGoogleTestTest, ThrowOnFailureTrue) { const char* argv[] = { "foo.exe", "--gtest_throw_on_failure=1", NULL }; const char* argv2[] = { "foo.exe", NULL }; GTEST_TEST_PARSING_FLAGS_(argv, argv2, Flags::ThrowOnFailure(true), false); } #if GTEST_OS_WINDOWS // Tests parsing wide strings. TEST_F(InitGoogleTestTest, WideStrings) { const wchar_t* argv[] = { L"foo.exe", L"--gtest_filter=Foo*", L"--gtest_list_tests=1", L"--gtest_break_on_failure", L"--non_gtest_flag", NULL }; const wchar_t* argv2[] = { L"foo.exe", L"--non_gtest_flag", NULL }; Flags expected_flags; expected_flags.break_on_failure = true; expected_flags.filter = "Foo*"; expected_flags.list_tests = true; GTEST_TEST_PARSING_FLAGS_(argv, argv2, expected_flags, false); } #endif // GTEST_OS_WINDOWS // Tests current_test_info() in UnitTest. class CurrentTestInfoTest : public Test { protected: // Tests that current_test_info() returns NULL before the first test in // the test case is run. static void SetUpTestCase() { // There should be no tests running at this point. const TestInfo* test_info = UnitTest::GetInstance()->current_test_info(); EXPECT_TRUE(test_info == NULL) << "There should be no tests running at this point."; } // Tests that current_test_info() returns NULL after the last test in // the test case has run. static void TearDownTestCase() { const TestInfo* test_info = UnitTest::GetInstance()->current_test_info(); EXPECT_TRUE(test_info == NULL) << "There should be no tests running at this point."; } }; // Tests that current_test_info() returns TestInfo for currently running // test by checking the expected test name against the actual one. TEST_F(CurrentTestInfoTest, WorksForFirstTestInATestCase) { const TestInfo* test_info = UnitTest::GetInstance()->current_test_info(); ASSERT_TRUE(NULL != test_info) << "There is a test running so we should have a valid TestInfo."; EXPECT_STREQ("CurrentTestInfoTest", test_info->test_case_name()) << "Expected the name of the currently running test case."; EXPECT_STREQ("WorksForFirstTestInATestCase", test_info->name()) << "Expected the name of the currently running test."; } // Tests that current_test_info() returns TestInfo for currently running // test by checking the expected test name against the actual one. We // use this test to see that the TestInfo object actually changed from // the previous invocation. 
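// The InitGoogleTestTest cases above drive internal::ParseGoogleTestFlagsOnly()
// directly; in ordinary user code the same parsing happens inside
// ::testing::InitGoogleTest(), which consumes every --gtest_* argument it
// recognizes and leaves the rest in argv -- the same argv -> argv2
// transformation that GTEST_TEST_PARSING_FLAGS_ verifies.  A minimal
// standalone sketch follows; the main() and the example flags are
// illustrative only and are not part of this file.
#if 0  // illustrative sketch, excluded from the build of this vendored file
#include "gtest/gtest.h"

int main(int argc, char** argv) {
  // Invoked as, e.g.:  ./my_test --gtest_filter=Foo.* --my_own_flag
  ::testing::InitGoogleTest(&argc, argv);
  // --gtest_filter has now been removed from argv; --my_own_flag is still
  // there for the program to handle itself.
  return RUN_ALL_TESTS();
}
#endif
// Typical invocations exercising the flags parsed above (paths and patterns
// are examples only):
//   ./my_test --gtest_list_tests
//   ./my_test --gtest_filter=FooTest.* --gtest_repeat=10
//   ./my_test --gtest_shuffle --gtest_random_seed=12345
//   ./my_test --gtest_output=xml:reports/my_test.xml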
TEST_F(CurrentTestInfoTest, WorksForSecondTestInATestCase) { const TestInfo* test_info = UnitTest::GetInstance()->current_test_info(); ASSERT_TRUE(NULL != test_info) << "There is a test running so we should have a valid TestInfo."; EXPECT_STREQ("CurrentTestInfoTest", test_info->test_case_name()) << "Expected the name of the currently running test case."; EXPECT_STREQ("WorksForSecondTestInATestCase", test_info->name()) << "Expected the name of the currently running test."; } } // namespace testing // These two lines test that we can define tests in a namespace that // has the name "testing" and is nested in another namespace. namespace my_namespace { namespace testing { // Makes sure that TEST knows to use ::testing::Test instead of // ::my_namespace::testing::Test. class Test {}; // Makes sure that an assertion knows to use ::testing::Message instead of // ::my_namespace::testing::Message. class Message {}; // Makes sure that an assertion knows to use // ::testing::AssertionResult instead of // ::my_namespace::testing::AssertionResult. class AssertionResult {}; // Tests that an assertion that should succeed works as expected. TEST(NestedTestingNamespaceTest, Success) { EXPECT_EQ(1, 1) << "This shouldn't fail."; } // Tests that an assertion that should fail works as expected. TEST(NestedTestingNamespaceTest, Failure) { EXPECT_FATAL_FAILURE(FAIL() << "This failure is expected.", "This failure is expected."); } } // namespace testing } // namespace my_namespace // Tests that one can call superclass SetUp and TearDown methods-- // that is, that they are not private. // No tests are based on this fixture; the test "passes" if it compiles // successfully. class ProtectedFixtureMethodsTest : public Test { protected: virtual void SetUp() { Test::SetUp(); } virtual void TearDown() { Test::TearDown(); } }; // StreamingAssertionsTest tests the streaming versions of a representative // sample of assertions. 
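// In user code, "streaming" means that any value with an operator<< can be
// appended to an assertion to add context to its failure message.  A short
// sketch, with made-up test and variable names:
#if 0  // illustrative sketch, excluded from the build of this vendored file
#include "gtest/gtest.h"

TEST(StreamingSketch, MessageIsShownOnlyOnFailure) {
  for (int i = 0; i < 3; ++i) {
    // The streamed text appears in the output only if the assertion fails,
    // which makes failures inside loops much easier to pin down.
    EXPECT_LT(i, 3) << "unexpected loop index: i = " << i;
  }
  ASSERT_TRUE(true) << "never printed, because the assertion succeeds";
}
#endif
// The tests that follow check this behaviour for each assertion family.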
TEST(StreamingAssertionsTest, Unconditional) { SUCCEED() << "expected success"; EXPECT_NONFATAL_FAILURE(ADD_FAILURE() << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(FAIL() << "expected failure", "expected failure"); } #ifdef __BORLANDC__ // Silences warnings: "Condition is always true", "Unreachable code" # pragma option push -w-ccc -w-rch #endif TEST(StreamingAssertionsTest, Truth) { EXPECT_TRUE(true) << "unexpected failure"; ASSERT_TRUE(true) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_TRUE(false) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_TRUE(false) << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, Truth2) { EXPECT_FALSE(false) << "unexpected failure"; ASSERT_FALSE(false) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_FALSE(true) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_FALSE(true) << "expected failure", "expected failure"); } #ifdef __BORLANDC__ // Restores warnings after previous "#pragma option push" supressed them # pragma option pop #endif TEST(StreamingAssertionsTest, IntegerEquals) { EXPECT_EQ(1, 1) << "unexpected failure"; ASSERT_EQ(1, 1) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_EQ(1, 2) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_EQ(1, 2) << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, IntegerLessThan) { EXPECT_LT(1, 2) << "unexpected failure"; ASSERT_LT(1, 2) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_LT(2, 1) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_LT(2, 1) << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, StringsEqual) { EXPECT_STREQ("foo", "foo") << "unexpected failure"; ASSERT_STREQ("foo", "foo") << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_STREQ("foo", "bar") << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_STREQ("foo", "bar") << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, StringsNotEqual) { EXPECT_STRNE("foo", "bar") << "unexpected failure"; ASSERT_STRNE("foo", "bar") << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_STRNE("foo", "foo") << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_STRNE("foo", "foo") << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, StringsEqualIgnoringCase) { EXPECT_STRCASEEQ("foo", "FOO") << "unexpected failure"; ASSERT_STRCASEEQ("foo", "FOO") << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_STRCASEEQ("foo", "bar") << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_STRCASEEQ("foo", "bar") << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, StringNotEqualIgnoringCase) { EXPECT_STRCASENE("foo", "bar") << "unexpected failure"; ASSERT_STRCASENE("foo", "bar") << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_STRCASENE("foo", "FOO") << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_STRCASENE("bar", "BAR") << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, FloatingPointEquals) { EXPECT_FLOAT_EQ(1.0, 1.0) << "unexpected failure"; ASSERT_FLOAT_EQ(1.0, 1.0) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_FLOAT_EQ(0.0, 1.0) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_FLOAT_EQ(0.0, 1.0) << "expected failure", "expected failure"); } #if GTEST_HAS_EXCEPTIONS TEST(StreamingAssertionsTest, Throw) { EXPECT_THROW(ThrowAnInteger(), int) << 
"unexpected failure"; ASSERT_THROW(ThrowAnInteger(), int) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_THROW(ThrowAnInteger(), bool) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_THROW(ThrowAnInteger(), bool) << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, NoThrow) { EXPECT_NO_THROW(ThrowNothing()) << "unexpected failure"; ASSERT_NO_THROW(ThrowNothing()) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_NO_THROW(ThrowAnInteger()) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_NO_THROW(ThrowAnInteger()) << "expected failure", "expected failure"); } TEST(StreamingAssertionsTest, AnyThrow) { EXPECT_ANY_THROW(ThrowAnInteger()) << "unexpected failure"; ASSERT_ANY_THROW(ThrowAnInteger()) << "unexpected failure"; EXPECT_NONFATAL_FAILURE(EXPECT_ANY_THROW(ThrowNothing()) << "expected failure", "expected failure"); EXPECT_FATAL_FAILURE(ASSERT_ANY_THROW(ThrowNothing()) << "expected failure", "expected failure"); } #endif // GTEST_HAS_EXCEPTIONS // Tests that Google Test correctly decides whether to use colors in the output. TEST(ColoredOutputTest, UsesColorsWhenGTestColorFlagIsYes) { GTEST_FLAG(color) = "yes"; SetEnv("TERM", "xterm"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. EXPECT_TRUE(ShouldUseColor(false)); // Stdout is not a TTY. SetEnv("TERM", "dumb"); // TERM doesn't support colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. EXPECT_TRUE(ShouldUseColor(false)); // Stdout is not a TTY. } TEST(ColoredOutputTest, UsesColorsWhenGTestColorFlagIsAliasOfYes) { SetEnv("TERM", "dumb"); // TERM doesn't support colors. GTEST_FLAG(color) = "True"; EXPECT_TRUE(ShouldUseColor(false)); // Stdout is not a TTY. GTEST_FLAG(color) = "t"; EXPECT_TRUE(ShouldUseColor(false)); // Stdout is not a TTY. GTEST_FLAG(color) = "1"; EXPECT_TRUE(ShouldUseColor(false)); // Stdout is not a TTY. } TEST(ColoredOutputTest, UsesNoColorWhenGTestColorFlagIsNo) { GTEST_FLAG(color) = "no"; SetEnv("TERM", "xterm"); // TERM supports colors. EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. EXPECT_FALSE(ShouldUseColor(false)); // Stdout is not a TTY. SetEnv("TERM", "dumb"); // TERM doesn't support colors. EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. EXPECT_FALSE(ShouldUseColor(false)); // Stdout is not a TTY. } TEST(ColoredOutputTest, UsesNoColorWhenGTestColorFlagIsInvalid) { SetEnv("TERM", "xterm"); // TERM supports colors. GTEST_FLAG(color) = "F"; EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. GTEST_FLAG(color) = "0"; EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. GTEST_FLAG(color) = "unknown"; EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. } TEST(ColoredOutputTest, UsesColorsWhenStdoutIsTty) { GTEST_FLAG(color) = "auto"; SetEnv("TERM", "xterm"); // TERM supports colors. EXPECT_FALSE(ShouldUseColor(false)); // Stdout is not a TTY. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. } TEST(ColoredOutputTest, UsesColorsWhenTermSupportsColors) { GTEST_FLAG(color) = "auto"; #if GTEST_OS_WINDOWS // On Windows, we ignore the TERM variable as it's usually not set. SetEnv("TERM", "dumb"); EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", ""); EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "xterm"); EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. #else // On non-Windows platforms, we rely on TERM to determine if the // terminal supports colors. SetEnv("TERM", "dumb"); // TERM doesn't support colors. 
EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "emacs"); // TERM doesn't support colors. EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "vt100"); // TERM doesn't support colors. EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "xterm-mono"); // TERM doesn't support colors. EXPECT_FALSE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "xterm"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "xterm-color"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "xterm-256color"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "screen"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "screen-256color"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "linux"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. SetEnv("TERM", "cygwin"); // TERM supports colors. EXPECT_TRUE(ShouldUseColor(true)); // Stdout is a TTY. #endif // GTEST_OS_WINDOWS } // Verifies that StaticAssertTypeEq works in a namespace scope. static bool dummy1 GTEST_ATTRIBUTE_UNUSED_ = StaticAssertTypeEq(); static bool dummy2 GTEST_ATTRIBUTE_UNUSED_ = StaticAssertTypeEq(); // Verifies that StaticAssertTypeEq works in a class. template class StaticAssertTypeEqTestHelper { public: StaticAssertTypeEqTestHelper() { StaticAssertTypeEq(); } }; TEST(StaticAssertTypeEqTest, WorksInClass) { StaticAssertTypeEqTestHelper(); } // Verifies that StaticAssertTypeEq works inside a function. typedef int IntAlias; TEST(StaticAssertTypeEqTest, CompilesForEqualTypes) { StaticAssertTypeEq(); StaticAssertTypeEq(); } TEST(GetCurrentOsStackTraceExceptTopTest, ReturnsTheStackTrace) { testing::UnitTest* const unit_test = testing::UnitTest::GetInstance(); // We don't have a stack walker in Google Test yet. EXPECT_STREQ("", GetCurrentOsStackTraceExceptTop(unit_test, 0).c_str()); EXPECT_STREQ("", GetCurrentOsStackTraceExceptTop(unit_test, 1).c_str()); } TEST(HasNonfatalFailureTest, ReturnsFalseWhenThereIsNoFailure) { EXPECT_FALSE(HasNonfatalFailure()); } static void FailFatally() { FAIL(); } TEST(HasNonfatalFailureTest, ReturnsFalseWhenThereIsOnlyFatalFailure) { FailFatally(); const bool has_nonfatal_failure = HasNonfatalFailure(); ClearCurrentTestPartResults(); EXPECT_FALSE(has_nonfatal_failure); } TEST(HasNonfatalFailureTest, ReturnsTrueWhenThereIsNonfatalFailure) { ADD_FAILURE(); const bool has_nonfatal_failure = HasNonfatalFailure(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_nonfatal_failure); } TEST(HasNonfatalFailureTest, ReturnsTrueWhenThereAreFatalAndNonfatalFailures) { FailFatally(); ADD_FAILURE(); const bool has_nonfatal_failure = HasNonfatalFailure(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_nonfatal_failure); } // A wrapper for calling HasNonfatalFailure outside of a test body. 
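// These predicates are most useful in shared helpers and in TearDown(), where
// follow-up work should be skipped once something has already failed.  A
// sketch; the helper, fixture, and values below are illustrative only.
#if 0  // illustrative sketch, excluded from the build of this vendored file
#include "gtest/gtest.h"

// A fatal assertion inside a subroutine only aborts the subroutine, so the
// caller checks HasFatalFailure() to decide whether to continue.
void OpenFixtureFile(bool ok) {
  ASSERT_TRUE(ok) << "could not open fixture file";
}

TEST(HasFailureSketch, StopsAfterHelperFails) {
  OpenFixtureFile(true);
  if (testing::Test::HasFatalFailure()) return;  // nothing useful left to check
  EXPECT_FALSE(testing::Test::HasNonfatalFailure());
  EXPECT_FALSE(testing::Test::HasFailure());  // no failure of either kind so far
}
#endif
// The helpers just below wrap the same calls so the tests can verify that they
// also work when invoked from outside a test body.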
static bool HasNonfatalFailureHelper() { return testing::Test::HasNonfatalFailure(); } TEST(HasNonfatalFailureTest, WorksOutsideOfTestBody) { EXPECT_FALSE(HasNonfatalFailureHelper()); } TEST(HasNonfatalFailureTest, WorksOutsideOfTestBody2) { ADD_FAILURE(); const bool has_nonfatal_failure = HasNonfatalFailureHelper(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_nonfatal_failure); } TEST(HasFailureTest, ReturnsFalseWhenThereIsNoFailure) { EXPECT_FALSE(HasFailure()); } TEST(HasFailureTest, ReturnsTrueWhenThereIsFatalFailure) { FailFatally(); const bool has_failure = HasFailure(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_failure); } TEST(HasFailureTest, ReturnsTrueWhenThereIsNonfatalFailure) { ADD_FAILURE(); const bool has_failure = HasFailure(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_failure); } TEST(HasFailureTest, ReturnsTrueWhenThereAreFatalAndNonfatalFailures) { FailFatally(); ADD_FAILURE(); const bool has_failure = HasFailure(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_failure); } // A wrapper for calling HasFailure outside of a test body. static bool HasFailureHelper() { return testing::Test::HasFailure(); } TEST(HasFailureTest, WorksOutsideOfTestBody) { EXPECT_FALSE(HasFailureHelper()); } TEST(HasFailureTest, WorksOutsideOfTestBody2) { ADD_FAILURE(); const bool has_failure = HasFailureHelper(); ClearCurrentTestPartResults(); EXPECT_TRUE(has_failure); } class TestListener : public EmptyTestEventListener { public: TestListener() : on_start_counter_(NULL), is_destroyed_(NULL) {} TestListener(int* on_start_counter, bool* is_destroyed) : on_start_counter_(on_start_counter), is_destroyed_(is_destroyed) {} virtual ~TestListener() { if (is_destroyed_) *is_destroyed_ = true; } protected: virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) { if (on_start_counter_ != NULL) (*on_start_counter_)++; } private: int* on_start_counter_; bool* is_destroyed_; }; // Tests the constructor. TEST(TestEventListenersTest, ConstructionWorks) { TestEventListeners listeners; EXPECT_TRUE(TestEventListenersAccessor::GetRepeater(&listeners) != NULL); EXPECT_TRUE(listeners.default_result_printer() == NULL); EXPECT_TRUE(listeners.default_xml_generator() == NULL); } // Tests that the TestEventListeners destructor deletes all the listeners it // owns. TEST(TestEventListenersTest, DestructionWorks) { bool default_result_printer_is_destroyed = false; bool default_xml_printer_is_destroyed = false; bool extra_listener_is_destroyed = false; TestListener* default_result_printer = new TestListener( NULL, &default_result_printer_is_destroyed); TestListener* default_xml_printer = new TestListener( NULL, &default_xml_printer_is_destroyed); TestListener* extra_listener = new TestListener( NULL, &extra_listener_is_destroyed); { TestEventListeners listeners; TestEventListenersAccessor::SetDefaultResultPrinter(&listeners, default_result_printer); TestEventListenersAccessor::SetDefaultXmlGenerator(&listeners, default_xml_printer); listeners.Append(extra_listener); } EXPECT_TRUE(default_result_printer_is_destroyed); EXPECT_TRUE(default_xml_printer_is_destroyed); EXPECT_TRUE(extra_listener_is_destroyed); } // Tests that a listener Append'ed to a TestEventListeners list starts // receiving events. 
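// User code reaches the same machinery through the UnitTest singleton: a
// listener appended to UnitTest::GetInstance()->listeners() (normally in
// main(), after InitGoogleTest()) receives every test event, and the list
// owns it unless it is later Release()d.  A sketch; the listener class,
// main(), and output format below are illustrative only.
#if 0  // illustrative sketch, excluded from the build of this vendored file
#include <cstdio>
#include "gtest/gtest.h"

class BannerPrinter : public testing::EmptyTestEventListener {
  // Only the events we care about need overriding; EmptyTestEventListener
  // supplies empty implementations for all the others.
  virtual void OnTestStart(const testing::TestInfo& info) {
    std::printf("*** %s.%s starting\n", info.test_case_name(), info.name());
  }
};

int main(int argc, char** argv) {
  testing::InitGoogleTest(&argc, argv);
  // Ownership passes to the listener list, which deletes the object at exit.
  testing::UnitTest::GetInstance()->listeners().Append(new BannerPrinter);
  return RUN_ALL_TESTS();
}
#endif
// The Append test below verifies the same start-receiving-events behaviour on
// a standalone TestEventListeners list.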
TEST(TestEventListenersTest, Append) { int on_start_counter = 0; bool is_destroyed = false; TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); { TestEventListeners listeners; listeners.Append(listener); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(1, on_start_counter); } EXPECT_TRUE(is_destroyed); } // Tests that listeners receive events in the order they were appended to // the list, except for *End requests, which must be received in the reverse // order. class SequenceTestingListener : public EmptyTestEventListener { public: SequenceTestingListener(std::vector* vector, const char* id) : vector_(vector), id_(id) {} protected: virtual void OnTestProgramStart(const UnitTest& /*unit_test*/) { vector_->push_back(GetEventDescription("OnTestProgramStart")); } virtual void OnTestProgramEnd(const UnitTest& /*unit_test*/) { vector_->push_back(GetEventDescription("OnTestProgramEnd")); } virtual void OnTestIterationStart(const UnitTest& /*unit_test*/, int /*iteration*/) { vector_->push_back(GetEventDescription("OnTestIterationStart")); } virtual void OnTestIterationEnd(const UnitTest& /*unit_test*/, int /*iteration*/) { vector_->push_back(GetEventDescription("OnTestIterationEnd")); } private: std::string GetEventDescription(const char* method) { Message message; message << id_ << "." << method; return message.GetString(); } std::vector* vector_; const char* const id_; GTEST_DISALLOW_COPY_AND_ASSIGN_(SequenceTestingListener); }; TEST(EventListenerTest, AppendKeepsOrder) { std::vector vec; TestEventListeners listeners; listeners.Append(new SequenceTestingListener(&vec, "1st")); listeners.Append(new SequenceTestingListener(&vec, "2nd")); listeners.Append(new SequenceTestingListener(&vec, "3rd")); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); ASSERT_EQ(3U, vec.size()); EXPECT_STREQ("1st.OnTestProgramStart", vec[0].c_str()); EXPECT_STREQ("2nd.OnTestProgramStart", vec[1].c_str()); EXPECT_STREQ("3rd.OnTestProgramStart", vec[2].c_str()); vec.clear(); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramEnd( *UnitTest::GetInstance()); ASSERT_EQ(3U, vec.size()); EXPECT_STREQ("3rd.OnTestProgramEnd", vec[0].c_str()); EXPECT_STREQ("2nd.OnTestProgramEnd", vec[1].c_str()); EXPECT_STREQ("1st.OnTestProgramEnd", vec[2].c_str()); vec.clear(); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestIterationStart( *UnitTest::GetInstance(), 0); ASSERT_EQ(3U, vec.size()); EXPECT_STREQ("1st.OnTestIterationStart", vec[0].c_str()); EXPECT_STREQ("2nd.OnTestIterationStart", vec[1].c_str()); EXPECT_STREQ("3rd.OnTestIterationStart", vec[2].c_str()); vec.clear(); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestIterationEnd( *UnitTest::GetInstance(), 0); ASSERT_EQ(3U, vec.size()); EXPECT_STREQ("3rd.OnTestIterationEnd", vec[0].c_str()); EXPECT_STREQ("2nd.OnTestIterationEnd", vec[1].c_str()); EXPECT_STREQ("1st.OnTestIterationEnd", vec[2].c_str()); } // Tests that a listener removed from a TestEventListeners list stops receiving // events and is not deleted when the list is destroyed. TEST(TestEventListenersTest, Release) { int on_start_counter = 0; bool is_destroyed = false; // Although Append passes the ownership of this object to the list, // the following calls release it, and we need to delete it before the // test ends. 
TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); { TestEventListeners listeners; listeners.Append(listener); EXPECT_EQ(listener, listeners.Release(listener)); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_TRUE(listeners.Release(listener) == NULL); } EXPECT_EQ(0, on_start_counter); EXPECT_FALSE(is_destroyed); delete listener; } // Tests that no events are forwarded when event forwarding is disabled. TEST(EventListenerTest, SuppressEventForwarding) { int on_start_counter = 0; TestListener* listener = new TestListener(&on_start_counter, NULL); TestEventListeners listeners; listeners.Append(listener); ASSERT_TRUE(TestEventListenersAccessor::EventForwardingEnabled(listeners)); TestEventListenersAccessor::SuppressEventForwarding(&listeners); ASSERT_FALSE(TestEventListenersAccessor::EventForwardingEnabled(listeners)); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(0, on_start_counter); } // Tests that events generated by Google Test are not forwarded in // death test subprocesses. TEST(EventListenerDeathTest, EventsNotForwardedInDeathTestSubprecesses) { EXPECT_DEATH_IF_SUPPORTED({ GTEST_CHECK_(TestEventListenersAccessor::EventForwardingEnabled( *GetUnitTestImpl()->listeners())) << "expected failure";}, "expected failure"); } // Tests that a listener installed via SetDefaultResultPrinter() starts // receiving events and is returned via default_result_printer() and that // the previous default_result_printer is removed from the list and deleted. TEST(EventListenerTest, default_result_printer) { int on_start_counter = 0; bool is_destroyed = false; TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); TestEventListeners listeners; TestEventListenersAccessor::SetDefaultResultPrinter(&listeners, listener); EXPECT_EQ(listener, listeners.default_result_printer()); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(1, on_start_counter); // Replacing default_result_printer with something else should remove it // from the list and destroy it. TestEventListenersAccessor::SetDefaultResultPrinter(&listeners, NULL); EXPECT_TRUE(listeners.default_result_printer() == NULL); EXPECT_TRUE(is_destroyed); // After broadcasting an event the counter is still the same, indicating // the listener is not in the list anymore. TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(1, on_start_counter); } // Tests that the default_result_printer listener stops receiving events // when removed via Release and that is not owned by the list anymore. TEST(EventListenerTest, RemovingDefaultResultPrinterWorks) { int on_start_counter = 0; bool is_destroyed = false; // Although Append passes the ownership of this object to the list, // the following calls release it, and we need to delete it before the // test ends. TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); { TestEventListeners listeners; TestEventListenersAccessor::SetDefaultResultPrinter(&listeners, listener); EXPECT_EQ(listener, listeners.Release(listener)); EXPECT_TRUE(listeners.default_result_printer() == NULL); EXPECT_FALSE(is_destroyed); // Broadcasting events now should not affect default_result_printer. 
TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(0, on_start_counter); } // Destroying the list should not affect the listener now, too. EXPECT_FALSE(is_destroyed); delete listener; } // Tests that a listener installed via SetDefaultXmlGenerator() starts // receiving events and is returned via default_xml_generator() and that // the previous default_xml_generator is removed from the list and deleted. TEST(EventListenerTest, default_xml_generator) { int on_start_counter = 0; bool is_destroyed = false; TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); TestEventListeners listeners; TestEventListenersAccessor::SetDefaultXmlGenerator(&listeners, listener); EXPECT_EQ(listener, listeners.default_xml_generator()); TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(1, on_start_counter); // Replacing default_xml_generator with something else should remove it // from the list and destroy it. TestEventListenersAccessor::SetDefaultXmlGenerator(&listeners, NULL); EXPECT_TRUE(listeners.default_xml_generator() == NULL); EXPECT_TRUE(is_destroyed); // After broadcasting an event the counter is still the same, indicating // the listener is not in the list anymore. TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(1, on_start_counter); } // Tests that the default_xml_generator listener stops receiving events // when removed via Release and that is not owned by the list anymore. TEST(EventListenerTest, RemovingDefaultXmlGeneratorWorks) { int on_start_counter = 0; bool is_destroyed = false; // Although Append passes the ownership of this object to the list, // the following calls release it, and we need to delete it before the // test ends. TestListener* listener = new TestListener(&on_start_counter, &is_destroyed); { TestEventListeners listeners; TestEventListenersAccessor::SetDefaultXmlGenerator(&listeners, listener); EXPECT_EQ(listener, listeners.Release(listener)); EXPECT_TRUE(listeners.default_xml_generator() == NULL); EXPECT_FALSE(is_destroyed); // Broadcasting events now should not affect default_xml_generator. TestEventListenersAccessor::GetRepeater(&listeners)->OnTestProgramStart( *UnitTest::GetInstance()); EXPECT_EQ(0, on_start_counter); } // Destroying the list should not affect the listener now, too. EXPECT_FALSE(is_destroyed); delete listener; } // Sanity tests to ensure that the alternative, verbose spellings of // some of the macros work. We don't test them thoroughly as that // would be quite involved. Since their implementations are // straightforward, and they are rarely used, we'll just rely on the // users to tell us when they are broken. GTEST_TEST(AlternativeNameTest, Works) { // GTEST_TEST is the same as TEST. GTEST_SUCCEED() << "OK"; // GTEST_SUCCEED is the same as SUCCEED. // GTEST_FAIL is the same as FAIL. EXPECT_FATAL_FAILURE(GTEST_FAIL() << "An expected failure", "An expected failure"); // GTEST_ASSERT_XY is the same as ASSERT_XY. 
GTEST_ASSERT_EQ(0, 0); EXPECT_FATAL_FAILURE(GTEST_ASSERT_EQ(0, 1) << "An expected failure", "An expected failure"); EXPECT_FATAL_FAILURE(GTEST_ASSERT_EQ(1, 0) << "An expected failure", "An expected failure"); GTEST_ASSERT_NE(0, 1); GTEST_ASSERT_NE(1, 0); EXPECT_FATAL_FAILURE(GTEST_ASSERT_NE(0, 0) << "An expected failure", "An expected failure"); GTEST_ASSERT_LE(0, 0); GTEST_ASSERT_LE(0, 1); EXPECT_FATAL_FAILURE(GTEST_ASSERT_LE(1, 0) << "An expected failure", "An expected failure"); GTEST_ASSERT_LT(0, 1); EXPECT_FATAL_FAILURE(GTEST_ASSERT_LT(0, 0) << "An expected failure", "An expected failure"); EXPECT_FATAL_FAILURE(GTEST_ASSERT_LT(1, 0) << "An expected failure", "An expected failure"); GTEST_ASSERT_GE(0, 0); GTEST_ASSERT_GE(1, 0); EXPECT_FATAL_FAILURE(GTEST_ASSERT_GE(0, 1) << "An expected failure", "An expected failure"); GTEST_ASSERT_GT(1, 0); EXPECT_FATAL_FAILURE(GTEST_ASSERT_GT(0, 1) << "An expected failure", "An expected failure"); EXPECT_FATAL_FAILURE(GTEST_ASSERT_GT(1, 1) << "An expected failure", "An expected failure"); } // Tests for internal utilities necessary for implementation of the universal // printing. // TODO(vladl@google.com): Find a better home for them. class ConversionHelperBase {}; class ConversionHelperDerived : public ConversionHelperBase {}; // Tests that IsAProtocolMessage::value is a compile-time constant. TEST(IsAProtocolMessageTest, ValueIsCompileTimeConstant) { GTEST_COMPILE_ASSERT_(IsAProtocolMessage::value, const_true); GTEST_COMPILE_ASSERT_(!IsAProtocolMessage::value, const_false); } // Tests that IsAProtocolMessage::value is true when T is // proto2::Message or a sub-class of it. TEST(IsAProtocolMessageTest, ValueIsTrueWhenTypeIsAProtocolMessage) { EXPECT_TRUE(IsAProtocolMessage< ::proto2::Message>::value); EXPECT_TRUE(IsAProtocolMessage::value); } // Tests that IsAProtocolMessage::value is false when T is neither // ProtocolMessage nor a sub-class of it. TEST(IsAProtocolMessageTest, ValueIsFalseWhenTypeIsNotAProtocolMessage) { EXPECT_FALSE(IsAProtocolMessage::value); EXPECT_FALSE(IsAProtocolMessage::value); } // Tests that CompileAssertTypesEqual compiles when the type arguments are // equal. TEST(CompileAssertTypesEqual, CompilesWhenTypesAreEqual) { CompileAssertTypesEqual(); CompileAssertTypesEqual(); } // Tests that RemoveReference does not affect non-reference types. TEST(RemoveReferenceTest, DoesNotAffectNonReferenceType) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests that RemoveReference removes reference from reference types. TEST(RemoveReferenceTest, RemovesReference) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests GTEST_REMOVE_REFERENCE_. template void TestGTestRemoveReference() { CompileAssertTypesEqual(); } TEST(RemoveReferenceTest, MacroVersion) { TestGTestRemoveReference(); TestGTestRemoveReference(); } // Tests that RemoveConst does not affect non-const types. TEST(RemoveConstTest, DoesNotAffectNonConstType) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests that RemoveConst removes const from const types. TEST(RemoveConstTest, RemovesConst) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests GTEST_REMOVE_CONST_. template void TestGTestRemoveConst() { CompileAssertTypesEqual(); } TEST(RemoveConstTest, MacroVersion) { TestGTestRemoveConst(); TestGTestRemoveConst(); TestGTestRemoveConst(); } // Tests GTEST_REMOVE_REFERENCE_AND_CONST_. 
template void TestGTestRemoveReferenceAndConst() { CompileAssertTypesEqual(); } TEST(RemoveReferenceToConstTest, Works) { TestGTestRemoveReferenceAndConst(); TestGTestRemoveReferenceAndConst(); TestGTestRemoveReferenceAndConst(); TestGTestRemoveReferenceAndConst(); TestGTestRemoveReferenceAndConst(); } // Tests that AddReference does not affect reference types. TEST(AddReferenceTest, DoesNotAffectReferenceType) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests that AddReference adds reference to non-reference types. TEST(AddReferenceTest, AddsReference) { CompileAssertTypesEqual::type>(); CompileAssertTypesEqual::type>(); } // Tests GTEST_ADD_REFERENCE_. template void TestGTestAddReference() { CompileAssertTypesEqual(); } TEST(AddReferenceTest, MacroVersion) { TestGTestAddReference(); TestGTestAddReference(); } // Tests GTEST_REFERENCE_TO_CONST_. template void TestGTestReferenceToConst() { CompileAssertTypesEqual(); } TEST(GTestReferenceToConstTest, Works) { TestGTestReferenceToConst(); TestGTestReferenceToConst(); TestGTestReferenceToConst(); TestGTestReferenceToConst(); } // Tests that ImplicitlyConvertible::value is a compile-time constant. TEST(ImplicitlyConvertibleTest, ValueIsCompileTimeConstant) { GTEST_COMPILE_ASSERT_((ImplicitlyConvertible::value), const_true); GTEST_COMPILE_ASSERT_((!ImplicitlyConvertible::value), const_false); } // Tests that ImplicitlyConvertible::value is true when T1 can // be implicitly converted to T2. TEST(ImplicitlyConvertibleTest, ValueIsTrueWhenConvertible) { EXPECT_TRUE((ImplicitlyConvertible::value)); EXPECT_TRUE((ImplicitlyConvertible::value)); EXPECT_TRUE((ImplicitlyConvertible::value)); EXPECT_TRUE((ImplicitlyConvertible::value)); EXPECT_TRUE((ImplicitlyConvertible::value)); EXPECT_TRUE((ImplicitlyConvertible::value)); } // Tests that ImplicitlyConvertible::value is false when T1 // cannot be implicitly converted to T2. TEST(ImplicitlyConvertibleTest, ValueIsFalseWhenNotConvertible) { EXPECT_FALSE((ImplicitlyConvertible::value)); EXPECT_FALSE((ImplicitlyConvertible::value)); EXPECT_FALSE((ImplicitlyConvertible::value)); EXPECT_FALSE((ImplicitlyConvertible::value)); } // Tests IsContainerTest. class NonContainer {}; TEST(IsContainerTestTest, WorksForNonContainer) { EXPECT_EQ(sizeof(IsNotContainer), sizeof(IsContainerTest(0))); EXPECT_EQ(sizeof(IsNotContainer), sizeof(IsContainerTest(0))); EXPECT_EQ(sizeof(IsNotContainer), sizeof(IsContainerTest(0))); } TEST(IsContainerTestTest, WorksForContainer) { EXPECT_EQ(sizeof(IsContainer), sizeof(IsContainerTest >(0))); EXPECT_EQ(sizeof(IsContainer), sizeof(IsContainerTest >(0))); } // Tests ArrayEq(). TEST(ArrayEqTest, WorksForDegeneratedArrays) { EXPECT_TRUE(ArrayEq(5, 5L)); EXPECT_FALSE(ArrayEq('a', 0)); } TEST(ArrayEqTest, WorksForOneDimensionalArrays) { // Note that a and b are distinct but compatible types. const int a[] = { 0, 1 }; long b[] = { 0, 1 }; EXPECT_TRUE(ArrayEq(a, b)); EXPECT_TRUE(ArrayEq(a, 2, b)); b[0] = 2; EXPECT_FALSE(ArrayEq(a, b)); EXPECT_FALSE(ArrayEq(a, 1, b)); } TEST(ArrayEqTest, WorksForTwoDimensionalArrays) { const char a[][3] = { "hi", "lo" }; const char b[][3] = { "hi", "lo" }; const char c[][3] = { "hi", "li" }; EXPECT_TRUE(ArrayEq(a, b)); EXPECT_TRUE(ArrayEq(a, 2, b)); EXPECT_FALSE(ArrayEq(a, c)); EXPECT_FALSE(ArrayEq(a, 2, c)); } // Tests ArrayAwareFind(). 
TEST(ArrayAwareFindTest, WorksForOneDimensionalArray) { const char a[] = "hello"; EXPECT_EQ(a + 4, ArrayAwareFind(a, a + 5, 'o')); EXPECT_EQ(a + 5, ArrayAwareFind(a, a + 5, 'x')); } TEST(ArrayAwareFindTest, WorksForTwoDimensionalArray) { int a[][2] = { { 0, 1 }, { 2, 3 }, { 4, 5 } }; const int b[2] = { 2, 3 }; EXPECT_EQ(a + 1, ArrayAwareFind(a, a + 3, b)); const int c[2] = { 6, 7 }; EXPECT_EQ(a + 3, ArrayAwareFind(a, a + 3, c)); } // Tests CopyArray(). TEST(CopyArrayTest, WorksForDegeneratedArrays) { int n = 0; CopyArray('a', &n); EXPECT_EQ('a', n); } TEST(CopyArrayTest, WorksForOneDimensionalArrays) { const char a[3] = "hi"; int b[3]; #ifndef __BORLANDC__ // C++Builder cannot compile some array size deductions. CopyArray(a, &b); EXPECT_TRUE(ArrayEq(a, b)); #endif int c[3]; CopyArray(a, 3, c); EXPECT_TRUE(ArrayEq(a, c)); } TEST(CopyArrayTest, WorksForTwoDimensionalArrays) { const int a[2][3] = { { 0, 1, 2 }, { 3, 4, 5 } }; int b[2][3]; #ifndef __BORLANDC__ // C++Builder cannot compile some array size deductions. CopyArray(a, &b); EXPECT_TRUE(ArrayEq(a, b)); #endif int c[2][3]; CopyArray(a, 2, c); EXPECT_TRUE(ArrayEq(a, c)); } // Tests NativeArray. TEST(NativeArrayTest, ConstructorFromArrayWorks) { const int a[3] = { 0, 1, 2 }; NativeArray na(a, 3, kReference); EXPECT_EQ(3U, na.size()); EXPECT_EQ(a, na.begin()); } TEST(NativeArrayTest, CreatesAndDeletesCopyOfArrayWhenAskedTo) { typedef int Array[2]; Array* a = new Array[1]; (*a)[0] = 0; (*a)[1] = 1; NativeArray na(*a, 2, kCopy); EXPECT_NE(*a, na.begin()); delete[] a; EXPECT_EQ(0, na.begin()[0]); EXPECT_EQ(1, na.begin()[1]); // We rely on the heap checker to verify that na deletes the copy of // array. } TEST(NativeArrayTest, TypeMembersAreCorrect) { StaticAssertTypeEq::value_type>(); StaticAssertTypeEq::value_type>(); StaticAssertTypeEq::const_iterator>(); StaticAssertTypeEq::const_iterator>(); } TEST(NativeArrayTest, MethodsWork) { const int a[3] = { 0, 1, 2 }; NativeArray na(a, 3, kCopy); ASSERT_EQ(3U, na.size()); EXPECT_EQ(3, na.end() - na.begin()); NativeArray::const_iterator it = na.begin(); EXPECT_EQ(0, *it); ++it; EXPECT_EQ(1, *it); it++; EXPECT_EQ(2, *it); ++it; EXPECT_EQ(na.end(), it); EXPECT_TRUE(na == na); NativeArray na2(a, 3, kReference); EXPECT_TRUE(na == na2); const int b1[3] = { 0, 1, 1 }; const int b2[4] = { 0, 1, 2, 3 }; EXPECT_FALSE(na == NativeArray(b1, 3, kReference)); EXPECT_FALSE(na == NativeArray(b2, 4, kCopy)); } TEST(NativeArrayTest, WorksForTwoDimensionalArray) { const char a[2][3] = { "hi", "lo" }; NativeArray na(a, 2, kReference); ASSERT_EQ(2U, na.size()); EXPECT_EQ(a, na.begin()); } // Tests SkipPrefix(). TEST(SkipPrefixTest, SkipsWhenPrefixMatches) { const char* const str = "hello"; const char* p = str; EXPECT_TRUE(SkipPrefix("", &p)); EXPECT_EQ(str, p); p = str; EXPECT_TRUE(SkipPrefix("hell", &p)); EXPECT_EQ(str + 4, p); } TEST(SkipPrefixTest, DoesNotSkipWhenPrefixDoesNotMatch) { const char* const str = "world"; const char* p = str; EXPECT_FALSE(SkipPrefix("W", &p)); EXPECT_EQ(str, p); p = str; EXPECT_FALSE(SkipPrefix("world!", &p)); EXPECT_EQ(str, p); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_outfile1_test_.cc000066400000000000000000000037331346026536000246030ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: keith.ray@gmail.com (Keith Ray) // // gtest_xml_outfile1_test_ writes some xml via TestProperty used by // gtest_xml_outfiles_test.py #include "gtest/gtest.h" class PropertyOne : public testing::Test { protected: virtual void SetUp() { RecordProperty("SetUpProp", 1); } virtual void TearDown() { RecordProperty("TearDownProp", 1); } }; TEST_F(PropertyOne, TestSomeProperties) { RecordProperty("TestSomeProperty", 1); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_outfile2_test_.cc000066400000000000000000000037331346026536000246040ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: keith.ray@gmail.com (Keith Ray) // // gtest_xml_outfile2_test_ writes some xml via TestProperty used by // gtest_xml_outfiles_test.py #include "gtest/gtest.h" class PropertyTwo : public testing::Test { protected: virtual void SetUp() { RecordProperty("SetUpProp", 2); } virtual void TearDown() { RecordProperty("TearDownProp", 2); } }; TEST_F(PropertyTwo, TestSomeProperties) { RecordProperty("TestSomeProperty", 2); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_outfiles_test.py000077500000000000000000000123341346026536000246110ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test for the gtest_xml_output module.""" __author__ = "keith.ray@gmail.com (Keith Ray)" import os from xml.dom import minidom, Node import gtest_test_utils import gtest_xml_test_utils GTEST_OUTPUT_SUBDIR = "xml_outfiles" GTEST_OUTPUT_1_TEST = "gtest_xml_outfile1_test_" GTEST_OUTPUT_2_TEST = "gtest_xml_outfile2_test_" EXPECTED_XML_1 = """ """ EXPECTED_XML_2 = """ """ class GTestXMLOutFilesTest(gtest_xml_test_utils.GTestXMLTestCase): """Unit test for Google Test's XML output functionality.""" def setUp(self): # We want the trailing '/' that the last "" provides in os.path.join, for # telling Google Test to create an output directory instead of a single file # for xml output. 
self.output_dir_ = os.path.join(gtest_test_utils.GetTempDir(), GTEST_OUTPUT_SUBDIR, "") self.DeleteFilesAndDir() def tearDown(self): self.DeleteFilesAndDir() def DeleteFilesAndDir(self): try: os.remove(os.path.join(self.output_dir_, GTEST_OUTPUT_1_TEST + ".xml")) except os.error: pass try: os.remove(os.path.join(self.output_dir_, GTEST_OUTPUT_2_TEST + ".xml")) except os.error: pass try: os.rmdir(self.output_dir_) except os.error: pass def testOutfile1(self): self._TestOutFile(GTEST_OUTPUT_1_TEST, EXPECTED_XML_1) def testOutfile2(self): self._TestOutFile(GTEST_OUTPUT_2_TEST, EXPECTED_XML_2) def _TestOutFile(self, test_name, expected_xml): gtest_prog_path = gtest_test_utils.GetTestExecutablePath(test_name) command = [gtest_prog_path, "--gtest_output=xml:%s" % self.output_dir_] p = gtest_test_utils.Subprocess(command, working_dir=gtest_test_utils.GetTempDir()) self.assert_(p.exited) self.assertEquals(0, p.exit_code) # TODO(wan@google.com): libtool causes the built test binary to be # named lt-gtest_xml_outfiles_test_ instead of # gtest_xml_outfiles_test_. To account for this possibillity, we # allow both names in the following code. We should remove this # hack when Chandler Carruth's libtool replacement tool is ready. output_file_name1 = test_name + ".xml" output_file1 = os.path.join(self.output_dir_, output_file_name1) output_file_name2 = 'lt-' + output_file_name1 output_file2 = os.path.join(self.output_dir_, output_file_name2) self.assert_(os.path.isfile(output_file1) or os.path.isfile(output_file2), output_file1) expected = minidom.parseString(expected_xml) if os.path.isfile(output_file1): actual = minidom.parse(output_file1) else: actual = minidom.parse(output_file2) self.NormalizeXml(actual.documentElement) self.AssertEquivalentNodes(expected.documentElement, actual.documentElement) expected.unlink() actual.unlink() if __name__ == "__main__": os.environ["GTEST_STACK_TRACE_DEPTH"] = "0" gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_output_unittest.py000077500000000000000000000343641346026536000252260ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test for the gtest_xml_output module""" __author__ = 'eefacm@gmail.com (Sean Mcafee)' import datetime import errno import os import re import sys from xml.dom import minidom, Node import gtest_test_utils import gtest_xml_test_utils GTEST_FILTER_FLAG = '--gtest_filter' GTEST_LIST_TESTS_FLAG = '--gtest_list_tests' GTEST_OUTPUT_FLAG = "--gtest_output" GTEST_DEFAULT_OUTPUT_FILE = "test_detail.xml" GTEST_PROGRAM_NAME = "gtest_xml_output_unittest_" SUPPORTS_STACK_TRACES = False if SUPPORTS_STACK_TRACES: STACK_TRACE_TEMPLATE = '\nStack trace:\n*' else: STACK_TRACE_TEMPLATE = '' EXPECTED_NON_EMPTY_XML = """ ]]>%(stack)s]]> """ % {'stack': STACK_TRACE_TEMPLATE} EXPECTED_FILTERED_TEST_XML = """ """ EXPECTED_EMPTY_XML = """ """ GTEST_PROGRAM_PATH = gtest_test_utils.GetTestExecutablePath(GTEST_PROGRAM_NAME) SUPPORTS_TYPED_TESTS = 'TypedTest' in gtest_test_utils.Subprocess( [GTEST_PROGRAM_PATH, GTEST_LIST_TESTS_FLAG], capture_stderr=False).output class GTestXMLOutputUnitTest(gtest_xml_test_utils.GTestXMLTestCase): """ Unit test for Google Test's XML output functionality. """ # This test currently breaks on platforms that do not support typed and # type-parameterized tests, so we don't run it under them. if SUPPORTS_TYPED_TESTS: def testNonEmptyXmlOutput(self): """ Runs a test program that generates a non-empty XML output, and tests that the XML output is expected. """ self._TestXmlOutput(GTEST_PROGRAM_NAME, EXPECTED_NON_EMPTY_XML, 1) def testEmptyXmlOutput(self): """Verifies XML output for a Google Test binary without actual tests. Runs a test program that generates an empty XML output, and tests that the XML output is expected. """ self._TestXmlOutput('gtest_no_test_unittest', EXPECTED_EMPTY_XML, 0) def testTimestampValue(self): """Checks whether the timestamp attribute in the XML output is valid. Runs a test program that generates an empty XML output, and checks if the timestamp attribute in the testsuites tag is valid. """ actual = self._GetXmlOutput('gtest_no_test_unittest', [], 0) date_time_str = actual.documentElement.getAttributeNode('timestamp').value # datetime.strptime() is only available in Python 2.5+ so we have to # parse the expected datetime manually. match = re.match(r'(\d+)-(\d\d)-(\d\d)T(\d\d):(\d\d):(\d\d)', date_time_str) self.assertTrue( re.match, 'XML datettime string %s has incorrect format' % date_time_str) date_time_from_xml = datetime.datetime( year=int(match.group(1)), month=int(match.group(2)), day=int(match.group(3)), hour=int(match.group(4)), minute=int(match.group(5)), second=int(match.group(6))) time_delta = abs(datetime.datetime.now() - date_time_from_xml) # timestamp value should be near the current local time self.assertTrue(time_delta < datetime.timedelta(seconds=600), 'time_delta is %s' % time_delta) actual.unlink() def testDefaultOutputFile(self): """ Confirms that Google Test produces an XML output file with the expected default name if no name is explicitly specified. 
""" output_file = os.path.join(gtest_test_utils.GetTempDir(), GTEST_DEFAULT_OUTPUT_FILE) gtest_prog_path = gtest_test_utils.GetTestExecutablePath( 'gtest_no_test_unittest') try: os.remove(output_file) except OSError, e: if e.errno != errno.ENOENT: raise p = gtest_test_utils.Subprocess( [gtest_prog_path, '%s=xml' % GTEST_OUTPUT_FLAG], working_dir=gtest_test_utils.GetTempDir()) self.assert_(p.exited) self.assertEquals(0, p.exit_code) self.assert_(os.path.isfile(output_file)) def testSuppressedXmlOutput(self): """ Tests that no XML file is generated if the default XML listener is shut down before RUN_ALL_TESTS is invoked. """ xml_path = os.path.join(gtest_test_utils.GetTempDir(), GTEST_PROGRAM_NAME + 'out.xml') if os.path.isfile(xml_path): os.remove(xml_path) command = [GTEST_PROGRAM_PATH, '%s=xml:%s' % (GTEST_OUTPUT_FLAG, xml_path), '--shut_down_xml'] p = gtest_test_utils.Subprocess(command) if p.terminated_by_signal: # p.signal is avalable only if p.terminated_by_signal is True. self.assertFalse( p.terminated_by_signal, '%s was killed by signal %d' % (GTEST_PROGRAM_NAME, p.signal)) else: self.assert_(p.exited) self.assertEquals(1, p.exit_code, "'%s' exited with code %s, which doesn't match " 'the expected exit code %s.' % (command, p.exit_code, 1)) self.assert_(not os.path.isfile(xml_path)) def testFilteredTestXmlOutput(self): """Verifies XML output when a filter is applied. Runs a test program that executes only some tests and verifies that non-selected tests do not show up in the XML output. """ self._TestXmlOutput(GTEST_PROGRAM_NAME, EXPECTED_FILTERED_TEST_XML, 0, extra_args=['%s=SuccessfulTest.*' % GTEST_FILTER_FLAG]) def _GetXmlOutput(self, gtest_prog_name, extra_args, expected_exit_code): """ Returns the xml output generated by running the program gtest_prog_name. Furthermore, the program's exit code must be expected_exit_code. """ xml_path = os.path.join(gtest_test_utils.GetTempDir(), gtest_prog_name + 'out.xml') gtest_prog_path = gtest_test_utils.GetTestExecutablePath(gtest_prog_name) command = ([gtest_prog_path, '%s=xml:%s' % (GTEST_OUTPUT_FLAG, xml_path)] + extra_args) p = gtest_test_utils.Subprocess(command) if p.terminated_by_signal: self.assert_(False, '%s was killed by signal %d' % (gtest_prog_name, p.signal)) else: self.assert_(p.exited) self.assertEquals(expected_exit_code, p.exit_code, "'%s' exited with code %s, which doesn't match " 'the expected exit code %s.' % (command, p.exit_code, expected_exit_code)) actual = minidom.parse(xml_path) return actual def _TestXmlOutput(self, gtest_prog_name, expected_xml, expected_exit_code, extra_args=None): """ Asserts that the XML document generated by running the program gtest_prog_name matches expected_xml, a string containing another XML document. Furthermore, the program's exit code must be expected_exit_code. """ actual = self._GetXmlOutput(gtest_prog_name, extra_args or [], expected_exit_code) expected = minidom.parseString(expected_xml) self.NormalizeXml(actual.documentElement) self.AssertEquivalentNodes(expected.documentElement, actual.documentElement) expected.unlink() actual.unlink() if __name__ == '__main__': os.environ['GTEST_STACK_TRACE_DEPTH'] = '1' gtest_test_utils.Main() ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_output_unittest_.cc000066400000000000000000000137771346026536000253240ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. 
// // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // Author: eefacm@gmail.com (Sean Mcafee) // Unit test for Google Test XML output. // // A user can specify XML output in a Google Test program to run via // either the GTEST_OUTPUT environment variable or the --gtest_output // flag. This is used for testing such functionality. // // This program will be invoked from a Python unit test. Don't run it // directly. #include "gtest/gtest.h" using ::testing::InitGoogleTest; using ::testing::TestEventListeners; using ::testing::TestWithParam; using ::testing::UnitTest; using ::testing::Test; using ::testing::Values; class SuccessfulTest : public Test { }; TEST_F(SuccessfulTest, Succeeds) { SUCCEED() << "This is a success."; ASSERT_EQ(1, 1); } class FailedTest : public Test { }; TEST_F(FailedTest, Fails) { ASSERT_EQ(1, 2); } class DisabledTest : public Test { }; TEST_F(DisabledTest, DISABLED_test_not_run) { FAIL() << "Unexpected failure: Disabled test should not be run"; } TEST(MixedResultTest, Succeeds) { EXPECT_EQ(1, 1); ASSERT_EQ(1, 1); } TEST(MixedResultTest, Fails) { EXPECT_EQ(1, 2); ASSERT_EQ(2, 3); } TEST(MixedResultTest, DISABLED_test) { FAIL() << "Unexpected failure: Disabled test should not be run"; } TEST(XmlQuotingTest, OutputsCData) { FAIL() << "XML output: " ""; } // Helps to test that invalid characters produced by test code do not make // it into the XML file. 
TEST(InvalidCharactersTest, InvalidCharactersInMessage) { FAIL() << "Invalid characters in brackets [\x1\x2]"; } class PropertyRecordingTest : public Test { public: static void SetUpTestCase() { RecordProperty("SetUpTestCase", "yes"); } static void TearDownTestCase() { RecordProperty("TearDownTestCase", "aye"); } }; TEST_F(PropertyRecordingTest, OneProperty) { RecordProperty("key_1", "1"); } TEST_F(PropertyRecordingTest, IntValuedProperty) { RecordProperty("key_int", 1); } TEST_F(PropertyRecordingTest, ThreeProperties) { RecordProperty("key_1", "1"); RecordProperty("key_2", "2"); RecordProperty("key_3", "3"); } TEST_F(PropertyRecordingTest, TwoValuesForOneKeyUsesLastValue) { RecordProperty("key_1", "1"); RecordProperty("key_1", "2"); } TEST(NoFixtureTest, RecordProperty) { RecordProperty("key", "1"); } void ExternalUtilityThatCallsRecordProperty(const std::string& key, int value) { testing::Test::RecordProperty(key, value); } void ExternalUtilityThatCallsRecordProperty(const std::string& key, const std::string& value) { testing::Test::RecordProperty(key, value); } TEST(NoFixtureTest, ExternalUtilityThatCallsRecordIntValuedProperty) { ExternalUtilityThatCallsRecordProperty("key_for_utility_int", 1); } TEST(NoFixtureTest, ExternalUtilityThatCallsRecordStringValuedProperty) { ExternalUtilityThatCallsRecordProperty("key_for_utility_string", "1"); } // Verifies that the test parameter value is output in the 'value_param' // XML attribute for value-parameterized tests. class ValueParamTest : public TestWithParam {}; TEST_P(ValueParamTest, HasValueParamAttribute) {} TEST_P(ValueParamTest, AnotherTestThatHasValueParamAttribute) {} INSTANTIATE_TEST_CASE_P(Single, ValueParamTest, Values(33, 42)); #if GTEST_HAS_TYPED_TEST // Verifies that the type parameter name is output in the 'type_param' // XML attribute for typed tests. template class TypedTest : public Test {}; typedef testing::Types TypedTestTypes; TYPED_TEST_CASE(TypedTest, TypedTestTypes); TYPED_TEST(TypedTest, HasTypeParamAttribute) {} #endif #if GTEST_HAS_TYPED_TEST_P // Verifies that the type parameter name is output in the 'type_param' // XML attribute for type-parameterized tests. template class TypeParameterizedTestCase : public Test {}; TYPED_TEST_CASE_P(TypeParameterizedTestCase); TYPED_TEST_P(TypeParameterizedTestCase, HasTypeParamAttribute) {} REGISTER_TYPED_TEST_CASE_P(TypeParameterizedTestCase, HasTypeParamAttribute); typedef testing::Types TypeParameterizedTestCaseTypes; INSTANTIATE_TYPED_TEST_CASE_P(Single, TypeParameterizedTestCase, TypeParameterizedTestCaseTypes); #endif int main(int argc, char** argv) { InitGoogleTest(&argc, argv); if (argc > 1 && strcmp(argv[1], "--shut_down_xml") == 0) { TestEventListeners& listeners = UnitTest::GetInstance()->listeners(); delete listeners.Release(listeners.default_xml_generator()); } testing::Test::RecordProperty("ad_hoc_property", "42"); return RUN_ALL_TESTS(); } ffc-2019.1.0.post0/libs/gtest-1.7.0/test/gtest_xml_test_utils.py000077500000000000000000000212541346026536000241200ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2006, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. 
# * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """Unit test utilities for gtest_xml_output""" __author__ = 'eefacm@gmail.com (Sean Mcafee)' import re from xml.dom import minidom, Node import gtest_test_utils GTEST_OUTPUT_FLAG = '--gtest_output' GTEST_DEFAULT_OUTPUT_FILE = 'test_detail.xml' class GTestXMLTestCase(gtest_test_utils.TestCase): """ Base class for tests of Google Test's XML output functionality. """ def AssertEquivalentNodes(self, expected_node, actual_node): """ Asserts that actual_node (a DOM node object) is equivalent to expected_node (another DOM node object), in that either both of them are CDATA nodes and have the same value, or both are DOM elements and actual_node meets all of the following conditions: * It has the same tag name as expected_node. * It has the same set of attributes as expected_node, each with the same value as the corresponding attribute of expected_node. Exceptions are any attribute named "time", which needs only be convertible to a floating-point number and any attribute named "type_param" which only has to be non-empty. * It has an equivalent set of child nodes (including elements and CDATA sections) as expected_node. Note that we ignore the order of the children as they are not guaranteed to be in any particular order. 
""" if expected_node.nodeType == Node.CDATA_SECTION_NODE: self.assertEquals(Node.CDATA_SECTION_NODE, actual_node.nodeType) self.assertEquals(expected_node.nodeValue, actual_node.nodeValue) return self.assertEquals(Node.ELEMENT_NODE, actual_node.nodeType) self.assertEquals(Node.ELEMENT_NODE, expected_node.nodeType) self.assertEquals(expected_node.tagName, actual_node.tagName) expected_attributes = expected_node.attributes actual_attributes = actual_node .attributes self.assertEquals( expected_attributes.length, actual_attributes.length, 'attribute numbers differ in element %s:\nExpected: %r\nActual: %r' % ( actual_node.tagName, expected_attributes.keys(), actual_attributes.keys())) for i in range(expected_attributes.length): expected_attr = expected_attributes.item(i) actual_attr = actual_attributes.get(expected_attr.name) self.assert_( actual_attr is not None, 'expected attribute %s not found in element %s' % (expected_attr.name, actual_node.tagName)) self.assertEquals( expected_attr.value, actual_attr.value, ' values of attribute %s in element %s differ: %s vs %s' % (expected_attr.name, actual_node.tagName, expected_attr.value, actual_attr.value)) expected_children = self._GetChildren(expected_node) actual_children = self._GetChildren(actual_node) self.assertEquals( len(expected_children), len(actual_children), 'number of child elements differ in element ' + actual_node.tagName) for child_id, child in expected_children.iteritems(): self.assert_(child_id in actual_children, '<%s> is not in <%s> (in element %s)' % (child_id, actual_children, actual_node.tagName)) self.AssertEquivalentNodes(child, actual_children[child_id]) identifying_attribute = { 'testsuites': 'name', 'testsuite': 'name', 'testcase': 'name', 'failure': 'message', } def _GetChildren(self, element): """ Fetches all of the child nodes of element, a DOM Element object. Returns them as the values of a dictionary keyed by the IDs of the children. For , and elements, the ID is the value of their "name" attribute; for elements, it is the value of the "message" attribute; CDATA sections and non-whitespace text nodes are concatenated into a single CDATA section with ID "detail". An exception is raised if any element other than the above four is encountered, if two child elements with the same identifying attributes are encountered, or if any other type of node is encountered. """ children = {} for child in element.childNodes: if child.nodeType == Node.ELEMENT_NODE: self.assert_(child.tagName in self.identifying_attribute, 'Encountered unknown element <%s>' % child.tagName) childID = child.getAttribute(self.identifying_attribute[child.tagName]) self.assert_(childID not in children) children[childID] = child elif child.nodeType in [Node.TEXT_NODE, Node.CDATA_SECTION_NODE]: if 'detail' not in children: if (child.nodeType == Node.CDATA_SECTION_NODE or not child.nodeValue.isspace()): children['detail'] = child.ownerDocument.createCDATASection( child.nodeValue) else: children['detail'].nodeValue += child.nodeValue else: self.fail('Encountered unexpected node type %d' % child.nodeType) return children def NormalizeXml(self, element): """ Normalizes Google Test's XML output to eliminate references to transient information that may change from run to run. * The "time" attribute of , and elements is replaced with a single asterisk, if it contains only digit characters. * The "timestamp" attribute of elements is replaced with a single asterisk, if it contains a valid ISO8601 datetime value. 
* The "type_param" attribute of elements is replaced with a single asterisk (if it sn non-empty) as it is the type name returned by the compiler and is platform dependent. * The line info reported in the first line of the "message" attribute and CDATA section of elements is replaced with the file's basename and a single asterisk for the line number. * The directory names in file paths are removed. * The stack traces are removed. """ if element.tagName == 'testsuites': timestamp = element.getAttributeNode('timestamp') timestamp.value = re.sub(r'^\d{4}-\d\d-\d\dT\d\d:\d\d:\d\d$', '*', timestamp.value) if element.tagName in ('testsuites', 'testsuite', 'testcase'): time = element.getAttributeNode('time') time.value = re.sub(r'^\d+(\.\d+)?$', '*', time.value) type_param = element.getAttributeNode('type_param') if type_param and type_param.value: type_param.value = '*' elif element.tagName == 'failure': source_line_pat = r'^.*[/\\](.*:)\d+\n' # Replaces the source line information with a normalized form. message = element.getAttributeNode('message') message.value = re.sub(source_line_pat, '\\1*\n', message.value) for child in element.childNodes: if child.nodeType == Node.CDATA_SECTION_NODE: # Replaces the source line information with a normalized form. cdata = re.sub(source_line_pat, '\\1*\n', child.nodeValue) # Removes the actual stack trace. child.nodeValue = re.sub(r'\nStack trace:\n(.|\n)*', '', cdata) for child in element.childNodes: if child.nodeType == Node.ELEMENT_NODE: self.NormalizeXml(child) ffc-2019.1.0.post0/libs/gtest-1.7.0/test/production.cc000066400000000000000000000033041346026536000217470ustar00rootroot00000000000000// Copyright 2006, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // This is part of the unit test for include/gtest/gtest_prod.h. #include "production.h" PrivateCode::PrivateCode() : x_(0) {} ffc-2019.1.0.post0/libs/gtest-1.7.0/test/production.h000066400000000000000000000041741346026536000216170ustar00rootroot00000000000000// Copyright 2006, Google Inc. 
// All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: wan@google.com (Zhanyong Wan) // // This is part of the unit test for include/gtest/gtest_prod.h. #ifndef GTEST_TEST_PRODUCTION_H_ #define GTEST_TEST_PRODUCTION_H_ #include "gtest/gtest_prod.h" class PrivateCode { public: // Declares a friend test that does not use a fixture. FRIEND_TEST(PrivateCodeTest, CanAccessPrivateMembers); // Declares a friend test that uses a fixture. FRIEND_TEST(PrivateCodeFixtureTest, CanAccessPrivateMembers); PrivateCode(); int x() const { return x_; } private: void set_x(int an_x) { x_ = an_x; } int x_; }; #endif // GTEST_TEST_PRODUCTION_H_ ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/000077500000000000000000000000001346026536000173755ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/000077500000000000000000000000001346026536000206025ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/DebugProject.xcconfig000066400000000000000000000017271346026536000247100ustar00rootroot00000000000000// // DebugProject.xcconfig // // These are Debug Configuration project settings for the gtest framework and // examples. It is set in the "Based On:" dropdown in the "Project" info // dialog. 
// This file is based on the Xcode Configuration files in: // http://code.google.com/p/google-toolbox-for-mac/ // #include "General.xcconfig" // No optimization GCC_OPTIMIZATION_LEVEL = 0 // Deployment postprocessing is what triggers Xcode to strip, turn it off DEPLOYMENT_POSTPROCESSING = NO // Dead code stripping off DEAD_CODE_STRIPPING = NO // Debug symbols should be on obviously GCC_GENERATE_DEBUGGING_SYMBOLS = YES // Define the DEBUG macro in all debug builds OTHER_CFLAGS = $(OTHER_CFLAGS) -DDEBUG=1 // These are turned off to avoid STL incompatibilities with client code // // Turns on special C++ STL checks to "encourage" good STL use // GCC_PREPROCESSOR_DEFINITIONS = $(GCC_PREPROCESSOR_DEFINITIONS) _GLIBCXX_DEBUG_PEDANTIC _GLIBCXX_DEBUG _GLIBCPP_CONCEPT_CHECKS ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/FrameworkTarget.xcconfig000066400000000000000000000010471346026536000254320ustar00rootroot00000000000000// // FrameworkTarget.xcconfig // // These are Framework target settings for the gtest framework and examples. It // is set in the "Based On:" dropdown in the "Target" info dialog. // This file is based on the Xcode Configuration files in: // http://code.google.com/p/google-toolbox-for-mac/ // // Dynamic libs need to be position independent GCC_DYNAMIC_NO_PIC = NO // Dynamic libs should not have their external symbols stripped. STRIP_STYLE = non-global // Let the user install by specifying the $DSTROOT with xcodebuild SKIP_INSTALL = NO ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/General.xcconfig000066400000000000000000000022571346026536000237070ustar00rootroot00000000000000// // General.xcconfig // // These are General configuration settings for the gtest framework and // examples. // This file is based on the Xcode Configuration files in: // http://code.google.com/p/google-toolbox-for-mac/ // // Build for PPC and Intel, 32- and 64-bit ARCHS = i386 x86_64 ppc ppc64 // Zerolink prevents link warnings so turn it off ZERO_LINK = NO // Prebinding considered unhelpful in 10.3 and later PREBINDING = NO // Strictest warning policy WARNING_CFLAGS = -Wall -Werror -Wendif-labels -Wnewline-eof -Wno-sign-compare -Wshadow // Work around Xcode bugs by using external strip. See: // http://lists.apple.com/archives/Xcode-users/2006/Feb/msg00050.html SEPARATE_STRIP = YES // Force C99 dialect GCC_C_LANGUAGE_STANDARD = c99 // not sure why apple defaults this on, but it's pretty risky ALWAYS_SEARCH_USER_PATHS = NO // Turn on position dependent code for most cases (overridden where appropriate) GCC_DYNAMIC_NO_PIC = YES // Default SDK and minimum OS version is 10.4 SDKROOT = $(DEVELOPER_SDK_DIR)/MacOSX10.4u.sdk MACOSX_DEPLOYMENT_TARGET = 10.4 GCC_VERSION = 4.0 // VERSIONING BUILD SETTINGS (used in Info.plist) GTEST_VERSIONINFO_ABOUT = © 2008 Google Inc. ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/ReleaseProject.xcconfig000066400000000000000000000017411346026536000252360ustar00rootroot00000000000000// // ReleaseProject.xcconfig // // These are Release Configuration project settings for the gtest framework // and examples. It is set in the "Based On:" dropdown in the "Project" info // dialog. 
// This file is based on the Xcode Configuration files in: // http://code.google.com/p/google-toolbox-for-mac/ // #include "General.xcconfig" // subconfig/Release.xcconfig // Optimize for space and size (Apple recommendation) GCC_OPTIMIZATION_LEVEL = s // Deploment postprocessing is what triggers Xcode to strip DEPLOYMENT_POSTPROCESSING = YES // No symbols GCC_GENERATE_DEBUGGING_SYMBOLS = NO // Dead code strip does not affect ObjC code but can help for C DEAD_CODE_STRIPPING = YES // NDEBUG is used by things like assert.h, so define it for general compat. // ASSERT going away in release tends to create unused vars. OTHER_CFLAGS = $(OTHER_CFLAGS) -DNDEBUG=1 -Wno-unused-variable // When we strip we want to strip all symbols in release, but save externals. STRIP_STYLE = all ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/StaticLibraryTarget.xcconfig000066400000000000000000000011131346026536000262430ustar00rootroot00000000000000// // StaticLibraryTarget.xcconfig // // These are static library target settings for libgtest.a. It // is set in the "Based On:" dropdown in the "Target" info dialog. // This file is based on the Xcode Configuration files in: // http://code.google.com/p/google-toolbox-for-mac/ // // Static libs can be included in bundles so make them position independent GCC_DYNAMIC_NO_PIC = NO // Static libs should not have their internal globals or external symbols // stripped. STRIP_STYLE = debugging // Let the user install by specifying the $DSTROOT with xcodebuild SKIP_INSTALL = NO ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Config/TestTarget.xcconfig000066400000000000000000000003561346026536000244160ustar00rootroot00000000000000// // TestTarget.xcconfig // // These are Test target settings for the gtest framework and examples. It // is set in the "Based On:" dropdown in the "Target" info dialog. PRODUCT_NAME = $(TARGET_NAME) HEADER_SEARCH_PATHS = ../include ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Resources/000077500000000000000000000000001346026536000213475ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Resources/Info.plist000066400000000000000000000017621346026536000233250ustar00rootroot00000000000000 CFBundleDevelopmentRegion English CFBundleExecutable ${EXECUTABLE_NAME} CFBundleIconFile CFBundleIdentifier com.google.${PRODUCT_NAME} CFBundleInfoDictionaryVersion 6.0 CFBundlePackageType FMWK CFBundleSignature ???? CFBundleVersion GTEST_VERSIONINFO_LONG CFBundleShortVersionString GTEST_VERSIONINFO_SHORT CFBundleGetInfoString ${PRODUCT_NAME} GTEST_VERSIONINFO_LONG, ${GTEST_VERSIONINFO_ABOUT} NSHumanReadableCopyright ${GTEST_VERSIONINFO_ABOUT} CSResourcesFileMapped ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/000077500000000000000000000000001346026536000210015ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/000077500000000000000000000000001346026536000241005ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/Info.plist000066400000000000000000000015161346026536000260530ustar00rootroot00000000000000 CFBundleDevelopmentRegion English CFBundleExecutable ${EXECUTABLE_NAME} CFBundleIconFile CFBundleIdentifier com.google.gtest.${PRODUCT_NAME:identifier} CFBundleInfoDictionaryVersion 6.0 CFBundleName ${PRODUCT_NAME} CFBundlePackageType FMWK CFBundleShortVersionString 1.0 CFBundleSignature ???? 
CFBundleVersion 1.0 CSResourcesFileMapped ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/WidgetFramework.xcodeproj/000077500000000000000000000000001346026536000311755ustar00rootroot00000000000000project.pbxproj000066400000000000000000000372741346026536000342070ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/WidgetFramework.xcodeproj// !$*UTF8*$! { archiveVersion = 1; classes = { }; objectVersion = 42; objects = { /* Begin PBXAggregateTarget section */ 4024D162113D7D2400C7059E /* Test */ = { isa = PBXAggregateTarget; buildConfigurationList = 4024D169113D7D4600C7059E /* Build configuration list for PBXAggregateTarget "Test" */; buildPhases = ( 4024D161113D7D2400C7059E /* ShellScript */, ); dependencies = ( 4024D166113D7D3100C7059E /* PBXTargetDependency */, ); name = Test; productName = TestAndBuild; }; 4024D1E9113D83FF00C7059E /* TestAndBuild */ = { isa = PBXAggregateTarget; buildConfigurationList = 4024D1F0113D842B00C7059E /* Build configuration list for PBXAggregateTarget "TestAndBuild" */; buildPhases = ( ); dependencies = ( 4024D1ED113D840900C7059E /* PBXTargetDependency */, 4024D1EF113D840D00C7059E /* PBXTargetDependency */, ); name = TestAndBuild; productName = TestAndBuild; }; /* End PBXAggregateTarget section */ /* Begin PBXBuildFile section */ 3B7EB1250E5AEE3500C7F239 /* widget.cc in Sources */ = {isa = PBXBuildFile; fileRef = 3B7EB1230E5AEE3500C7F239 /* widget.cc */; }; 3B7EB1260E5AEE3500C7F239 /* widget.h in Headers */ = {isa = PBXBuildFile; fileRef = 3B7EB1240E5AEE3500C7F239 /* widget.h */; settings = {ATTRIBUTES = (Public, ); }; }; 3B7EB1280E5AEE4600C7F239 /* widget_test.cc in Sources */ = {isa = PBXBuildFile; fileRef = 3B7EB1270E5AEE4600C7F239 /* widget_test.cc */; }; 3B7EB1480E5AF3B400C7F239 /* Widget.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 8D07F2C80486CC7A007CD1D0 /* Widget.framework */; }; 4024D188113D7D7800C7059E /* libgtest.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 4024D185113D7D5500C7059E /* libgtest.a */; }; 4024D189113D7D7A00C7059E /* libgtest_main.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 4024D183113D7D5500C7059E /* libgtest_main.a */; }; /* End PBXBuildFile section */ /* Begin PBXContainerItemProxy section */ 3B07BDF00E3F3FAE00647869 /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 8D07F2BC0486CC7A007CD1D0; remoteInfo = gTestExample; }; 4024D165113D7D3100C7059E /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 3B07BDE90E3F3F9E00647869; remoteInfo = WidgetFrameworkTest; }; 4024D1EC113D840900C7059E /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 8D07F2BC0486CC7A007CD1D0; remoteInfo = WidgetFramework; }; 4024D1EE113D840D00C7059E /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 4024D162113D7D2400C7059E; remoteInfo = Test; }; /* End PBXContainerItemProxy section */ /* Begin PBXFileReference section */ 3B07BDEA0E3F3F9E00647869 /* WidgetFrameworkTest */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = WidgetFrameworkTest; sourceTree = BUILT_PRODUCTS_DIR; }; 
3B7EB1230E5AEE3500C7F239 /* widget.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = widget.cc; sourceTree = ""; }; 3B7EB1240E5AEE3500C7F239 /* widget.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = widget.h; sourceTree = ""; }; 3B7EB1270E5AEE4600C7F239 /* widget_test.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = widget_test.cc; sourceTree = ""; }; 4024D183113D7D5500C7059E /* libgtest_main.a */ = {isa = PBXFileReference; lastKnownFileType = archive.ar; name = libgtest_main.a; path = /usr/local/lib/libgtest_main.a; sourceTree = ""; }; 4024D185113D7D5500C7059E /* libgtest.a */ = {isa = PBXFileReference; lastKnownFileType = archive.ar; name = libgtest.a; path = /usr/local/lib/libgtest.a; sourceTree = ""; }; 4024D1E2113D838200C7059E /* runtests.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = runtests.sh; sourceTree = ""; }; 8D07F2C70486CC7A007CD1D0 /* Info.plist */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist; path = Info.plist; sourceTree = ""; }; 8D07F2C80486CC7A007CD1D0 /* Widget.framework */ = {isa = PBXFileReference; explicitFileType = wrapper.framework; includeInIndex = 0; path = Widget.framework; sourceTree = BUILT_PRODUCTS_DIR; }; /* End PBXFileReference section */ /* Begin PBXFrameworksBuildPhase section */ 3B07BDE80E3F3F9E00647869 /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( 4024D189113D7D7A00C7059E /* libgtest_main.a in Frameworks */, 4024D188113D7D7800C7059E /* libgtest.a in Frameworks */, 3B7EB1480E5AF3B400C7F239 /* Widget.framework in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; 8D07F2C30486CC7A007CD1D0 /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXFrameworksBuildPhase section */ /* Begin PBXGroup section */ 034768DDFF38A45A11DB9C8B /* Products */ = { isa = PBXGroup; children = ( 8D07F2C80486CC7A007CD1D0 /* Widget.framework */, 3B07BDEA0E3F3F9E00647869 /* WidgetFrameworkTest */, ); name = Products; sourceTree = ""; }; 0867D691FE84028FC02AAC07 /* gTestExample */ = { isa = PBXGroup; children = ( 4024D1E1113D836C00C7059E /* Scripts */, 08FB77ACFE841707C02AAC07 /* Source */, 089C1665FE841158C02AAC07 /* Resources */, 3B07BE350E4094E400647869 /* Test */, 0867D69AFE84028FC02AAC07 /* External Frameworks and Libraries */, 034768DDFF38A45A11DB9C8B /* Products */, ); name = gTestExample; sourceTree = ""; }; 0867D69AFE84028FC02AAC07 /* External Frameworks and Libraries */ = { isa = PBXGroup; children = ( 4024D183113D7D5500C7059E /* libgtest_main.a */, 4024D185113D7D5500C7059E /* libgtest.a */, ); name = "External Frameworks and Libraries"; sourceTree = ""; }; 089C1665FE841158C02AAC07 /* Resources */ = { isa = PBXGroup; children = ( 8D07F2C70486CC7A007CD1D0 /* Info.plist */, ); name = Resources; sourceTree = ""; }; 08FB77ACFE841707C02AAC07 /* Source */ = { isa = PBXGroup; children = ( 3B7EB1230E5AEE3500C7F239 /* widget.cc */, 3B7EB1240E5AEE3500C7F239 /* widget.h */, ); name = Source; sourceTree = ""; }; 3B07BE350E4094E400647869 /* Test */ = { isa = PBXGroup; children = ( 3B7EB1270E5AEE4600C7F239 /* widget_test.cc */, ); name = Test; sourceTree = ""; }; 4024D1E1113D836C00C7059E /* Scripts */ = { isa = PBXGroup; children = ( 4024D1E2113D838200C7059E /* runtests.sh */, ); name = Scripts; sourceTree = ""; }; /* End 
PBXGroup section */ /* Begin PBXHeadersBuildPhase section */ 8D07F2BD0486CC7A007CD1D0 /* Headers */ = { isa = PBXHeadersBuildPhase; buildActionMask = 2147483647; files = ( 3B7EB1260E5AEE3500C7F239 /* widget.h in Headers */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXHeadersBuildPhase section */ /* Begin PBXNativeTarget section */ 3B07BDE90E3F3F9E00647869 /* WidgetFrameworkTest */ = { isa = PBXNativeTarget; buildConfigurationList = 3B07BDF40E3F3FB600647869 /* Build configuration list for PBXNativeTarget "WidgetFrameworkTest" */; buildPhases = ( 3B07BDE70E3F3F9E00647869 /* Sources */, 3B07BDE80E3F3F9E00647869 /* Frameworks */, ); buildRules = ( ); dependencies = ( 3B07BDF10E3F3FAE00647869 /* PBXTargetDependency */, ); name = WidgetFrameworkTest; productName = gTestExampleTest; productReference = 3B07BDEA0E3F3F9E00647869 /* WidgetFrameworkTest */; productType = "com.apple.product-type.tool"; }; 8D07F2BC0486CC7A007CD1D0 /* WidgetFramework */ = { isa = PBXNativeTarget; buildConfigurationList = 4FADC24208B4156D00ABE55E /* Build configuration list for PBXNativeTarget "WidgetFramework" */; buildPhases = ( 8D07F2C10486CC7A007CD1D0 /* Sources */, 8D07F2C30486CC7A007CD1D0 /* Frameworks */, 8D07F2BD0486CC7A007CD1D0 /* Headers */, 8D07F2BF0486CC7A007CD1D0 /* Resources */, 8D07F2C50486CC7A007CD1D0 /* Rez */, ); buildRules = ( ); dependencies = ( ); name = WidgetFramework; productInstallPath = "$(HOME)/Library/Frameworks"; productName = gTestExample; productReference = 8D07F2C80486CC7A007CD1D0 /* Widget.framework */; productType = "com.apple.product-type.framework"; }; /* End PBXNativeTarget section */ /* Begin PBXProject section */ 0867D690FE84028FC02AAC07 /* Project object */ = { isa = PBXProject; buildConfigurationList = 4FADC24608B4156D00ABE55E /* Build configuration list for PBXProject "WidgetFramework" */; compatibilityVersion = "Xcode 2.4"; hasScannedForEncodings = 1; mainGroup = 0867D691FE84028FC02AAC07 /* gTestExample */; productRefGroup = 034768DDFF38A45A11DB9C8B /* Products */; projectDirPath = ""; projectRoot = ""; targets = ( 8D07F2BC0486CC7A007CD1D0 /* WidgetFramework */, 3B07BDE90E3F3F9E00647869 /* WidgetFrameworkTest */, 4024D162113D7D2400C7059E /* Test */, 4024D1E9113D83FF00C7059E /* TestAndBuild */, ); }; /* End PBXProject section */ /* Begin PBXResourcesBuildPhase section */ 8D07F2BF0486CC7A007CD1D0 /* Resources */ = { isa = PBXResourcesBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXResourcesBuildPhase section */ /* Begin PBXRezBuildPhase section */ 8D07F2C50486CC7A007CD1D0 /* Rez */ = { isa = PBXRezBuildPhase; buildActionMask = 2147483647; files = ( ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXRezBuildPhase section */ /* Begin PBXShellScriptBuildPhase section */ 4024D161113D7D2400C7059E /* ShellScript */ = { isa = PBXShellScriptBuildPhase; buildActionMask = 2147483647; files = ( ); inputPaths = ( ); outputPaths = ( ); runOnlyForDeploymentPostprocessing = 0; shellPath = /bin/sh; shellScript = "/bin/bash $SRCROOT/runtests.sh $BUILT_PRODUCTS_DIR/WidgetFrameworkTest\n"; }; /* End PBXShellScriptBuildPhase section */ /* Begin PBXSourcesBuildPhase section */ 3B07BDE70E3F3F9E00647869 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 3B7EB1280E5AEE4600C7F239 /* widget_test.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 8D07F2C10486CC7A007CD1D0 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 3B7EB1250E5AEE3500C7F239 /* 
widget.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXSourcesBuildPhase section */ /* Begin PBXTargetDependency section */ 3B07BDF10E3F3FAE00647869 /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 8D07F2BC0486CC7A007CD1D0 /* WidgetFramework */; targetProxy = 3B07BDF00E3F3FAE00647869 /* PBXContainerItemProxy */; }; 4024D166113D7D3100C7059E /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 3B07BDE90E3F3F9E00647869 /* WidgetFrameworkTest */; targetProxy = 4024D165113D7D3100C7059E /* PBXContainerItemProxy */; }; 4024D1ED113D840900C7059E /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 8D07F2BC0486CC7A007CD1D0 /* WidgetFramework */; targetProxy = 4024D1EC113D840900C7059E /* PBXContainerItemProxy */; }; 4024D1EF113D840D00C7059E /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 4024D162113D7D2400C7059E /* Test */; targetProxy = 4024D1EE113D840D00C7059E /* PBXContainerItemProxy */; }; /* End PBXTargetDependency section */ /* Begin XCBuildConfiguration section */ 3B07BDEC0E3F3F9F00647869 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = WidgetFrameworkTest; }; name = Debug; }; 3B07BDED0E3F3F9F00647869 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = WidgetFrameworkTest; }; name = Release; }; 4024D163113D7D2400C7059E /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = TestAndBuild; }; name = Debug; }; 4024D164113D7D2400C7059E /* Release */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = TestAndBuild; }; name = Release; }; 4024D1EA113D83FF00C7059E /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = TestAndBuild; }; name = Debug; }; 4024D1EB113D83FF00C7059E /* Release */ = { isa = XCBuildConfiguration; buildSettings = { PRODUCT_NAME = TestAndBuild; }; name = Release; }; 4FADC24308B4156D00ABE55E /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { DYLIB_COMPATIBILITY_VERSION = 1; DYLIB_CURRENT_VERSION = 1; FRAMEWORK_VERSION = A; INFOPLIST_FILE = Info.plist; INSTALL_PATH = "@loader_path/../Frameworks"; PRODUCT_NAME = Widget; }; name = Debug; }; 4FADC24408B4156D00ABE55E /* Release */ = { isa = XCBuildConfiguration; buildSettings = { DYLIB_COMPATIBILITY_VERSION = 1; DYLIB_CURRENT_VERSION = 1; FRAMEWORK_VERSION = A; INFOPLIST_FILE = Info.plist; INSTALL_PATH = "@loader_path/../Frameworks"; PRODUCT_NAME = Widget; }; name = Release; }; 4FADC24708B4156D00ABE55E /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = 4.0; SDKROOT = /Developer/SDKs/MacOSX10.4u.sdk; }; name = Debug; }; 4FADC24808B4156D00ABE55E /* Release */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = 4.0; SDKROOT = /Developer/SDKs/MacOSX10.4u.sdk; }; name = Release; }; /* End XCBuildConfiguration section */ /* Begin XCConfigurationList section */ 3B07BDF40E3F3FB600647869 /* Build configuration list for PBXNativeTarget "WidgetFrameworkTest" */ = { isa = XCConfigurationList; buildConfigurations = ( 3B07BDEC0E3F3F9F00647869 /* Debug */, 3B07BDED0E3F3F9F00647869 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4024D169113D7D4600C7059E /* Build configuration list for PBXAggregateTarget "Test" */ = { isa = XCConfigurationList; buildConfigurations = ( 4024D163113D7D2400C7059E /* Debug */, 4024D164113D7D2400C7059E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4024D1F0113D842B00C7059E /* Build configuration 
list for PBXAggregateTarget "TestAndBuild" */ = { isa = XCConfigurationList; buildConfigurations = ( 4024D1EA113D83FF00C7059E /* Debug */, 4024D1EB113D83FF00C7059E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4FADC24208B4156D00ABE55E /* Build configuration list for PBXNativeTarget "WidgetFramework" */ = { isa = XCConfigurationList; buildConfigurations = ( 4FADC24308B4156D00ABE55E /* Debug */, 4FADC24408B4156D00ABE55E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4FADC24608B4156D00ABE55E /* Build configuration list for PBXProject "WidgetFramework" */ = { isa = XCConfigurationList; buildConfigurations = ( 4FADC24708B4156D00ABE55E /* Debug */, 4FADC24808B4156D00ABE55E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; /* End XCConfigurationList section */ }; rootObject = 0867D690FE84028FC02AAC07 /* Project object */; } ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/runtests.sh000066400000000000000000000044621346026536000263310ustar00rootroot00000000000000#!/bin/bash # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # Executes the samples and tests for the Google Test Framework. # Help the dynamic linker find the path to the libraries. export DYLD_FRAMEWORK_PATH=$BUILT_PRODUCTS_DIR export DYLD_LIBRARY_PATH=$BUILT_PRODUCTS_DIR # Create some executables. test_executables=$@ # Now execute each one in turn keeping track of how many succeeded and failed. succeeded=0 failed=0 failed_list=() for test in ${test_executables[*]}; do "$test" result=$? if [ $result -eq 0 ]; then succeeded=$(( $succeeded + 1 )) else failed=$(( failed + 1 )) failed_list="$failed_list $test" fi done # Report the successes and failures to the console. echo "Tests complete with $succeeded successes and $failed failures." 
if [ $failed -ne 0 ]; then echo "The following tests failed:" echo $failed_list fi exit $failed ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/widget.cc000066400000000000000000000044031346026536000256730ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: preston.a.jackson@gmail.com (Preston Jackson) // // Google Test - FrameworkSample // widget.cc // // Widget is a very simple class used for demonstrating the use of gtest #include "widget.h" Widget::Widget(int number, const std::string& name) : number_(number), name_(name) {} Widget::~Widget() {} float Widget::GetFloatValue() const { return number_; } int Widget::GetIntValue() const { return static_cast(number_); } std::string Widget::GetStringValue() const { return name_; } void Widget::GetCharPtrValue(char* buffer, size_t max_size) const { // Copy the char* representation of name_ into buffer, up to max_size. strncpy(buffer, name_.c_str(), max_size-1); buffer[max_size-1] = '\0'; return; } ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/widget.h000066400000000000000000000043361346026536000255420ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. 
// // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: preston.a.jackson@gmail.com (Preston Jackson) // // Google Test - FrameworkSample // widget.h // // Widget is a very simple class used for demonstrating the use of gtest. It // simply stores two values a string and an integer, which are returned via // public accessors in multiple forms. #import class Widget { public: Widget(int number, const std::string& name); ~Widget(); // Public accessors to number data float GetFloatValue() const; int GetIntValue() const; // Public accessors to the string data std::string GetStringValue() const; void GetCharPtrValue(char* buffer, size_t max_size) const; private: // Data members float number_; std::string name_; }; ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Samples/FrameworkSample/widget_test.cc000066400000000000000000000051551346026536000267370ustar00rootroot00000000000000// Copyright 2008, Google Inc. // All rights reserved. // // Redistribution and use in source and binary forms, with or without // modification, are permitted provided that the following conditions are // met: // // * Redistributions of source code must retain the above copyright // notice, this list of conditions and the following disclaimer. // * Redistributions in binary form must reproduce the above // copyright notice, this list of conditions and the following disclaimer // in the documentation and/or other materials provided with the // distribution. // * Neither the name of Google Inc. nor the names of its // contributors may be used to endorse or promote products derived from // this software without specific prior written permission. // // THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS // "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT // LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR // A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT // OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, // SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT // LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, // DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY // THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT // (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE // OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. // // Author: preston.a.jackson@gmail.com (Preston Jackson) // // Google Test - FrameworkSample // widget_test.cc // // This is a simple test file for the Widget class in the Widget.framework #include #include "gtest/gtest.h" #include // This test verifies that the constructor sets the internal state of the // Widget class correctly. 
TEST(WidgetInitializerTest, TestConstructor) { Widget widget(1.0f, "name"); EXPECT_FLOAT_EQ(1.0f, widget.GetFloatValue()); EXPECT_EQ(std::string("name"), widget.GetStringValue()); } // This test verifies the conversion of the float and string values to int and // char*, respectively. TEST(WidgetInitializerTest, TestConversion) { Widget widget(1.0f, "name"); EXPECT_EQ(1, widget.GetIntValue()); size_t max_size = 128; char buffer[max_size]; widget.GetCharPtrValue(buffer, max_size); EXPECT_STREQ("name", buffer); } // Use the Google Test main that is linked into the framework. It does something // like this: // int main(int argc, char** argv) { // testing::InitGoogleTest(&argc, argv); // return RUN_ALL_TESTS(); // } ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Scripts/000077500000000000000000000000001346026536000210245ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Scripts/runtests.sh000066400000000000000000000050331346026536000232500ustar00rootroot00000000000000#!/bin/bash # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. # Executes the samples and tests for the Google Test Framework. # Help the dynamic linker find the path to the libraries. export DYLD_FRAMEWORK_PATH=$BUILT_PRODUCTS_DIR export DYLD_LIBRARY_PATH=$BUILT_PRODUCTS_DIR # Create some executables. test_executables=("$BUILT_PRODUCTS_DIR/gtest_unittest-framework" "$BUILT_PRODUCTS_DIR/gtest_unittest" "$BUILT_PRODUCTS_DIR/sample1_unittest-framework" "$BUILT_PRODUCTS_DIR/sample1_unittest-static") # Now execute each one in turn keeping track of how many succeeded and failed. succeeded=0 failed=0 failed_list=() for test in ${test_executables[*]}; do "$test" result=$? if [ $result -eq 0 ]; then succeeded=$(( $succeeded + 1 )) else failed=$(( failed + 1 )) failed_list="$failed_list $test" fi done # Report the successes and failures to the console. echo "Tests complete with $succeeded successes and $failed failures." 
if [ $failed -ne 0 ]; then echo "The following tests failed:" echo $failed_list fi exit $failed ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/Scripts/versiongenerate.py000077500000000000000000000106701346026536000246050ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2008, Google Inc. # All rights reserved. # # Redistribution and use in source and binary forms, with or without # modification, are permitted provided that the following conditions are # met: # # * Redistributions of source code must retain the above copyright # notice, this list of conditions and the following disclaimer. # * Redistributions in binary form must reproduce the above # copyright notice, this list of conditions and the following disclaimer # in the documentation and/or other materials provided with the # distribution. # * Neither the name of Google Inc. nor the names of its # contributors may be used to endorse or promote products derived from # this software without specific prior written permission. # # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS # "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT # LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR # A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT # OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, # SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT # LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, # DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY # THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE # OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. """A script to prepare version information for use in the gtest Info.plist file. This script extracts the version information from the configure.ac file and uses it to generate a header file containing the same information. The #defines in this header file will be included during the generation of the Info.plist of the framework, giving the correct value to the version shown in the Finder. This script makes the following assumptions (these are faults of the script, not problems with the Autoconf): 1. The AC_INIT macro will be contained within the first 1024 characters of configure.ac 2. The version string will be 3 integers separated by periods and will be surrounded by square brackets, "[" and "]" (e.g. [1.0.1]). The first segment represents the major version, the second represents the minor version and the third represents the fix version. 3. No ")" character exists between the opening "(" and closing ")" of AC_INIT, including in comments and character strings. """ import sys import re # Read the command line arguments (the input directory and the output directory for Version.h) if (len(sys.argv) < 3): print "Usage: versiongenerate.py input_dir output_dir" sys.exit(1) else: input_dir = sys.argv[1] output_dir = sys.argv[2] # Read the first 1024 characters of the configure.ac file config_file = open("%s/configure.ac" % input_dir, 'r') buffer_size = 1024 opening_string = config_file.read(buffer_size) config_file.close() # Extract the version string from the AC_INIT macro # The following version_expression means: # Extract three integers separated by periods and surrounded by square # brackets (e.g. "[1.0.1]") between "AC_INIT(" and ")". Do not be greedy # (*? is the non-greedy flag) since that would pull in everything between # the first "(" and the last ")" in the file.
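# A small self-contained illustration (not used by the script itself) of the
# extraction performed by the expression below. The AC_INIT text is a made-up
# example of the expected format, not a quotation from any configure.ac.
_example_ac_init = 'AC_INIT([Example Project], [1.7.0], [bugs@example.org])'
_example_match = re.search(r"AC_INIT\(.*?\[(\d+)\.(\d+)\.(\d+)\].*?\)",
                           _example_ac_init, re.DOTALL)
assert _example_match.groups() == ("1", "7", "0")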
version_expression = re.compile(r"AC_INIT\(.*?\[(\d+)\.(\d+)\.(\d+)\].*?\)", re.DOTALL) version_values = version_expression.search(opening_string) major_version = version_values.group(1) minor_version = version_values.group(2) fix_version = version_values.group(3) # Write the version information to a header file to be included in the # Info.plist file. file_data = """// // DO NOT MODIFY THIS FILE (but you can delete it) // // This file is autogenerated by the versiongenerate.py script. This script // is executed in a "Run Script" build phase when creating gtest.framework. This // header file is not used during compilation of C-source. Rather, it simply // defines some version strings for substitution in the Info.plist. Because of // this, we are not not restricted to C-syntax nor are we using include guards. // #define GTEST_VERSIONINFO_SHORT %s.%s #define GTEST_VERSIONINFO_LONG %s.%s.%s """ % (major_version, minor_version, major_version, minor_version, fix_version) version_file = open("%s/Version.h" % output_dir, 'w') version_file.write(file_data) version_file.close() ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/gtest.xcodeproj/000077500000000000000000000000001346026536000225175ustar00rootroot00000000000000ffc-2019.1.0.post0/libs/gtest-1.7.0/xcode/gtest.xcodeproj/project.pbxproj000066400000000000000000001434341346026536000256040ustar00rootroot00000000000000// !$*UTF8*$! { archiveVersion = 1; classes = { }; objectVersion = 46; objects = { /* Begin PBXAggregateTarget section */ 3B238F5F0E828B5400846E11 /* Check */ = { isa = PBXAggregateTarget; buildConfigurationList = 3B238FA30E828BB600846E11 /* Build configuration list for PBXAggregateTarget "Check" */; buildPhases = ( 3B238F5E0E828B5400846E11 /* ShellScript */, ); dependencies = ( 40899F9D0FFA740F000B29AE /* PBXTargetDependency */, 40C849F7101A43440083642A /* PBXTargetDependency */, 4089A0980FFAD34A000B29AE /* PBXTargetDependency */, 40C849F9101A43490083642A /* PBXTargetDependency */, ); name = Check; productName = Check; }; 40C44ADC0E3798F4008FCC51 /* Version Info */ = { isa = PBXAggregateTarget; buildConfigurationList = 40C44AE40E379905008FCC51 /* Build configuration list for PBXAggregateTarget "Version Info" */; buildPhases = ( 40C44ADB0E3798F4008FCC51 /* Generate Version.h */, ); comments = "The generation of Version.h must be performed in its own target. 
Since the Info.plist is preprocessed before any of the other build phases in gtest, the Version.h file would not be ready if included as a build phase of that target."; dependencies = ( ); name = "Version Info"; productName = Version.h; }; /* End PBXAggregateTarget section */ /* Begin PBXBuildFile section */ 224A12A30E9EADCC00BD17FD /* gtest-test-part.h in Headers */ = {isa = PBXBuildFile; fileRef = 224A12A20E9EADCC00BD17FD /* gtest-test-part.h */; settings = {ATTRIBUTES = (Public, ); }; }; 3BF6F2A00E79B5AD000F2EEE /* gtest-type-util.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 3BF6F29F0E79B5AD000F2EEE /* gtest-type-util.h */; }; 3BF6F2A50E79B616000F2EEE /* gtest-typed-test.h in Headers */ = {isa = PBXBuildFile; fileRef = 3BF6F2A40E79B616000F2EEE /* gtest-typed-test.h */; settings = {ATTRIBUTES = (Public, ); }; }; 404884380E2F799B00CF7658 /* gtest-death-test.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883DB0E2F799B00CF7658 /* gtest-death-test.h */; settings = {ATTRIBUTES = (Public, ); }; }; 404884390E2F799B00CF7658 /* gtest-message.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883DC0E2F799B00CF7658 /* gtest-message.h */; settings = {ATTRIBUTES = (Public, ); }; }; 4048843A0E2F799B00CF7658 /* gtest-spi.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883DD0E2F799B00CF7658 /* gtest-spi.h */; settings = {ATTRIBUTES = (Public, ); }; }; 4048843B0E2F799B00CF7658 /* gtest.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883DE0E2F799B00CF7658 /* gtest.h */; settings = {ATTRIBUTES = (Public, ); }; }; 4048843C0E2F799B00CF7658 /* gtest_pred_impl.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883DF0E2F799B00CF7658 /* gtest_pred_impl.h */; settings = {ATTRIBUTES = (Public, ); }; }; 4048843D0E2F799B00CF7658 /* gtest_prod.h in Headers */ = {isa = PBXBuildFile; fileRef = 404883E00E2F799B00CF7658 /* gtest_prod.h */; settings = {ATTRIBUTES = (Public, ); }; }; 404884500E2F799B00CF7658 /* README in Resources */ = {isa = PBXBuildFile; fileRef = 404883F60E2F799B00CF7658 /* README */; }; 404884A00E2F7BE600CF7658 /* gtest-death-test-internal.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 404883E20E2F799B00CF7658 /* gtest-death-test-internal.h */; }; 404884A10E2F7BE600CF7658 /* gtest-filepath.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 404883E30E2F799B00CF7658 /* gtest-filepath.h */; }; 404884A20E2F7BE600CF7658 /* gtest-internal.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 404883E40E2F799B00CF7658 /* gtest-internal.h */; }; 404884A30E2F7BE600CF7658 /* gtest-port.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 404883E50E2F799B00CF7658 /* gtest-port.h */; }; 404884A40E2F7BE600CF7658 /* gtest-string.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 404883E60E2F799B00CF7658 /* gtest-string.h */; }; 404884AC0E2F7CD900CF7658 /* CHANGES in Resources */ = {isa = PBXBuildFile; fileRef = 404884A90E2F7CD900CF7658 /* CHANGES */; }; 404884AD0E2F7CD900CF7658 /* CONTRIBUTORS in Resources */ = {isa = PBXBuildFile; fileRef = 404884AA0E2F7CD900CF7658 /* CONTRIBUTORS */; }; 404884AE0E2F7CD900CF7658 /* LICENSE in Resources */ = {isa = PBXBuildFile; fileRef = 404884AB0E2F7CD900CF7658 /* LICENSE */; }; 40899F3A0FFA70D4000B29AE /* gtest-all.cc in Sources */ = {isa = PBXBuildFile; fileRef = 224A12A10E9EADA700BD17FD /* gtest-all.cc */; }; 40899F500FFA7281000B29AE /* gtest-tuple.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 40899F4D0FFA7271000B29AE /* gtest-tuple.h */; }; 40899F530FFA72A0000B29AE /* 
gtest_unittest.cc in Sources */ = {isa = PBXBuildFile; fileRef = 3B238C120E7FE13C00846E11 /* gtest_unittest.cc */; }; 4089A0440FFAD1BE000B29AE /* sample1.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4089A02C0FFACF7F000B29AE /* sample1.cc */; }; 4089A0460FFAD1BE000B29AE /* sample1_unittest.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4089A02E0FFACF7F000B29AE /* sample1_unittest.cc */; }; 40C848FF101A21150083642A /* gtest-all.cc in Sources */ = {isa = PBXBuildFile; fileRef = 224A12A10E9EADA700BD17FD /* gtest-all.cc */; }; 40C84915101A21DF0083642A /* gtest_main.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4048840D0E2F799B00CF7658 /* gtest_main.cc */; }; 40C84916101A235B0083642A /* libgtest_main.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C8490B101A217E0083642A /* libgtest_main.a */; }; 40C84921101A23AD0083642A /* libgtest_main.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C8490B101A217E0083642A /* libgtest_main.a */; }; 40C84978101A36540083642A /* libgtest_main.a in Resources */ = {isa = PBXBuildFile; fileRef = 40C8490B101A217E0083642A /* libgtest_main.a */; }; 40C84980101A36850083642A /* gtest_unittest.cc in Sources */ = {isa = PBXBuildFile; fileRef = 3B238C120E7FE13C00846E11 /* gtest_unittest.cc */; }; 40C84982101A36850083642A /* libgtest.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C848FA101A209C0083642A /* libgtest.a */; }; 40C84983101A36850083642A /* libgtest_main.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C8490B101A217E0083642A /* libgtest_main.a */; }; 40C8498F101A36A60083642A /* sample1.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4089A02C0FFACF7F000B29AE /* sample1.cc */; }; 40C84990101A36A60083642A /* sample1_unittest.cc in Sources */ = {isa = PBXBuildFile; fileRef = 4089A02E0FFACF7F000B29AE /* sample1_unittest.cc */; }; 40C84992101A36A60083642A /* libgtest.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C848FA101A209C0083642A /* libgtest.a */; }; 40C84993101A36A60083642A /* libgtest_main.a in Frameworks */ = {isa = PBXBuildFile; fileRef = 40C8490B101A217E0083642A /* libgtest_main.a */; }; 40C849A2101A37050083642A /* gtest.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 4539C8FF0EC27F6400A70F4C /* gtest.framework */; }; 40C849A4101A37150083642A /* gtest.framework in Frameworks */ = {isa = PBXBuildFile; fileRef = 4539C8FF0EC27F6400A70F4C /* gtest.framework */; }; 4539C9340EC280AE00A70F4C /* gtest-param-test.h in Headers */ = {isa = PBXBuildFile; fileRef = 4539C9330EC280AE00A70F4C /* gtest-param-test.h */; settings = {ATTRIBUTES = (Public, ); }; }; 4539C9380EC280E200A70F4C /* gtest-linked_ptr.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 4539C9350EC280E200A70F4C /* gtest-linked_ptr.h */; }; 4539C9390EC280E200A70F4C /* gtest-param-util-generated.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 4539C9360EC280E200A70F4C /* gtest-param-util-generated.h */; }; 4539C93A0EC280E200A70F4C /* gtest-param-util.h in Copy Headers Internal */ = {isa = PBXBuildFile; fileRef = 4539C9370EC280E200A70F4C /* gtest-param-util.h */; }; 4567C8181264FF71007740BE /* gtest-printers.h in Headers */ = {isa = PBXBuildFile; fileRef = 4567C8171264FF71007740BE /* gtest-printers.h */; settings = {ATTRIBUTES = (Public, ); }; }; /* End PBXBuildFile section */ /* Begin PBXContainerItemProxy section */ 40899F9C0FFA740F000B29AE /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40899F420FFA7184000B29AE; 
remoteInfo = gtest_unittest; }; 4089A0970FFAD34A000B29AE /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 4089A0120FFACEFC000B29AE; remoteInfo = sample1_unittest; }; 408BEC0F1046CFE900DEF522 /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C848F9101A209C0083642A; remoteInfo = "gtest-static"; }; 40C44AE50E379922008FCC51 /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C44ADC0E3798F4008FCC51; remoteInfo = Version.h; }; 40C8497C101A36850083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C848F9101A209C0083642A; remoteInfo = "gtest-static"; }; 40C8497E101A36850083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C8490A101A217E0083642A; remoteInfo = "gtest_main-static"; }; 40C8498B101A36A60083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C848F9101A209C0083642A; remoteInfo = "gtest-static"; }; 40C8498D101A36A60083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C8490A101A217E0083642A; remoteInfo = "gtest_main-static"; }; 40C8499B101A36DC0083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C8490A101A217E0083642A; remoteInfo = "gtest_main-static"; }; 40C8499D101A36E50083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 8D07F2BC0486CC7A007CD1D0; remoteInfo = "gtest-framework"; }; 40C8499F101A36F10083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 8D07F2BC0486CC7A007CD1D0; remoteInfo = "gtest-framework"; }; 40C849F6101A43440083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C8497A101A36850083642A; remoteInfo = "gtest_unittest-static"; }; 40C849F8101A43490083642A /* PBXContainerItemProxy */ = { isa = PBXContainerItemProxy; containerPortal = 0867D690FE84028FC02AAC07 /* Project object */; proxyType = 1; remoteGlobalIDString = 40C84989101A36A60083642A; remoteInfo = "sample1_unittest-static"; }; /* End PBXContainerItemProxy section */ /* Begin PBXCopyFilesBuildPhase section */ 404884A50E2F7C0400CF7658 /* Copy Headers Internal */ = { isa = PBXCopyFilesBuildPhase; buildActionMask = 2147483647; dstPath = Headers/internal; dstSubfolderSpec = 6; files = ( 404884A00E2F7BE600CF7658 /* gtest-death-test-internal.h in Copy Headers Internal */, 404884A10E2F7BE600CF7658 /* gtest-filepath.h in Copy Headers Internal */, 404884A20E2F7BE600CF7658 /* gtest-internal.h in Copy Headers Internal */, 4539C9380EC280E200A70F4C /* 
gtest-linked_ptr.h in Copy Headers Internal */, 4539C9390EC280E200A70F4C /* gtest-param-util-generated.h in Copy Headers Internal */, 4539C93A0EC280E200A70F4C /* gtest-param-util.h in Copy Headers Internal */, 404884A30E2F7BE600CF7658 /* gtest-port.h in Copy Headers Internal */, 404884A40E2F7BE600CF7658 /* gtest-string.h in Copy Headers Internal */, 40899F500FFA7281000B29AE /* gtest-tuple.h in Copy Headers Internal */, 3BF6F2A00E79B5AD000F2EEE /* gtest-type-util.h in Copy Headers Internal */, ); name = "Copy Headers Internal"; runOnlyForDeploymentPostprocessing = 0; }; /* End PBXCopyFilesBuildPhase section */ /* Begin PBXFileReference section */ 224A12A10E9EADA700BD17FD /* gtest-all.cc */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.cpp.cpp; path = "gtest-all.cc"; sourceTree = ""; }; 224A12A20E9EADCC00BD17FD /* gtest-test-part.h */ = {isa = PBXFileReference; fileEncoding = 30; lastKnownFileType = sourcecode.c.h; path = "gtest-test-part.h"; sourceTree = ""; }; 3B238C120E7FE13C00846E11 /* gtest_unittest.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = gtest_unittest.cc; sourceTree = ""; }; 3B87D2100E96B92E000D1852 /* runtests.sh */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.sh; path = runtests.sh; sourceTree = ""; }; 3BF6F29F0E79B5AD000F2EEE /* gtest-type-util.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-type-util.h"; sourceTree = ""; }; 3BF6F2A40E79B616000F2EEE /* gtest-typed-test.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-typed-test.h"; sourceTree = ""; }; 403EE37C0E377822004BD1E2 /* versiongenerate.py */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.script.python; path = versiongenerate.py; sourceTree = ""; }; 404883DB0E2F799B00CF7658 /* gtest-death-test.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-death-test.h"; sourceTree = ""; }; 404883DC0E2F799B00CF7658 /* gtest-message.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-message.h"; sourceTree = ""; }; 404883DD0E2F799B00CF7658 /* gtest-spi.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-spi.h"; sourceTree = ""; }; 404883DE0E2F799B00CF7658 /* gtest.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = gtest.h; sourceTree = ""; }; 404883DF0E2F799B00CF7658 /* gtest_pred_impl.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = gtest_pred_impl.h; sourceTree = ""; }; 404883E00E2F799B00CF7658 /* gtest_prod.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = gtest_prod.h; sourceTree = ""; }; 404883E20E2F799B00CF7658 /* gtest-death-test-internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-death-test-internal.h"; sourceTree = ""; }; 404883E30E2F799B00CF7658 /* gtest-filepath.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-filepath.h"; sourceTree = ""; }; 404883E40E2F799B00CF7658 /* gtest-internal.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-internal.h"; sourceTree = ""; }; 404883E50E2F799B00CF7658 /* gtest-port.h */ = {isa = PBXFileReference; fileEncoding = 4; 
lastKnownFileType = sourcecode.c.h; path = "gtest-port.h"; sourceTree = ""; }; 404883E60E2F799B00CF7658 /* gtest-string.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-string.h"; sourceTree = ""; }; 404883F60E2F799B00CF7658 /* README */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = README; path = ../README; sourceTree = SOURCE_ROOT; }; 4048840D0E2F799B00CF7658 /* gtest_main.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = gtest_main.cc; sourceTree = ""; }; 404884A90E2F7CD900CF7658 /* CHANGES */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = CHANGES; path = ../CHANGES; sourceTree = SOURCE_ROOT; }; 404884AA0E2F7CD900CF7658 /* CONTRIBUTORS */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = CONTRIBUTORS; path = ../CONTRIBUTORS; sourceTree = SOURCE_ROOT; }; 404884AB0E2F7CD900CF7658 /* LICENSE */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text; name = LICENSE; path = ../LICENSE; sourceTree = SOURCE_ROOT; }; 40899F430FFA7184000B29AE /* gtest_unittest-framework */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = "gtest_unittest-framework"; sourceTree = BUILT_PRODUCTS_DIR; }; 40899F4D0FFA7271000B29AE /* gtest-tuple.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-tuple.h"; sourceTree = ""; }; 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = StaticLibraryTarget.xcconfig; sourceTree = ""; }; 4089A0130FFACEFC000B29AE /* sample1_unittest-framework */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = "sample1_unittest-framework"; sourceTree = BUILT_PRODUCTS_DIR; }; 4089A02C0FFACF7F000B29AE /* sample1.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = sample1.cc; sourceTree = ""; }; 4089A02D0FFACF7F000B29AE /* sample1.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = sample1.h; sourceTree = ""; }; 4089A02E0FFACF7F000B29AE /* sample1_unittest.cc */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.cpp.cpp; path = sample1_unittest.cc; sourceTree = ""; }; 40C848FA101A209C0083642A /* libgtest.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libgtest.a; sourceTree = BUILT_PRODUCTS_DIR; }; 40C8490B101A217E0083642A /* libgtest_main.a */ = {isa = PBXFileReference; explicitFileType = archive.ar; includeInIndex = 0; path = libgtest_main.a; sourceTree = BUILT_PRODUCTS_DIR; }; 40C84987101A36850083642A /* gtest_unittest */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = gtest_unittest; sourceTree = BUILT_PRODUCTS_DIR; }; 40C84997101A36A60083642A /* sample1_unittest-static */ = {isa = PBXFileReference; explicitFileType = "compiled.mach-o.executable"; includeInIndex = 0; path = "sample1_unittest-static"; sourceTree = BUILT_PRODUCTS_DIR; }; 40D4CDF10E30E07400294801 /* DebugProject.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = DebugProject.xcconfig; sourceTree = ""; }; 40D4CDF20E30E07400294801 /* FrameworkTarget.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = 
text.xcconfig; path = FrameworkTarget.xcconfig; sourceTree = ""; }; 40D4CDF30E30E07400294801 /* General.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = General.xcconfig; sourceTree = ""; }; 40D4CDF40E30E07400294801 /* ReleaseProject.xcconfig */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.xcconfig; path = ReleaseProject.xcconfig; sourceTree = ""; }; 40D4CF510E30F5E200294801 /* Info.plist */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = text.plist.xml; path = Info.plist; sourceTree = ""; }; 4539C8FF0EC27F6400A70F4C /* gtest.framework */ = {isa = PBXFileReference; explicitFileType = wrapper.framework; includeInIndex = 0; path = gtest.framework; sourceTree = BUILT_PRODUCTS_DIR; }; 4539C9330EC280AE00A70F4C /* gtest-param-test.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-param-test.h"; sourceTree = ""; }; 4539C9350EC280E200A70F4C /* gtest-linked_ptr.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-linked_ptr.h"; sourceTree = ""; }; 4539C9360EC280E200A70F4C /* gtest-param-util-generated.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-param-util-generated.h"; sourceTree = ""; }; 4539C9370EC280E200A70F4C /* gtest-param-util.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-param-util.h"; sourceTree = ""; }; 4567C8171264FF71007740BE /* gtest-printers.h */ = {isa = PBXFileReference; fileEncoding = 4; lastKnownFileType = sourcecode.c.h; path = "gtest-printers.h"; sourceTree = ""; }; /* End PBXFileReference section */ /* Begin PBXFrameworksBuildPhase section */ 40899F410FFA7184000B29AE /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( 40C849A4101A37150083642A /* gtest.framework in Frameworks */, 40C84916101A235B0083642A /* libgtest_main.a in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; 4089A0110FFACEFC000B29AE /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( 40C849A2101A37050083642A /* gtest.framework in Frameworks */, 40C84921101A23AD0083642A /* libgtest_main.a in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C84981101A36850083642A /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( 40C84982101A36850083642A /* libgtest.a in Frameworks */, 40C84983101A36850083642A /* libgtest_main.a in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C84991101A36A60083642A /* Frameworks */ = { isa = PBXFrameworksBuildPhase; buildActionMask = 2147483647; files = ( 40C84992101A36A60083642A /* libgtest.a in Frameworks */, 40C84993101A36A60083642A /* libgtest_main.a in Frameworks */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXFrameworksBuildPhase section */ /* Begin PBXGroup section */ 034768DDFF38A45A11DB9C8B /* Products */ = { isa = PBXGroup; children = ( 4539C8FF0EC27F6400A70F4C /* gtest.framework */, 40C848FA101A209C0083642A /* libgtest.a */, 40C8490B101A217E0083642A /* libgtest_main.a */, 40899F430FFA7184000B29AE /* gtest_unittest-framework */, 40C84987101A36850083642A /* gtest_unittest */, 4089A0130FFACEFC000B29AE /* sample1_unittest-framework */, 40C84997101A36A60083642A /* sample1_unittest-static */, ); name = Products; sourceTree = ""; }; 0867D691FE84028FC02AAC07 /* gtest */ = { isa = PBXGroup; children = ( 40D4CDF00E30E07400294801 /* 
Config */, 08FB77ACFE841707C02AAC07 /* Source */, 40D4CF4E0E30F5E200294801 /* Resources */, 403EE37B0E377822004BD1E2 /* Scripts */, 034768DDFF38A45A11DB9C8B /* Products */, ); name = gtest; sourceTree = ""; }; 08FB77ACFE841707C02AAC07 /* Source */ = { isa = PBXGroup; children = ( 404884A90E2F7CD900CF7658 /* CHANGES */, 404884AA0E2F7CD900CF7658 /* CONTRIBUTORS */, 404884AB0E2F7CD900CF7658 /* LICENSE */, 404883F60E2F799B00CF7658 /* README */, 404883D90E2F799B00CF7658 /* include */, 4089A02F0FFACF84000B29AE /* samples */, 404884070E2F799B00CF7658 /* src */, 3B238BF00E7FE13B00846E11 /* test */, ); name = Source; sourceTree = ""; }; 3B238BF00E7FE13B00846E11 /* test */ = { isa = PBXGroup; children = ( 3B238C120E7FE13C00846E11 /* gtest_unittest.cc */, ); name = test; path = ../test; sourceTree = SOURCE_ROOT; }; 403EE37B0E377822004BD1E2 /* Scripts */ = { isa = PBXGroup; children = ( 403EE37C0E377822004BD1E2 /* versiongenerate.py */, 3B87D2100E96B92E000D1852 /* runtests.sh */, ); path = Scripts; sourceTree = ""; }; 404883D90E2F799B00CF7658 /* include */ = { isa = PBXGroup; children = ( 404883DA0E2F799B00CF7658 /* gtest */, ); name = include; path = ../include; sourceTree = SOURCE_ROOT; }; 404883DA0E2F799B00CF7658 /* gtest */ = { isa = PBXGroup; children = ( 404883E10E2F799B00CF7658 /* internal */, 224A12A20E9EADCC00BD17FD /* gtest-test-part.h */, 404883DB0E2F799B00CF7658 /* gtest-death-test.h */, 404883DC0E2F799B00CF7658 /* gtest-message.h */, 4539C9330EC280AE00A70F4C /* gtest-param-test.h */, 4567C8171264FF71007740BE /* gtest-printers.h */, 404883DD0E2F799B00CF7658 /* gtest-spi.h */, 404883DE0E2F799B00CF7658 /* gtest.h */, 404883DF0E2F799B00CF7658 /* gtest_pred_impl.h */, 404883E00E2F799B00CF7658 /* gtest_prod.h */, 3BF6F2A40E79B616000F2EEE /* gtest-typed-test.h */, ); path = gtest; sourceTree = ""; }; 404883E10E2F799B00CF7658 /* internal */ = { isa = PBXGroup; children = ( 404883E20E2F799B00CF7658 /* gtest-death-test-internal.h */, 404883E30E2F799B00CF7658 /* gtest-filepath.h */, 404883E40E2F799B00CF7658 /* gtest-internal.h */, 4539C9350EC280E200A70F4C /* gtest-linked_ptr.h */, 4539C9360EC280E200A70F4C /* gtest-param-util-generated.h */, 4539C9370EC280E200A70F4C /* gtest-param-util.h */, 404883E50E2F799B00CF7658 /* gtest-port.h */, 404883E60E2F799B00CF7658 /* gtest-string.h */, 40899F4D0FFA7271000B29AE /* gtest-tuple.h */, 3BF6F29F0E79B5AD000F2EEE /* gtest-type-util.h */, ); path = internal; sourceTree = ""; }; 404884070E2F799B00CF7658 /* src */ = { isa = PBXGroup; children = ( 224A12A10E9EADA700BD17FD /* gtest-all.cc */, 4048840D0E2F799B00CF7658 /* gtest_main.cc */, ); name = src; path = ../src; sourceTree = SOURCE_ROOT; }; 4089A02F0FFACF84000B29AE /* samples */ = { isa = PBXGroup; children = ( 4089A02C0FFACF7F000B29AE /* sample1.cc */, 4089A02D0FFACF7F000B29AE /* sample1.h */, 4089A02E0FFACF7F000B29AE /* sample1_unittest.cc */, ); name = samples; path = ../samples; sourceTree = SOURCE_ROOT; }; 40D4CDF00E30E07400294801 /* Config */ = { isa = PBXGroup; children = ( 40D4CDF10E30E07400294801 /* DebugProject.xcconfig */, 40D4CDF20E30E07400294801 /* FrameworkTarget.xcconfig */, 40D4CDF30E30E07400294801 /* General.xcconfig */, 40D4CDF40E30E07400294801 /* ReleaseProject.xcconfig */, 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */, ); path = Config; sourceTree = ""; }; 40D4CF4E0E30F5E200294801 /* Resources */ = { isa = PBXGroup; children = ( 40D4CF510E30F5E200294801 /* Info.plist */, ); path = Resources; sourceTree = ""; }; /* End PBXGroup section */ /* Begin PBXHeadersBuildPhase section */ 
8D07F2BD0486CC7A007CD1D0 /* Headers */ = { isa = PBXHeadersBuildPhase; buildActionMask = 2147483647; files = ( 404884380E2F799B00CF7658 /* gtest-death-test.h in Headers */, 404884390E2F799B00CF7658 /* gtest-message.h in Headers */, 4539C9340EC280AE00A70F4C /* gtest-param-test.h in Headers */, 4567C8181264FF71007740BE /* gtest-printers.h in Headers */, 3BF6F2A50E79B616000F2EEE /* gtest-typed-test.h in Headers */, 4048843A0E2F799B00CF7658 /* gtest-spi.h in Headers */, 4048843B0E2F799B00CF7658 /* gtest.h in Headers */, 4048843C0E2F799B00CF7658 /* gtest_pred_impl.h in Headers */, 4048843D0E2F799B00CF7658 /* gtest_prod.h in Headers */, 224A12A30E9EADCC00BD17FD /* gtest-test-part.h in Headers */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXHeadersBuildPhase section */ /* Begin PBXNativeTarget section */ 40899F420FFA7184000B29AE /* gtest_unittest-framework */ = { isa = PBXNativeTarget; buildConfigurationList = 40899F4A0FFA71BC000B29AE /* Build configuration list for PBXNativeTarget "gtest_unittest-framework" */; buildPhases = ( 40899F400FFA7184000B29AE /* Sources */, 40899F410FFA7184000B29AE /* Frameworks */, ); buildRules = ( ); dependencies = ( 40C849A0101A36F10083642A /* PBXTargetDependency */, ); name = "gtest_unittest-framework"; productName = gtest_unittest; productReference = 40899F430FFA7184000B29AE /* gtest_unittest-framework */; productType = "com.apple.product-type.tool"; }; 4089A0120FFACEFC000B29AE /* sample1_unittest-framework */ = { isa = PBXNativeTarget; buildConfigurationList = 4089A0240FFACF01000B29AE /* Build configuration list for PBXNativeTarget "sample1_unittest-framework" */; buildPhases = ( 4089A0100FFACEFC000B29AE /* Sources */, 4089A0110FFACEFC000B29AE /* Frameworks */, ); buildRules = ( ); dependencies = ( 40C8499E101A36E50083642A /* PBXTargetDependency */, ); name = "sample1_unittest-framework"; productName = sample1_unittest; productReference = 4089A0130FFACEFC000B29AE /* sample1_unittest-framework */; productType = "com.apple.product-type.tool"; }; 40C848F9101A209C0083642A /* gtest-static */ = { isa = PBXNativeTarget; buildConfigurationList = 40C84902101A212E0083642A /* Build configuration list for PBXNativeTarget "gtest-static" */; buildPhases = ( 40C848F7101A209C0083642A /* Sources */, ); buildRules = ( ); dependencies = ( ); name = "gtest-static"; productName = "gtest-static"; productReference = 40C848FA101A209C0083642A /* libgtest.a */; productType = "com.apple.product-type.library.static"; }; 40C8490A101A217E0083642A /* gtest_main-static */ = { isa = PBXNativeTarget; buildConfigurationList = 40C84912101A21D20083642A /* Build configuration list for PBXNativeTarget "gtest_main-static" */; buildPhases = ( 40C84908101A217E0083642A /* Sources */, ); buildRules = ( ); dependencies = ( ); name = "gtest_main-static"; productName = "gtest_main-static"; productReference = 40C8490B101A217E0083642A /* libgtest_main.a */; productType = "com.apple.product-type.library.static"; }; 40C8497A101A36850083642A /* gtest_unittest-static */ = { isa = PBXNativeTarget; buildConfigurationList = 40C84984101A36850083642A /* Build configuration list for PBXNativeTarget "gtest_unittest-static" */; buildPhases = ( 40C8497F101A36850083642A /* Sources */, 40C84981101A36850083642A /* Frameworks */, ); buildRules = ( ); dependencies = ( 40C8497B101A36850083642A /* PBXTargetDependency */, 40C8497D101A36850083642A /* PBXTargetDependency */, ); name = "gtest_unittest-static"; productName = gtest_unittest; productReference = 40C84987101A36850083642A /* gtest_unittest */; productType = 
"com.apple.product-type.tool"; }; 40C84989101A36A60083642A /* sample1_unittest-static */ = { isa = PBXNativeTarget; buildConfigurationList = 40C84994101A36A60083642A /* Build configuration list for PBXNativeTarget "sample1_unittest-static" */; buildPhases = ( 40C8498E101A36A60083642A /* Sources */, 40C84991101A36A60083642A /* Frameworks */, ); buildRules = ( ); dependencies = ( 40C8498A101A36A60083642A /* PBXTargetDependency */, 40C8498C101A36A60083642A /* PBXTargetDependency */, ); name = "sample1_unittest-static"; productName = sample1_unittest; productReference = 40C84997101A36A60083642A /* sample1_unittest-static */; productType = "com.apple.product-type.tool"; }; 8D07F2BC0486CC7A007CD1D0 /* gtest-framework */ = { isa = PBXNativeTarget; buildConfigurationList = 4FADC24208B4156D00ABE55E /* Build configuration list for PBXNativeTarget "gtest-framework" */; buildPhases = ( 8D07F2C10486CC7A007CD1D0 /* Sources */, 8D07F2BD0486CC7A007CD1D0 /* Headers */, 404884A50E2F7C0400CF7658 /* Copy Headers Internal */, 8D07F2BF0486CC7A007CD1D0 /* Resources */, ); buildRules = ( ); dependencies = ( 40C44AE60E379922008FCC51 /* PBXTargetDependency */, 408BEC101046CFE900DEF522 /* PBXTargetDependency */, 40C8499C101A36DC0083642A /* PBXTargetDependency */, ); name = "gtest-framework"; productInstallPath = "$(HOME)/Library/Frameworks"; productName = gtest; productReference = 4539C8FF0EC27F6400A70F4C /* gtest.framework */; productType = "com.apple.product-type.framework"; }; /* End PBXNativeTarget section */ /* Begin PBXProject section */ 0867D690FE84028FC02AAC07 /* Project object */ = { isa = PBXProject; attributes = { LastUpgradeCheck = 0460; }; buildConfigurationList = 4FADC24608B4156D00ABE55E /* Build configuration list for PBXProject "gtest" */; compatibilityVersion = "Xcode 3.2"; developmentRegion = English; hasScannedForEncodings = 1; knownRegions = ( English, Japanese, French, German, en, ); mainGroup = 0867D691FE84028FC02AAC07 /* gtest */; productRefGroup = 034768DDFF38A45A11DB9C8B /* Products */; projectDirPath = ""; projectRoot = ""; targets = ( 8D07F2BC0486CC7A007CD1D0 /* gtest-framework */, 40C848F9101A209C0083642A /* gtest-static */, 40C8490A101A217E0083642A /* gtest_main-static */, 40899F420FFA7184000B29AE /* gtest_unittest-framework */, 40C8497A101A36850083642A /* gtest_unittest-static */, 4089A0120FFACEFC000B29AE /* sample1_unittest-framework */, 40C84989101A36A60083642A /* sample1_unittest-static */, 3B238F5F0E828B5400846E11 /* Check */, 40C44ADC0E3798F4008FCC51 /* Version Info */, ); }; /* End PBXProject section */ /* Begin PBXResourcesBuildPhase section */ 8D07F2BF0486CC7A007CD1D0 /* Resources */ = { isa = PBXResourcesBuildPhase; buildActionMask = 2147483647; files = ( 404884500E2F799B00CF7658 /* README in Resources */, 404884AC0E2F7CD900CF7658 /* CHANGES in Resources */, 404884AD0E2F7CD900CF7658 /* CONTRIBUTORS in Resources */, 404884AE0E2F7CD900CF7658 /* LICENSE in Resources */, 40C84978101A36540083642A /* libgtest_main.a in Resources */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXResourcesBuildPhase section */ /* Begin PBXShellScriptBuildPhase section */ 3B238F5E0E828B5400846E11 /* ShellScript */ = { isa = PBXShellScriptBuildPhase; buildActionMask = 2147483647; files = ( ); inputPaths = ( ); outputPaths = ( ); runOnlyForDeploymentPostprocessing = 0; shellPath = /bin/sh; shellScript = "# Remember, this \"Run Script\" build phase will be executed from $SRCROOT\n/bin/bash Scripts/runtests.sh"; }; 40C44ADB0E3798F4008FCC51 /* Generate Version.h */ = { isa = 
PBXShellScriptBuildPhase; buildActionMask = 2147483647; files = ( ); inputPaths = ( "$(SRCROOT)/Scripts/versiongenerate.py", "$(SRCROOT)/../configure.ac", ); name = "Generate Version.h"; outputPaths = ( "$(PROJECT_TEMP_DIR)/Version.h", ); runOnlyForDeploymentPostprocessing = 0; shellPath = /bin/sh; shellScript = "# Remember, this \"Run Script\" build phase will be executed from $SRCROOT\n/usr/bin/python Scripts/versiongenerate.py ../ $PROJECT_TEMP_DIR"; }; /* End PBXShellScriptBuildPhase section */ /* Begin PBXSourcesBuildPhase section */ 40899F400FFA7184000B29AE /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40899F530FFA72A0000B29AE /* gtest_unittest.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 4089A0100FFACEFC000B29AE /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 4089A0440FFAD1BE000B29AE /* sample1.cc in Sources */, 4089A0460FFAD1BE000B29AE /* sample1_unittest.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C848F7101A209C0083642A /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40C848FF101A21150083642A /* gtest-all.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C84908101A217E0083642A /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40C84915101A21DF0083642A /* gtest_main.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C8497F101A36850083642A /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40C84980101A36850083642A /* gtest_unittest.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 40C8498E101A36A60083642A /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40C8498F101A36A60083642A /* sample1.cc in Sources */, 40C84990101A36A60083642A /* sample1_unittest.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; 8D07F2C10486CC7A007CD1D0 /* Sources */ = { isa = PBXSourcesBuildPhase; buildActionMask = 2147483647; files = ( 40899F3A0FFA70D4000B29AE /* gtest-all.cc in Sources */, ); runOnlyForDeploymentPostprocessing = 0; }; /* End PBXSourcesBuildPhase section */ /* Begin PBXTargetDependency section */ 40899F9D0FFA740F000B29AE /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40899F420FFA7184000B29AE /* gtest_unittest-framework */; targetProxy = 40899F9C0FFA740F000B29AE /* PBXContainerItemProxy */; }; 4089A0980FFAD34A000B29AE /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 4089A0120FFACEFC000B29AE /* sample1_unittest-framework */; targetProxy = 4089A0970FFAD34A000B29AE /* PBXContainerItemProxy */; }; 408BEC101046CFE900DEF522 /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C848F9101A209C0083642A /* gtest-static */; targetProxy = 408BEC0F1046CFE900DEF522 /* PBXContainerItemProxy */; }; 40C44AE60E379922008FCC51 /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C44ADC0E3798F4008FCC51 /* Version Info */; targetProxy = 40C44AE50E379922008FCC51 /* PBXContainerItemProxy */; }; 40C8497B101A36850083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C848F9101A209C0083642A /* gtest-static */; targetProxy = 40C8497C101A36850083642A /* PBXContainerItemProxy */; }; 40C8497D101A36850083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C8490A101A217E0083642A /* gtest_main-static */; targetProxy = 40C8497E101A36850083642A /* PBXContainerItemProxy */; }; 40C8498A101A36A60083642A /* 
PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C848F9101A209C0083642A /* gtest-static */; targetProxy = 40C8498B101A36A60083642A /* PBXContainerItemProxy */; }; 40C8498C101A36A60083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C8490A101A217E0083642A /* gtest_main-static */; targetProxy = 40C8498D101A36A60083642A /* PBXContainerItemProxy */; }; 40C8499C101A36DC0083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C8490A101A217E0083642A /* gtest_main-static */; targetProxy = 40C8499B101A36DC0083642A /* PBXContainerItemProxy */; }; 40C8499E101A36E50083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 8D07F2BC0486CC7A007CD1D0 /* gtest-framework */; targetProxy = 40C8499D101A36E50083642A /* PBXContainerItemProxy */; }; 40C849A0101A36F10083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 8D07F2BC0486CC7A007CD1D0 /* gtest-framework */; targetProxy = 40C8499F101A36F10083642A /* PBXContainerItemProxy */; }; 40C849F7101A43440083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C8497A101A36850083642A /* gtest_unittest-static */; targetProxy = 40C849F6101A43440083642A /* PBXContainerItemProxy */; }; 40C849F9101A43490083642A /* PBXTargetDependency */ = { isa = PBXTargetDependency; target = 40C84989101A36A60083642A /* sample1_unittest-static */; targetProxy = 40C849F8101A43490083642A /* PBXContainerItemProxy */; }; /* End PBXTargetDependency section */ /* Begin XCBuildConfiguration section */ 3B238F600E828B5400846E11 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { COMBINE_HIDPI_IMAGES = YES; COPY_PHASE_STRIP = NO; GCC_DYNAMIC_NO_PIC = NO; GCC_OPTIMIZATION_LEVEL = 0; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = Check; SDKROOT = macosx; }; name = Debug; }; 3B238F610E828B5400846E11 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { COMBINE_HIDPI_IMAGES = YES; COPY_PHASE_STRIP = YES; DEBUG_INFORMATION_FORMAT = "dwarf-with-dsym"; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = Check; SDKROOT = macosx; ZERO_LINK = NO; }; name = Release; }; 40899F450FFA7185000B29AE /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ../; PRODUCT_NAME = "gtest_unittest-framework"; SDKROOT = macosx; }; name = Debug; }; 40899F460FFA7185000B29AE /* Release */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ../; PRODUCT_NAME = "gtest_unittest-framework"; SDKROOT = macosx; }; name = Release; }; 4089A0150FFACEFD000B29AE /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = "sample1_unittest-framework"; SDKROOT = macosx; }; name = Debug; }; 4089A0160FFACEFD000B29AE /* Release */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = "sample1_unittest-framework"; SDKROOT = macosx; }; name = Release; }; 40C44ADF0E3798F4008FCC51 /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; MACOSX_DEPLOYMENT_TARGET = 10.7; PRODUCT_NAME = gtest; SDKROOT = macosx; TARGET_NAME = gtest; }; name = Debug; }; 40C44AE00E3798F4008FCC51 /* Release */ = { isa = XCBuildConfiguration; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; 
MACOSX_DEPLOYMENT_TARGET = 10.7; PRODUCT_NAME = gtest; SDKROOT = macosx; TARGET_NAME = gtest; }; name = Release; }; 40C848FB101A209D0083642A /* Debug */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_INLINES_ARE_PRIVATE_EXTERN = YES; GCC_SYMBOLS_PRIVATE_EXTERN = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, ../include/, ); PRODUCT_NAME = gtest; SDKROOT = macosx; }; name = Debug; }; 40C848FC101A209D0083642A /* Release */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_INLINES_ARE_PRIVATE_EXTERN = YES; GCC_SYMBOLS_PRIVATE_EXTERN = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, ../include/, ); PRODUCT_NAME = gtest; SDKROOT = macosx; }; name = Release; }; 40C8490E101A217F0083642A /* Debug */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, ../include/, ); PRODUCT_NAME = gtest_main; SDKROOT = macosx; }; name = Debug; }; 40C8490F101A217F0083642A /* Release */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40899FB30FFA7567000B29AE /* StaticLibraryTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, ../include/, ); PRODUCT_NAME = gtest_main; SDKROOT = macosx; }; name = Release; }; 40C84985101A36850083642A /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ../; PRODUCT_NAME = gtest_unittest; SDKROOT = macosx; }; name = Debug; }; 40C84986101A36850083642A /* Release */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ../; PRODUCT_NAME = gtest_unittest; SDKROOT = macosx; }; name = Release; }; 40C84995101A36A60083642A /* Debug */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = "sample1_unittest-static"; SDKROOT = macosx; }; name = Debug; }; 40C84996101A36A60083642A /* Release */ = { isa = XCBuildConfiguration; buildSettings = { GCC_VERSION = com.apple.compilers.llvm.clang.1_0; PRODUCT_NAME = "sample1_unittest-static"; SDKROOT = macosx; }; name = Release; }; 4FADC24308B4156D00ABE55E /* Debug */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40D4CDF20E30E07400294801 /* FrameworkTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; DYLIB_COMPATIBILITY_VERSION = 1; DYLIB_CURRENT_VERSION = 1; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, ../include/, ); INFOPLIST_FILE = Resources/Info.plist; INFOPLIST_PREFIX_HEADER = "$(PROJECT_TEMP_DIR)/Version.h"; INFOPLIST_PREPROCESS = YES; PRODUCT_NAME = gtest; SDKROOT = macosx; VERSIONING_SYSTEM = "apple-generic"; }; name = Debug; }; 4FADC24408B4156D00ABE55E /* Release */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40D4CDF20E30E07400294801 /* FrameworkTarget.xcconfig */; buildSettings = { COMBINE_HIDPI_IMAGES = YES; DYLIB_COMPATIBILITY_VERSION = 1; DYLIB_CURRENT_VERSION = 1; GCC_VERSION = com.apple.compilers.llvm.clang.1_0; HEADER_SEARCH_PATHS = ( ../, 
../include/, ); INFOPLIST_FILE = Resources/Info.plist; INFOPLIST_PREFIX_HEADER = "$(PROJECT_TEMP_DIR)/Version.h"; INFOPLIST_PREPROCESS = YES; PRODUCT_NAME = gtest; SDKROOT = macosx; VERSIONING_SYSTEM = "apple-generic"; }; name = Release; }; 4FADC24708B4156D00ABE55E /* Debug */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40D4CDF10E30E07400294801 /* DebugProject.xcconfig */; buildSettings = { }; name = Debug; }; 4FADC24808B4156D00ABE55E /* Release */ = { isa = XCBuildConfiguration; baseConfigurationReference = 40D4CDF40E30E07400294801 /* ReleaseProject.xcconfig */; buildSettings = { }; name = Release; }; /* End XCBuildConfiguration section */ /* Begin XCConfigurationList section */ 3B238FA30E828BB600846E11 /* Build configuration list for PBXAggregateTarget "Check" */ = { isa = XCConfigurationList; buildConfigurations = ( 3B238F600E828B5400846E11 /* Debug */, 3B238F610E828B5400846E11 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40899F4A0FFA71BC000B29AE /* Build configuration list for PBXNativeTarget "gtest_unittest-framework" */ = { isa = XCConfigurationList; buildConfigurations = ( 40899F450FFA7185000B29AE /* Debug */, 40899F460FFA7185000B29AE /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4089A0240FFACF01000B29AE /* Build configuration list for PBXNativeTarget "sample1_unittest-framework" */ = { isa = XCConfigurationList; buildConfigurations = ( 4089A0150FFACEFD000B29AE /* Debug */, 4089A0160FFACEFD000B29AE /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40C44AE40E379905008FCC51 /* Build configuration list for PBXAggregateTarget "Version Info" */ = { isa = XCConfigurationList; buildConfigurations = ( 40C44ADF0E3798F4008FCC51 /* Debug */, 40C44AE00E3798F4008FCC51 /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40C84902101A212E0083642A /* Build configuration list for PBXNativeTarget "gtest-static" */ = { isa = XCConfigurationList; buildConfigurations = ( 40C848FB101A209D0083642A /* Debug */, 40C848FC101A209D0083642A /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40C84912101A21D20083642A /* Build configuration list for PBXNativeTarget "gtest_main-static" */ = { isa = XCConfigurationList; buildConfigurations = ( 40C8490E101A217F0083642A /* Debug */, 40C8490F101A217F0083642A /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40C84984101A36850083642A /* Build configuration list for PBXNativeTarget "gtest_unittest-static" */ = { isa = XCConfigurationList; buildConfigurations = ( 40C84985101A36850083642A /* Debug */, 40C84986101A36850083642A /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 40C84994101A36A60083642A /* Build configuration list for PBXNativeTarget "sample1_unittest-static" */ = { isa = XCConfigurationList; buildConfigurations = ( 40C84995101A36A60083642A /* Debug */, 40C84996101A36A60083642A /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4FADC24208B4156D00ABE55E /* Build configuration list for PBXNativeTarget "gtest-framework" */ = { isa = XCConfigurationList; buildConfigurations = ( 4FADC24308B4156D00ABE55E /* Debug */, 4FADC24408B4156D00ABE55E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; 4FADC24608B4156D00ABE55E /* Build configuration list for PBXProject "gtest" */ = { isa 
= XCConfigurationList; buildConfigurations = ( 4FADC24708B4156D00ABE55E /* Debug */, 4FADC24808B4156D00ABE55E /* Release */, ); defaultConfigurationIsVisible = 0; defaultConfigurationName = Release; }; /* End XCConfigurationList section */ }; rootObject = 0867D690FE84028FC02AAC07 /* Project object */; } ffc-2019.1.0.post0/requirements.txt000066400000000000000000000002761346026536000170440ustar00rootroot00000000000000-e git+https://bitbucket.org/fenics-project/fiat.git#egg=fiat -e git+https://bitbucket.org/fenics-project/dijitso.git#egg=dijitso -e git+https://bitbucket.org/fenics-project/ufl.git#egg=ufl ffc-2019.1.0.post0/setup.cfg000066400000000000000000000011501346026536000153710ustar00rootroot00000000000000[tool:pytest] norecursedirs=libs doc test/uflacs/crosslanguage/generated [flake8] max-line-length = 100 exclude = .git,__pycache__,docs/source/conf.py,build,dist,libs ignore = # TODO: Enable these after fixing one by one, these may hide bugs F403,F405,F812,F999,F401,F841,F821, # TODO: Some of these may also be important E501,E203,E265,E701,E702,E703,E302,E402,E122, E222,E303,E129,E251,E225,E221,E731,E272, E131,E115,E127,E261,E128,E202,E231,E201,E266, E301,E262,E401,E124,E227,E126,E121,E704,E502,E241,E123,E226, # TODO: Some of these may also be important W503,W391,W291,W293 ffc-2019.1.0.post0/setup.py000077500000000000000000000112251346026536000152710ustar00rootroot00000000000000# -*- coding: utf-8 -*- import os import sys import subprocess import string from setuptools import setup, find_packages if sys.version_info < (3, 5): print("Python 3.5 or higher required, please upgrade.") sys.exit(1) on_rtd = os.environ.get('READTHEDOCS') == 'True' VERSION = "2019.1.0.post0" RESTRICT_REQUIREMENTS = ">=2019.1.0,<2019.2" if on_rtd: REQUIREMENTS = [] else: REQUIREMENTS = [ "numpy", "fenics-fiat%s" % RESTRICT_REQUIREMENTS, "fenics-ufl%s" % RESTRICT_REQUIREMENTS, "fenics-dijitso%s" % RESTRICT_REQUIREMENTS, ] URL = "https://bitbucket.org/fenics-project/ffc/" ENTRY_POINTS = {'console_scripts': ['ffc = ffc.__main__:main', 'ffc-3 = ffc.__main__:main']} AUTHORS = """\ Anders Logg, Kristian Oelgaard, Marie Rognes, Garth N. Wells, Martin Sandve Alnæs, Hans Petter Langtangen, Kent-Andre Mardal, Ola Skavhaug, et al. 
""" CLASSIFIERS = """\ Development Status :: 5 - Production/Stable Intended Audience :: Developers Intended Audience :: Science/Research License :: OSI Approved :: GNU Lesser General Public License v3 or later (LGPLv3+) Operating System :: POSIX Operating System :: POSIX :: Linux Operating System :: MacOS :: MacOS X Operating System :: Microsoft :: Windows Programming Language :: Python Programming Language :: Python :: 3 Programming Language :: Python :: 3.5 Programming Language :: Python :: 3.6 Topic :: Scientific/Engineering :: Mathematics Topic :: Software Development :: Libraries :: Python Modules Topic :: Software Development :: Code Generators """ def tarball(): if "dev" in VERSION: return None return URL + "downloads/fenics-ffc-%s.tar.gz" % VERSION def get_installation_prefix(): "Get installation prefix" prefix = sys.prefix for arg in sys.argv[1:]: if "--user" in arg: import site prefix = site.getuserbase() break elif arg in ("--prefix", "--home", "--install-base"): prefix = sys.argv[sys.argv.index(arg) + 1] break elif "--prefix=" in arg or "--home=" in arg or "--install-base=" in arg: prefix = arg.split("=")[1] break return os.path.abspath(os.path.expanduser(prefix)) def get_git_commit_hash(): """Return git commit hash of currently checked out revision or "unknown" """ try: hash = subprocess.check_output(['git', 'rev-parse', 'HEAD']) except (OSError, subprocess.CalledProcessError) as e: print('Retrieving git commit hash did not succeed with exception:') print('"%s"' % str(e)) print() print('Stored git commit hash will be set to "unknown"!') return "unknown" else: return hash.decode("ascii").strip() def write_config_file(infile, outfile, variables={}): "Write config file based on template" class AtTemplate(string.Template): delimiter = "@" s = AtTemplate(open(infile, "r").read()) s = s.substitute(**variables) with open(outfile, "w") as a: a.write(s) def generate_git_hash_file(GIT_COMMIT_HASH): "Generate module with git hash" write_config_file(os.path.join("ffc", "git_commit_hash.py.in"), os.path.join("ffc", "git_commit_hash.py"), variables=dict(GIT_COMMIT_HASH=GIT_COMMIT_HASH)) def run_install(): "Run installation" # Get common variables #INSTALL_PREFIX = get_installation_prefix() GIT_COMMIT_HASH = get_git_commit_hash() # Scripts list entry_points = ENTRY_POINTS # Generate module with git hash from template generate_git_hash_file(GIT_COMMIT_HASH) # FFC data files data_files = [(os.path.join("share", "man", "man1"), [os.path.join("doc", "man", "man1", "ffc.1.gz")])] # Call distutils to perform installation setup(name="fenics-ffc", description="The FEniCS Form Compiler", version=VERSION, author=AUTHORS, classifiers=[_f for _f in CLASSIFIERS.split('\n') if _f], license="LGPL version 3 or later", author_email="fenics-dev@googlegroups.com", maintainer_email="fenics-dev@googlegroups.com", url=URL, download_url=tarball(), platforms=["Windows", "Linux", "Solaris", "Mac OS-X", "Unix"], packages=find_packages("."), package_dir={"ffc": "ffc"}, package_data={"ffc" : [os.path.join('backends', 'ufc', '*.h')]}, #scripts=scripts, # Using entry_points instead entry_points=entry_points, data_files=data_files, install_requires=REQUIREMENTS, zip_safe=False) if __name__ == "__main__": run_install() 
ffc-2019.1.0.post0/test/000077500000000000000000000000001346026536000145325ustar00rootroot00000000000000ffc-2019.1.0.post0/test/regression/000077500000000000000000000000001346026536000167125ustar00rootroot00000000000000ffc-2019.1.0.post0/test/regression/README.rst000066400000000000000000000067131346026536000204100ustar00rootroot00000000000000How to run regression tests =========================== To run regression tests with default parameters, simply run:: cd /tests/regression/ python test.py Look at ``test.py`` for more options. How to update references ======================== To update the references for the FFC regression tests, first commit your changes, then run the regression test (to generate the new references) and finally run the script upload:: cd /tests/regression/ python test.py [--use-tsfc] ./scripts/upload Note: Contributors are encouraged to install also TSFC stack and update references including ``tsfc`` representation using ``--use-tsfc`` flag. For the installation instructions see ``doc/sphinx/source/installation.rst``. Note that ``tsfc`` regression test are run in separate plans on the Bamboo CI system. Note: You may be asked for your *Bitbucket* username and password when uploading the reference data, if use of ssh keys fails. Note: The upload script will push the new references to the ``ffc-reference-data`` repository. This is harmless even if these references are not needed later. Note: The upload script will update the file ``ffc-regression-data-id`` and commit this change to the currently active branch, remember to include this commit when merging or pushing your changes elsewhere. Note: You can cherry-pick the commit that updated ``ffc-regression-data-id`` into another branch to use the same set of references there. Note: If you ever get merge conflicts in the ``ffc-regression-data-id``, always pick one version of the file. Most likely you'll need to update the references again. How to run regression tests against a different set of regression data ====================================================================== To run regression tests and compare to a different set of regression data, perhaps to see what has changed in generated code since a certain version, check out the ``ffc-regression-data-id`` file you want and run tests as usual:: cd /tests/regression/ git checkout ffc-regression-data-id python test.py The test.py script will run scripts/download which will check out the regression data with the commit id from ``ffc-regression-data-id`` in ``ffc-regression-data/``. How to inspect diff in output from executed generated code ========================================================== Say you have differences in the output of PoissonDG, you can diff the ``.json`` files (NB! within some tolerance on the floating point accuracy!) like this:: python recdiff.py ffc-reference-data/r_uflacs/PoissonDG.json output/r_uflacs/PoissonDG.json python recdiff.py output/r_uflacs/PoissonDG.json output/r_tensor/PoissonDG.json Pick any combination of ``ffc-reference-data | output`` and ``r_foo | r_bar`` you want to compare. How to manually delete old reference data ========================================= If you update the tests such that some data should no longer be kept in the data repository, this approach allows deleting reference data:: cd ffc-regression-data git checkout master git pull git rm -rf git commit -a -m"Manually updated data because ..." git rev-parse HEAD > ../ffc-regression-data-id cd .. 
git commit ffc-regression-data-id -m"Manually updated reference id." This is not automated because it happens rarely. Probably a good idea to coordinate with other devs so they don't introduce the deleted files with another branch. ffc-2019.1.0.post0/test/regression/elements.py000066400000000000000000000035351346026536000211060ustar00rootroot00000000000000# -*- coding: utf-8 -*- interval_2D = "Cell('interval', geometric_dimension=2)" interval_3D = "Cell('interval', geometric_dimension=3)" triangle_3D = "Cell('triangle', geometric_dimension=3)" elements = ["FiniteElement('N1curl', triangle, 2)", "MixedElement([FiniteElement('Lagrange', triangle, 3), \ VectorElement('Lagrange', triangle, 3)['facet']])", "VectorElement('R', triangle, 0, 3)", "VectorElement('DG', %s, 1)" % interval_2D, "VectorElement('DG', %s, 1)" % interval_3D, "VectorElement('DG', %s, 1)" % triangle_3D, "MixedElement([VectorElement('CG', %s, 2), \ FiniteElement('CG', %s, 1)])" % (interval_2D, interval_2D), "MixedElement([VectorElement('CG', %s, 2), \ FiniteElement('CG', %s, 1)])" % (interval_3D, interval_3D), "MixedElement([VectorElement('CG', %s, 2), \ FiniteElement('CG', %s, 1)])" % (triangle_3D, triangle_3D), "MixedElement([FiniteElement('RT', %s, 2), \ FiniteElement('BDM', %s, 1), \ FiniteElement('N1curl', %s, 1), \ FiniteElement('DG', %s, 1)])" % (triangle_3D, triangle_3D, triangle_3D, triangle_3D), "FiniteElement('Regge', triangle, 2)", "MixedElement([FiniteElement('HHJ', triangle, 2), \ FiniteElement('CG', triangle, 3)])", ] ffc-2019.1.0.post0/test/regression/ffc-reference-data-id000066400000000000000000000000511346026536000226240ustar00rootroot000000000000000ccb7c89950a324261bb684835790e214b31874b ffc-2019.1.0.post0/test/regression/printer.h000066400000000000000000000064321346026536000205530ustar00rootroot00000000000000#ifndef PRINTER_H_INCLUDED #define PRINTER_H_INCLUDED #include #include #include #include #include #include class Printer { protected: // Precision in output of floats const std::size_t precision; const double epsilon; // Output stream std::ostream & os; // Indentation level int level; public: Printer(std::ostream & os): precision(16), epsilon(1e-16), os(os), level(0) {} /// Indent to current level void indent() { for (int i=0; i= 0) s << "_" << i; if (j >= 0) s << "_" << j; return s.str(); } /// Format '"foo": ' properly void begin_entry(std::string name, int i=-1, int j=-1) { name = format_name(name, i, j); indent(); os << '"' << name << '"' << ": "; } /// Begin an unnamed block void begin() { os << "{" << std::endl; ++level; } /// Begin a named block entry void begin(std::string name, int i=-1, int j=-1) { begin_entry(name, i, j); begin(); } /// End current named or unnamed block void end() { --level; assert(level >= 0); indent(); if (level > 0) os << "}," << std::endl; else os << "}" << std::endl; } /// Format a value type properly template void print_value(T value); /// Set a named single value entry template void print_scalar(std::string name, T value, int i=-1, int j=-1) { begin_entry(name, i, j); print_value(value); os << ", " << std::endl; } /// Set a named array valued entry template void print_array(std::string name, int n, T * values, int i=-1, int j=-1) { begin_entry(name, i, j); os << "["; if (n > 0) print_value(values[0]); for (int k=1; k void print_vector(std::string name, typename std::vector values, int i=-1, int j=-1) { begin_entry(name, i, j); os << "["; typename std::vector::iterator k=values.begin(); if (k!=values.end()) { print_value(*k); ++k; } while (k!=values.end()) { os << ", 
"; print_value(*k); ++k; } os << "], " << std::endl; } }; /// Fallback formatting for any value type template void Printer::print_value(T value) { os << value; } /// Use precision for floats template<> void Printer::print_value(double value) { os.setf(std::ios::scientific, std::ios::floatfield); os.precision(precision); if (std::abs(static_cast(value)) < epsilon) os << "0.0"; else os << value; } /// Use precision for floats template<> void Printer::print_value(float value) { print_value(static_cast(value)); } /// Wrap strings in quotes template<> void Printer::print_value(std::string value) { os << '"' << value << '"'; } /// Wrap strings in quotes template<> void Printer::print_value(const char * value) { os << '"' << value << '"'; } #endif ffc-2019.1.0.post0/test/regression/recdiff.py000066400000000000000000000143731346026536000206760ustar00rootroot00000000000000# -*- coding: utf-8 -*- class DiffMarkerType: def __init__(self, name): self.name = name def __str__(self): return self.name def __repr__(self): return self.name DiffMissing = DiffMarkerType("") DiffEqual = DiffMarkerType("") _default_recdiff_tolerance = 1e-5 def recdiff_dict(data1, data2, tolerance=_default_recdiff_tolerance): keys1 = set(data1.keys()) keys2 = set(data2.keys()) keys = keys1.intersection(keys2) diff = {} for k in keys1 - keys: diff[k] = (data1[k], DiffMissing) for k in keys2 - keys: diff[k] = (DiffMissing, data2[k]) for k in keys: d1 = data1[k] d2 = data2[k] d = recdiff(d1, d2, tolerance) if d is not DiffEqual: diff[k] = d return diff or DiffEqual def recdiff(data1, data2, tolerance=_default_recdiff_tolerance): if isinstance(data1, (float, int)) and isinstance(data2, (float, int)): # This approach allows numbers formatted as ints and floats interchangably as long as the values are equal delta = abs(data1 - data2) avg = (abs(data1) + abs(data2)) / 2.0 if 0: # Using relative comparison, i.e. a tolerance of 1e-2 means one percent error is acceptable eps = tolerance * avg same = avg < 1e-14 or delta < eps else: # Using absolute comparison, this is what the old .out comparison does same = delta < tolerance return DiffEqual if same else (data1, data2) elif not isinstance(data1, type(data2)): return (data1, data2) elif isinstance(data1, dict): return recdiff_dict(data1, data2, tolerance) elif isinstance(data1, list): diff = [recdiff(d1, d2, tolerance) for (d1, d2) in zip(data1, data2)] return DiffEqual if all(d is DiffEqual for d in diff) else diff else: return DiffEqual if data1 == data2 else (data1, data2) def _print(line): print(line) def print_recdiff(diff, indent=0, printer=_print, prekey=""): if isinstance(diff, dict): for k in sorted(diff.keys()): key = str(k) if prekey: key = ".".join((prekey, key)) printer("%s%s: " % (" " * indent, key)) print_recdiff(diff[k], indent + 1, printer, key) elif isinstance(diff, list): # Limiting this to lists of scalar values! 
for i, d in enumerate(diff): if isinstance(d, tuple): data1, data2 = d printer("%s%d: %s != %s" % (" " * indent, i, data1, data2)) elif isinstance(diff, tuple): assert len(diff) == 2 data1, data2 = diff data1 = str(data1) data2 = str(data2) if len(data1) + len(data2) + 2 * indent + 4 > 70: printer("%s%s" % (" " * indent, data1)) printer("%s!=" % (" " * indent)) printer("%s%s" % (" " * indent, data2)) else: printer("%s%s != %s" % (" " * indent, data1, data2)) # ---------- Unittest code import unittest # from recdiff import recdiff, print_recdiff, DiffEqual, DiffMissing class RecDiffTestCase(unittest.TestCase): def assertEqual(self, a, b): if not (a == b): print(a) print(b) assert a == b def assertDiffEqual(self, diff): self.assertEqual(diff, DiffEqual) def test_recdiff_equal_items(self): self.assertDiffEqual(recdiff(1, 1)) self.assertDiffEqual(recdiff(0, 0)) self.assertDiffEqual(recdiff(0, 1e-15)) self.assertDiffEqual(recdiff(1.1, 1.1 + 1e-7, tolerance=1e-6)) self.assertDiffEqual(recdiff(1.1, 1.1 - 1e-7, tolerance=1e-6)) self.assertDiffEqual(recdiff("foo", "foo")) def test_recdiff_not_equal_items(self): self.assertEqual(recdiff(1, 2), (1, 2)) self.assertEqual(recdiff(0, 0.0001), (0, 0.0001)) self.assertEqual(recdiff(0, 1e-13), (0, 1e-13)) self.assertEqual(recdiff(1.1, 1.2 + 1e-7, tolerance=1e-6), (1.1, 1.2 + 1e-7)) self.assertEqual(recdiff(1.1, 1.2 - 1e-7, tolerance=1e-6), (1.1, 1.2 - 1e-7)) self.assertEqual(recdiff("foo", "bar"), ("foo", "bar")) def test_recdiff_equal_list(self): self.assertDiffEqual(recdiff([1, 2], [1, 2])) def test_recdiff_not_equal_list(self): self.assertEqual(recdiff([1, 2], [1, 3]), [DiffEqual, (2, 3)]) def test_recdiff_equal_dict(self): self.assertDiffEqual(recdiff({1: 2}, {1: 2})) def test_recdiff_not_equal_dict(self): self.assertEqual(recdiff({1: 2, 2: 3}, {1: 3, 3: 4}), {1: (2, 3), 2: (3, DiffMissing), 3: (DiffMissing, 4)}) def test_recdiff_equal_dict_hierarchy(self): self.assertDiffEqual(recdiff({1: {2: {3: 4, 5: 6}}}, {1: {2: {3: 4, 5: 6}}})) def test_recdiff_not_equal_dict_hierarchy(self): self.assertEqual(recdiff({1: {2: {3: 4, 5: 6}}}, {1: {2: {3: 4, 5: 7}}}), {1: {2: {5: (6, 7)}}}) def test_example(self): form1 = { "num_coefficients": 2, "num_arguments": 2, "has_default_cell_integral": 1, "cell_integrals": {0: {"tabulate_tensor_input1": ["data"]}}, } form2 = eval("""{ "num_coefficients": 2, "rank": 2, "has_default_cell_integral": 0, "cell_integrals": { 0: { "tabulate_tensor_input1": ["data2"] } }, }""") actual_diff = recdiff(form1, form2) if 0: print_recdiff(actual_diff) expected_diff = { #"num_coefficients": DiffEqual, "num_arguments": (2, DiffMissing), "rank": (DiffMissing, 2), "has_default_cell_integral": (1, 0), "cell_integrals": {0: {"tabulate_tensor_input1": [("data", "data2")]}}, } self.assertEqual(actual_diff, expected_diff) def main(a, b, tolerance=_default_recdiff_tolerance): print("Running diff on files %s and %s" % (a, b)) a = eval(open(a).read()) b = eval(open(b).read()) d = recdiff(a, b, float(tolerance)) print_recdiff(d) if __name__ == "__main__": import sys args = sys.argv[1:] if not args: # Hack to be able to use this as a script, TODO: do something nicer print("No arguments, running tests.") unittest.main() else: main(*args) ffc-2019.1.0.post0/test/regression/scripts/000077500000000000000000000000001346026536000204015ustar00rootroot00000000000000ffc-2019.1.0.post0/test/regression/scripts/download000077500000000000000000000022731346026536000221420ustar00rootroot00000000000000#!/bin/bash # # Copyright (C) 2013 Anders Logg and Martin Sandve 
Alnaes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Johannes Ring, 2013-04-23 # # First added: 2013-04-22 # Last changed: 2013-08-20 # # This script downloads the reference data for the FFC regression tests # and updates to the reference data version specified by the data id file. # Parameters source ./scripts/parameters # Get updated reference repository ./scripts/getreferencerepo if [ $? -ne 0 ]; then exit 1 fi # Checkout data referenced by id file ./scripts/getdata if [ $? -ne 0 ]; then exit 1 fi ffc-2019.1.0.post0/test/regression/scripts/getdata000077500000000000000000000022111346026536000217340ustar00rootroot00000000000000#!/bin/bash # # Copyright (C) 2013 Anders Logg and Martin Sandve Alnaes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2013-04-22 # Last changed: 2013-08-21 # # This script checks out reference data by the given commit id, # or if none given using the commit id found in data id file. # Parameters source scripts/parameters # Take data id as optional argument or get from file DATA_ID=$1 && [ -z "$DATA_ID" ] && DATA_ID=`cat $DATA_ID_FILE` # Checkout data referenced by id (cd $DATA_DIR && git checkout -B auto $DATA_ID) exit $? ffc-2019.1.0.post0/test/regression/scripts/getreferencerepo000077500000000000000000000036121346026536000236550ustar00rootroot00000000000000#!/bin/bash # # Copyright (C) 2013 Anders Logg and Martin Sandve Alnaes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2013-04-22 # Last changed: 2013-08-21 # # This script overwrites the reference data with the current output # and stores the new reference data as part of the FFC reference data # repository. # Parameters source ./scripts/parameters # Get reference repository if [ ! 
-d "$DATA_DIR" ]; then echo "Cloning reference data repository" git clone $DATA_REPO_GIT if [ ! -d "$DATA_DIR" ]; then git clone $DATA_REPO_HTTPS fi else pushd $DATA_DIR echo "Found existing reference data repository, pulling new data" git checkout master if [ $? -ne 0 ]; then echo "Failed to checkout master, check state of reference data directory." exit 1 fi git fetch if [ $? -ne 0 ]; then echo "WARNING: Failed to fetch latest reference data from server." else git pull if [ $? -ne 0 ]; then echo "Failed to pull latest reference data from server, possibly a merge situation." exit 1 fi fi popd fi # Check that we had success with getting reference repository if [ ! -d "$DATA_DIR" ]; then echo "Failed to update reference data directory '$DATA_DIR'." exit 1 fi ffc-2019.1.0.post0/test/regression/scripts/parameters000077500000000000000000000003551346026536000224750ustar00rootroot00000000000000OUTPUT_DIR="output" DATA_REPO_GIT="git@bitbucket.org:fenics-project/ffc-reference-data.git" DATA_REPO_HTTPS="https://bitbucket.org/fenics-project/ffc-reference-data.git" DATA_DIR="ffc-reference-data" DATA_ID_FILE="ffc-reference-data-id" ffc-2019.1.0.post0/test/regression/scripts/upload000077500000000000000000000043241346026536000216160ustar00rootroot00000000000000#!/bin/bash # # Copyright (C) 2013 Anders Logg and Martin Sandve Alnaes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # First added: 2013-04-22 # Last changed: 2013-08-21 # # This script overwrites the reference data with the current output # and stores the new reference data as part of the FFC reference data # repository. The commit id of the stored reference data is commited # to a file in the main repo. # Parameters source ./scripts/parameters # Get updated reference repository ./scripts/getreferencerepo if [ $? -ne 0 ]; then exit 1 fi # Check that we have any data if [ ! -d "$OUTPUT_DIR" ]; then echo "Missing data directory '$OUTPUT_DIR'." exit 1 fi # Copy references echo "Copying new reference data to $DATA_DIR" rsync -r --exclude='README.rst' --exclude='*.build' --exclude='*.bin' --exclude='*.cpp' --exclude='*~' --exclude='.*' --exclude='*#*' $OUTPUT_DIR/ $DATA_DIR echo "" # Get current id for main repo (does not include dirty files, so not quite trustworthy!) REPO_ID=`git rev-list --max-count 1 HEAD` # Commit new data to reference repository pushd $DATA_DIR git add * git commit -m "Update reference data, current project head is ${REPO_ID}." | grep -v "create mode" if [ $? -ne 0 ]; then echo "Failed to commit reference data." exit 1 fi DATA_ID=`git rev-list --max-count 1 HEAD` popd # Commit reference data commit id to file in main repo echo $DATA_ID > $DATA_ID_FILE git commit $DATA_ID_FILE -m"Update reference data pointer to ${DATA_ID}." # Push references to server pushd $DATA_DIR git push if [ $? -ne 0 ]; then echo "WARNING: Failed to push new reference data to server." 
fi popd ffc-2019.1.0.post0/test/regression/test.py000066400000000000000000000613121346026536000202460ustar00rootroot00000000000000# -*- coding: utf-8 -*- """This script compiles and verifies the output for all form files found in the 'demo' directory. The verification is performed in two steps. First, the generated code is compared with stored references. Then, the output from all functions in the generated code is compared with stored reference values. This script can also be used for benchmarking tabulate_tensor for all form files found in the 'bench' directory. To run benchmarks, use the option --bench. """ # Copyright (C) 2010-2013 Anders Logg, Kristian B. Oelgaard and Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Martin Sandve Alnæs, 2013-2017 # Modified by Johannes Ring, 2013 # Modified by Kristian B. Oelgaard, 2013 # Modified by Garth N. Wells, 2014 # FIXME: Need to add many more test cases. Quite a few DOLFIN forms # failed after the FFC tests passed. from collections import OrderedDict, defaultdict import os import sys import shutil import difflib import sysconfig import subprocess import time import logging import traceback from numpy import array, shape, abs, max, isnan import ffc from ffc.log import begin, end, info, info_red, info_green, info_blue from ffc.log import ffc_logger, ERROR from ufl.log import ufl_logger from ufl.utils.str import as_native_str from ffc import get_ufc_cxx_flags from ffc.backends.ufc import get_include_path as get_ufc_include from ufctest import generate_test_code # Parameters output_tolerance = 1e-5 demo_directory = "../../../../demo" bench_directory = "../../../../bench" # Global log file logfile = "error.log" # Remove old log file if os.path.isfile(logfile): os.remove(logfile) class GEFilter(object): """Filter messages that are greater or equal to given log level""" def __init__(self, level): self.__level = level def filter(self, record): return record.levelno >= self.__level class LTFilter(object): """Filter messages that are less than given log level""" def __init__(self, level): self.__level = level def filter(self, record): return record.levelno < self.__level # Filter out error messages from std output splitlevel = ERROR ffc_logger.get_handler().addFilter(LTFilter(splitlevel)) ufl_logger.get_handler().addFilter(LTFilter(splitlevel)) # Filter out error messages to log file file_handler = logging.FileHandler(logfile) file_handler.addFilter(GEFilter(splitlevel)) ffc_logger.get_logger().addHandler(file_handler) ufl_logger.get_logger().addHandler(file_handler) # Extended quadrature tests (optimisations) ext_quad = [ "-r quadrature -O -feliminate_zeros", "-r quadrature -O -fsimplify_expressions", "-r quadrature -O -fprecompute_ip_const", "-r quadrature -O -fprecompute_basis_const", "-r quadrature -O -fprecompute_ip_const -feliminate_zeros", "-r quadrature -O -fprecompute_basis_const -feliminate_zeros", ] # Extended uflacs tests # (to be extended 
with optimisation parameters later) ext_uflacs = [ "-r uflacs -O -fvectorize -fpadlen=4 -falignas=32", "-r uflacs -O -fno-enable_sum_factorization", "-r uflacs -O -fno-enable_preintegration", "-r uflacs -O -fenable_premultiplication", ] known_quad_failures = set([ "PoissonQuad.ufl", ]) known_uflacs_failures = set([ "CustomIntegral.ufl", "CustomMixedIntegral.ufl", "CustomVectorIntegral.ufl", "MetaData.ufl", ]) known_tsfc_failures = set([ # Expected not to work "CustomIntegral.ufl", "CustomMixedIntegral.ufl", "CustomVectorIntegral.ufl", "MetaData.ufl", ]) _command_timings = [] def run_command(command, verbose): "Run command and collect errors in log file." global _command_timings t1 = time.time() try: output = as_native_str(subprocess.check_output(command, shell=True)) t2 = time.time() _command_timings.append((command, t2 - t1)) if verbose: print(output) return True except subprocess.CalledProcessError as e: t2 = time.time() _command_timings.append((command, t2 - t1)) if e.output: log_error(e.output) print(e.output) return False def log_error(message): "Log error message." ffc_logger.get_logger().error(message) def clean_output(output_directory): "Clean out old output directory" if os.path.isdir(output_directory): shutil.rmtree(output_directory) os.mkdir(output_directory) def generate_test_cases(bench, only_forms, skip_forms): "Generate form files for all test cases." begin("Generating test cases") # Copy form files if bench: form_directory = bench_directory else: form_directory = demo_directory # Make list of form files form_files = [f for f in os.listdir(form_directory) if f.endswith(".ufl") and f not in skip_forms] if only_forms: form_files = [f for f in form_files if f in only_forms] form_files.sort() for f in form_files: shutil.copy(os.path.join(form_directory, f), ".") info_green("Found %d form files" % len(form_files)) # Generate form files for forms #info("Generating form files for extra forms: Not implemented") # Generate form files for elements if not (bench or only_forms): from elements import elements info("Generating form files for extra elements (%d elements)" % len(elements)) for (i, element) in enumerate(elements): with open("X_Element%d.ufl" % i, "w") as f: f.write("element = %s" % element) end() def generate_code(args, only_forms, skip_forms, debug): "Generate code for all test cases." global _command_timings # Get a list of all files form_files = [f for f in os.listdir(".") if f.endswith(".ufl") and f not in skip_forms] if only_forms: form_files = [f for f in form_files if f in only_forms] form_files.sort() begin("Generating code (%d form files found)" % len(form_files)) # TODO: Parse additional options from .ufl file? I.e. grep for # some sort of tag like '#ffc: '. special = {"AdaptivePoisson.ufl": "-e", } failures = [] # Iterate over all files for f in form_files: options = [special.get(f, "")] options.extend(args) options.extend(["-f", "precision=8", "-f", "epsilon=1e-7", "-fconvert_exceptions_to_warnings"]) options.append(f) options = list(filter(None, options)) cmd = sys.executable + " -m ffc " + " ".join(options) # Generate code t1 = time.time() try: ok = ffc.main(options) except Exception as e: if debug: raise e msg = traceback.format_exc() log_error(cmd) log_error(msg) ok = 1 finally: t2 = time.time() _command_timings.append((cmd, t2 - t1)) # Check status if ok == 0: info_green("%s OK" % f) else: info_red("%s failed" % f) failures.append(f) end() return failures def validate_code(reference_dir): "Validate generated code against references." 
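    # Each generated .h file in the current output subdirectory is compared
    # verbatim against the same file under reference_dir (a subdirectory of
    # ffc-reference-data/); any mismatch is logged as a unified diff.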
# Get a list of all files header_files = sorted([f for f in os.listdir(".") if f.endswith(".h")]) begin("Validating generated code (%d header files found)" % len(header_files)) failures = [] # Iterate over all files for f in header_files: # Get generated code generated_code = open(f).read() # Get reference code reference_file = os.path.join(reference_dir, f) if os.path.isfile(reference_file): reference_code = open(reference_file).read() else: info_blue("Missing reference for %s" % reference_file) continue # Compare with reference if generated_code == reference_code: info_green("%s OK" % f) else: info_red("%s differs" % f) difflines = difflib.unified_diff( reference_code.split("\n"), generated_code.split("\n")) diff = "\n".join(difflines) s = ("Code differs for %s, diff follows (reference first, generated second)" % os.path.join(*reference_file.split(os.path.sep)[-3:])) log_error("\n" + s + "\n" + len(s) * "-") log_error(diff) failures.append(f) end() return failures def find_boost_cflags(): # Get Boost dir (code copied from ufc/src/utils/python/ufc_utils/build.py) # Set a default directory for the boost installation if sys.platform == "darwin": # Use Brew as default default = os.path.join(os.path.sep, "usr", "local") else: default = os.path.join(os.path.sep, "usr") # If BOOST_DIR is not set use default directory boost_inc_dir = "" boost_lib_dir = "" boost_math_tr1_lib = "boost_math_tr1" boost_dir = os.getenv("BOOST_DIR", default) boost_is_found = False for inc_dir in ["", "include"]: if os.path.isfile(os.path.join(boost_dir, inc_dir, "boost", "version.hpp")): boost_inc_dir = os.path.join(boost_dir, inc_dir) break libdir_multiarch = "lib/" + sysconfig.get_config_vars().get("MULTIARCH", "") for lib_dir in ["", "lib", libdir_multiarch, "lib64"]: for ext in [".so", "-mt.so", ".dylib", "-mt.dylib"]: _lib = os.path.join(boost_dir, lib_dir, "lib" + boost_math_tr1_lib + ext) if os.path.isfile(_lib): if "-mt" in _lib: boost_math_tr1_lib += "-mt" boost_lib_dir = os.path.join(boost_dir, lib_dir) break if boost_inc_dir != "" and boost_lib_dir != "": boost_is_found = True if boost_is_found: boost_cflags = " -I%s -L%s" % (boost_inc_dir, boost_lib_dir) boost_linkflags = "-l%s" % boost_math_tr1_lib else: boost_cflags = "" boost_linkflags = "" info_red("""The Boost library was not found. If Boost is installed in a nonstandard location, set the environment variable BOOST_DIR. Forms using bessel functions will fail to build. """) return boost_cflags, boost_linkflags def build_programs(bench, permissive, debug, verbose): "Build test programs for all test cases." 
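    # For every generated .h file a small C++ driver is emitted via
    # generate_test_code() and compiled into <prefix>.bin against the UFC
    # headers (Boost is linked only for MathFunctions); the exact compile
    # command is also saved to <prefix>.build for easy reproduction.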
# Get a list of all files header_files = sorted([f for f in os.listdir(".") if f.endswith(".h")]) begin("Building test programs (%d header files found)" % len(header_files)) # Get UFC flags ufc_cflags = "-I" + get_ufc_include() + " " + " ".join(get_ufc_cxx_flags()) # Get boost flags boost_cflags, boost_linkflags = find_boost_cflags() # Get compiler compiler = os.getenv("CXX", "g++") # Set compiler options compiler_options = " -Wall" if not permissive: compiler_options += " -Werror -pedantic" # Always need ufc compiler_options += " " + ufc_cflags if bench: info("Benchmarking activated") compiler_options += " -O3 -march=native" # Workaround for gcc bug: gcc is too eager to report array-bounds warning with -O3 compiler_options += " -Wno-array-bounds" if debug: info("Debugging activated") compiler_options += " -g -O0" info("Compiler options: %s" % compiler_options) failures = [] # Iterate over all files for f in header_files: prefix = f.split(".h")[0] # Options for all files cpp_flags = compiler_options ld_flags = "" # Only add boost flags if necessary needs_boost = prefix == "MathFunctions" if needs_boost: info("Additional compiler options for %s: %s" % (prefix, boost_cflags)) info("Additional linker options for %s: %s" % (prefix, boost_linkflags)) cpp_flags += " " + boost_cflags ld_flags += " " + boost_linkflags # Generate test code filename = generate_test_code(f) # Compile test code command = "%s %s -o %s.bin %s.cpp %s" % \ (compiler, cpp_flags, prefix, prefix, ld_flags) ok = run_command(command, verbose) # Store compile command for easy reproduction with open("%s.build" % (prefix,), "w") as f: f.write(command + "\n") # Check status if ok: info_green("%s OK" % prefix) else: info_red("%s failed" % prefix) failures.append(prefix) end() return failures def run_programs(bench, debug, verbose): "Run generated programs." # This matches argument parsing in the generated main files bench = 'b' if bench else '' # Get a list of all files test_programs = sorted([f for f in os.listdir(".") if f.endswith(".bin")]) begin("Running generated programs (%d programs found)" % len(test_programs)) failures = [] # Iterate over all files for f in test_programs: # Compile test code prefix = f.split(".bin")[0] ok = run_command(".%s%s.bin %s" % (os.path.sep, prefix, bench), verbose) # Check status if ok: info_green("%s OK" % f) else: info_red("%s failed" % f) failures.append(f) end() return failures def validate_programs(reference_dir): "Validate generated programs against references." # Get a list of all files output_files = sorted(f for f in os.listdir(".") if f.endswith(".json")) begin("Validating generated programs (%d .json program output files found)" % len(output_files)) failures = [] # Iterate over all files for fj in output_files: # Get generated json output if os.path.exists(fj): generated_json_output = open(fj).read() if "nan" in generated_json_output: info_red("Found nan in generated json output, replacing with 999 to be able to parse as python dict.") generated_json_output = generated_json_output.replace("nan", "999") else: generated_json_output = "{}" # Get reference json output reference_json_file = os.path.join(reference_dir, fj) if os.path.isfile(reference_json_file): reference_json_output = open(reference_json_file).read() else: info_blue("Missing reference for %s" % reference_json_file) reference_json_output = "{}" # Compare json with reference using recursive diff algorithm # TODO: Write to different error file? 
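        # recdiff() (defined in recdiff.py alongside this script) walks both
        # parsed dicts recursively and compares numbers with an absolute
        # tolerance (output_tolerance: 1e-5 by default, 1e-3 with --tolerant),
        # returning DiffEqual when everything matches.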
from recdiff import recdiff, print_recdiff, DiffEqual # Assuming reference is well formed reference_json_output = eval(reference_json_output) try: generated_json_output = eval(generated_json_output) except Exception as e: info_red("Failed to evaluate json output for %s" % fj) log_error(str(e)) generated_json_output = None json_diff = (None if generated_json_output is None else recdiff(generated_json_output, reference_json_output, tolerance=output_tolerance)) json_ok = json_diff == DiffEqual # Check status if json_ok: info_green("%s OK" % fj) else: info_red("%s differs" % fj) log_error("Json output differs for %s, diff follows (generated first, reference second)" % os.path.join(*reference_json_file.split(os.path.sep)[-3:])) print_recdiff(json_diff, printer=log_error) failures.append(fj) end() return failures def main(args): "Run all regression tests." # Check command-line arguments TODO: Use argparse only_auto = "--only-auto" in args use_auto = "--skip-auto" not in args use_uflacs = "--skip-uflacs" not in args use_quad = "--skip-quad" not in args use_tsfc = "--use-tsfc" in args use_ext_quad = "--ext-quad" in args use_ext_uflacs = "--ext-uflacs" in args skip_download = "--skip-download" in args skip_run = "--skip-run" in args skip_code_diff = "--skip-code-diff" in args skip_validate = "--skip-validate" in args bench = "--bench" in args debug = "--debug" in args verbose = ("--verbose" in args) or debug # debug implies verbose permissive = "--permissive" in args or bench tolerant = "--tolerant" in args print_timing = "--print-timing" in args show_help = "--help" in args flags = ( "--only-auto", "--skip-auto", "--skip-uflacs", "--skip-quad", "--use-tsfc", "--ext-quad", "--skip-download", "--skip-run", "--skip-code-diff", "--skip-validate", "--bench", "--debug", "--verbose", "--permissive", "--tolerant", "--print-timing", "--help", ) args = [arg for arg in args if arg not in flags] # Hack: add back --verbose for ffc.main to see if verbose: args = args + ["--verbose"] if show_help: info("Valid arguments:\n" + "\n".join(flags)) return 0 if bench or not skip_validate: skip_run = False if bench: skip_code_diff = True skip_validate = True if use_ext_quad or use_ext_uflacs: skip_code_diff = True # Extract .ufl names from args only_forms = set([arg for arg in args if arg.endswith(".ufl")]) args = [arg for arg in args if arg not in only_forms] # Download reference data if skip_download: info_blue("Skipping reference data download") else: try: cmd = "./scripts/download" output = as_native_str(subprocess.check_output(cmd, shell=True)) print(output) info_green("Download reference data ok") except subprocess.CalledProcessError as e: print(e.output) info_red("Download reference data failed") if tolerant: global output_tolerance output_tolerance = 1e-3 # Clean out old output directory output_directory = "output" clean_output(output_directory) os.chdir(output_directory) # Adjust which test cases (combinations of compile arguments) to run here test_cases = [] if only_auto: test_cases += ["-r auto"] else: if use_auto: test_cases += ["-r auto"] if use_uflacs: test_cases += ["-r uflacs -O0", "-r uflacs -O"] if use_quad: test_cases += ["-r quadrature -O0", "-r quadrature -O"] import warnings from ffc.quadrature.deprecation import QuadratureRepresentationDeprecationWarning warnings.simplefilter("once", QuadratureRepresentationDeprecationWarning) if use_tsfc: test_cases += ["-r tsfc -O0", "-r tsfc -O"] # Silence good-performance messages by COFFEE import coffee coffee.set_log_level(coffee.logger.PERF_WARN) if 
use_ext_quad: test_cases += ext_quad if use_ext_uflacs: test_cases += ext_uflacs test_case_timings = {} fails = OrderedDict() for argument in test_cases: test_case_timings[argument] = time.time() fails[argument] = OrderedDict() begin("Running regression tests with %s" % argument) # Clear and enter output sub-directory sub_directory = "_".join(argument.split(" ")).replace("-", "") clean_output(sub_directory) os.chdir(sub_directory) # Workarounds for feature lack in representation if "quadrature" in argument and not only_forms: skip_forms = known_quad_failures info_blue("Skipping forms known to fail with quadrature:\n" + "\n".join(sorted(skip_forms))) elif "uflacs" in argument and not only_forms: skip_forms = known_uflacs_failures info_blue("Skipping forms known to fail with uflacs:\n" + "\n".join(sorted(skip_forms))) elif "tsfc" in argument and not only_forms: skip_forms = known_tsfc_failures info_blue("Skipping forms known to fail with tsfc:\n" + "\n".join(sorted(skip_forms))) else: skip_forms = set() # Generate test cases generate_test_cases(bench, only_forms, skip_forms) # Generate code failures = generate_code(args + argument.split(), only_forms, skip_forms, debug) if failures: fails[argument]["generate_code"] = failures # Location of reference directories reference_directory = os.path.abspath("../../ffc-reference-data/") code_reference_dir = os.path.join(reference_directory, sub_directory) # Note: We use the r_auto references for all test cases. This # ensures that we continously test that the codes generated by # all different representations are equivalent. output_reference_dir = os.path.join(reference_directory, "r_auto") # Validate code by comparing to code generated with this set # of compiler parameters if skip_code_diff: info_blue("Skipping code diff validation") else: failures = validate_code(code_reference_dir) if failures: fails[argument]["validate_code"] = failures # Build and run programs and validate output to common # reference if skip_run: info_blue("Skipping program execution") else: failures = build_programs(bench, permissive, debug, verbose) if failures: fails[argument]["build_programs"] = failures failures = run_programs(bench, debug, verbose) if failures: fails[argument]["run_programs"] = failures # Validate output to common reference results if skip_validate: info_blue("Skipping program output validation") else: failures = validate_programs(output_reference_dir) if failures: fails[argument]["validate_programs"] = failures # Go back up os.chdir(os.path.pardir) end() test_case_timings[argument] = time.time() - test_case_timings[argument] # Go back up os.chdir(os.path.pardir) # Print results if print_timing: info_green("Timing of all commands executed:") timings = '\n'.join("%10.2e s %s" % (t, name) for (name, t) in _command_timings) info_blue(timings) for argument in test_cases: info_blue("Total time for %s: %.1f s" % (argument, test_case_timings[argument])) num_failures = sum(len(failures_phase) for failures_args in fails.values() for failures_phase in failures_args.values()) if num_failures == 0: info_green("Regression tests OK") return 0 else: info_red("Regression tests failed") info_red("") info_red("Long summary:") for argument in test_cases: if not fails[argument]: info_green(" No failures with args '%s'" % argument) else: info_red(" Failures with args '%s':" % argument) for phase, failures in fails[argument].items(): info_red(" %d failures in %s:" % (len(failures), phase)) for f in failures: info_red(" %s" % (f,)) info_red("") info_red("Short summary:") 
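        # Aggregate failure counts per phase (generate_code, validate_code,
        # build_programs, run_programs, validate_programs) across all tested
        # argument combinations.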
phase_fails = defaultdict(int) for argument in test_cases: if not fails[argument]: info_green(" No failures with args '%s'" % argument) else: info_red(" Number of failures with args '%s':" % argument) for phase, failures in fails[argument].items(): info_red(" %d failures in %s." % (len(failures), phase)) phase_fails[phase] += len(failures) info_red("") info_red("Total failures for all args:") for phase, count in phase_fails.items(): info_red(" %s: %d failed" % (phase, count)) info_red("") info_red("Error messages stored in %s" % logfile) return 1 if __name__ == "__main__": sys.exit(main(sys.argv[1:])) ffc-2019.1.0.post0/test/regression/ufctest.h000066400000000000000000000674601346026536000205550ustar00rootroot00000000000000// Copyright (C) 2010-2015 Anders Logg // // This file is part of FFC. // // FFC is free software: you can redistribute it and/or modify // it under the terms of the GNU Lesser General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // FFC is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU Lesser General Public License for more details. // // You should have received a copy of the GNU Lesser General Public License // along with FFC. If not, see . // // Functions for calling generated UFC functions with "random" (but // fixed) data and print the output to screen. Useful for running // regression tests. #include #include #include #include #include #include #include #include #include "printer.h" // How many derivatives to test const std::size_t max_derivative = 2; // Parameters for adaptive timing const std::size_t initial_num_reps = 10; const double minimum_timing = 1.0; // Function for timing double time() { clock_t __toc_time = std::clock(); return ((double) (__toc_time)) / CLOCKS_PER_SEC; } // Function for creating "random" vertex coordinates std::vector test_coordinate_dofs(std::size_t gdim, std::size_t gdeg) { // Generate some "random" coordinates std::vector coordinate_dofs; if (gdim == 1) { coordinate_dofs.resize(2); coordinate_dofs[0] = 0.903; coordinate_dofs[1] = 0.561; // Huh? Only 2 vertices for interval, is this for tdim=1,gdim=2? // coordinate_dofs[2] = 0.987; // coordinate_dofs[3] = 0.123; if (gdeg > 1) { assert(gdeg == 2); coordinate_dofs.resize(3); coordinate_dofs[2] = 0.750; } } else if (gdim == 2) { coordinate_dofs.resize(6); coordinate_dofs[0] = 0.903; coordinate_dofs[1] = 0.341; coordinate_dofs[2] = 0.561; coordinate_dofs[3] = 0.767; coordinate_dofs[4] = 0.987; coordinate_dofs[5] = 0.783; // Huh? Only 4 vertices for triangle, is this for quads? 
// coordinate_dofs[6] = 0.123; // coordinate_dofs[7] = 0.561; if (gdeg > 1) { assert(gdeg == 2); coordinate_dofs.resize(12); coordinate_dofs[6] = 0.750; coordinate_dofs[7] = 0.901; coordinate_dofs[8] = 0.999; coordinate_dofs[9] = 0.500; coordinate_dofs[10] = 0.659; coordinate_dofs[11] = 0.555; } } else if (gdim == 3) { coordinate_dofs.resize(12); coordinate_dofs[0] = 0.903; coordinate_dofs[1] = 0.341; coordinate_dofs[2] = 0.457; coordinate_dofs[3] = 0.561; coordinate_dofs[4] = 0.767; coordinate_dofs[5] = 0.833; coordinate_dofs[6] = 0.987; coordinate_dofs[7] = 0.783; coordinate_dofs[8] = 0.191; coordinate_dofs[9] = 0.123; coordinate_dofs[10] = 0.561; coordinate_dofs[11] = 0.667; if (gdeg > 1) { assert(gdeg == 2); coordinate_dofs.resize(30); // FIXME: Add some quadratic tetrahedron assert(false); } } return coordinate_dofs; } // Function for creating "random" vertex coordinates std::pair, std::vector> test_coordinate_dof_pair(int gdim, int gdeg, int facet0, int facet1) { // For affine simplices only so far... assert(gdeg == 1); // Return pair of cell coordinates there facet0 of cell 0 cooresponds to facet1 of cell 1 int num_vertices = gdim+1; std::vector c0 = test_coordinate_dofs(gdim, gdeg); std::vector c1(c0.size()); std::vector m(gdim); for (int i=0; icell_shape = cell_shape; // Store dimensions switch (cell_shape) { case ufc::shape::interval: topological_dimension = 1; if (gdim == 0) geometric_dimension = 1; else geometric_dimension = gdim; break; case ufc::shape::triangle: topological_dimension = 2; if (gdim == 0) geometric_dimension = 2; else geometric_dimension = gdim; break; case ufc::shape::tetrahedron: topological_dimension = 3; if (gdim == 0) geometric_dimension = 3; else geometric_dimension = gdim; break; default: throw std::runtime_error("Unhandled cell shape."); } // Set orientation (random, but must be set) this->orientation = 1; // Generate some "random" entity indices entity_indices.resize(topological_dimension+1); for (std::size_t i = 0; i <= topological_dimension; i++) { entity_indices[i].resize(12); // hack: sufficient for any cell entities list, 12 edges of hex for (std::size_t j = 0; j < entity_indices[i].size(); j++) entity_indices[i][j] = i*j + offset; } } ~test_cell() {} }; // Class for creating a "random" ufc::function object class test_function : public ufc::function { public: test_function(std::size_t value_size) : value_size(value_size) {} void evaluate(double* values, const double* coordinates, const ufc::cell& c) const { for (std::size_t i = 0; i < value_size; i++) { values[i] = 1.0; for (std::size_t j = 0; j < c.geometric_dimension; j++) values[i] *= static_cast(i + 1)*coordinates[j]; } } private: std::size_t value_size; }; std::string format_name(std::string name, int i=-1, int j=-1) { std::stringstream s; s << name; if (i >= 0) s << "_" << i; if (j >= 0) s << "_" << j; return s.str(); } // Function for testing ufc::element objects void test_finite_element(ufc::finite_element& element, int id, Printer& printer) { printer.begin("finite_element", id); // Prepare arguments test_cell c(element.cell_shape(), element.geometric_dimension()); // NOTE: Assuming geometry degree 1 const std::vector coordinate_dofs = test_coordinate_dofs(element.geometric_dimension(), 1); std::size_t value_size = 1; for (std::size_t i = 0; i < element.value_rank(); i++) value_size *= element.value_dimension(i); std::size_t derivative_size = 1; for (std::size_t i = 0; i < max_derivative; i++) derivative_size *= c.geometric_dimension; std::vector 
values(element.space_dimension()*value_size*derivative_size, 0.0); std::vector dof_values(element.space_dimension(), 0.0); std::vector vertex_values((c.topological_dimension + 1)*value_size, 0.0); std::vector coordinates(c.geometric_dimension); for (std::size_t i = 0; i < c.geometric_dimension; i++) coordinates[i] = 0.1*static_cast(i); test_function f(value_size); // signature //printer.print_scalar("signature", element.signature()); // cell_shape printer.print_scalar("cell_shape", static_cast(element.cell_shape())); // space_dimension printer.print_scalar("space_dimension", element.space_dimension()); // value_rank printer.print_scalar("value_rank", element.value_rank()); // value_dimension for (std::size_t i = 0; i < element.value_rank(); i++) printer.print_scalar("value_dimension", element.value_dimension(i), i); // evaluate_basis for (std::size_t i = 0; i < element.space_dimension(); i++) { element.evaluate_basis(i, values.data(), coordinates.data(), coordinate_dofs.data(), 1); printer.print_array("evaluate_basis:", value_size, values.data(), i); } // evaluate_basis all element.evaluate_basis_all(values.data(), coordinates.data(), coordinate_dofs.data(), 1); printer.print_vector("evaluate_basis_all", values); // evaluate_basis_derivatives for (std::size_t i = 0; i < element.space_dimension(); i++) { for (std::size_t n = 0; n <= max_derivative; n++) { std::size_t num_derivatives = 1; for (std::size_t j = 0; j < n; j++) num_derivatives *= c.geometric_dimension; element.evaluate_basis_derivatives(i, n, values.data(), coordinates.data(), coordinate_dofs.data(), 1); printer.print_array("evaluate_basis_derivatives", value_size*num_derivatives, values.data(), i, n); } } // evaluate_basis_derivatives_all for (std::size_t n = 0; n <= max_derivative; n++) { std::size_t num_derivatives = 1; for (std::size_t j = 0; j < n; j++) num_derivatives *= c.geometric_dimension; element.evaluate_basis_derivatives_all(n, values.data(), coordinates.data(), coordinate_dofs.data(), 1); printer.print_array("evaluate_basis_derivatives_all", element.space_dimension()*value_size*num_derivatives, values.data(), n); } // evaluate_dof for (std::size_t i = 0; i < element.space_dimension(); i++) { dof_values[i] = element.evaluate_dof(i, f, coordinate_dofs.data(), 1, c); printer.print_scalar("evaluate_dof", dof_values[i], i); } // evaluate_dofs element.evaluate_dofs(values.data(), f, coordinate_dofs.data(), 1, c); printer.print_array("evaluate_dofs", element.space_dimension(), values.data()); // interpolate_vertex_values element.interpolate_vertex_values(vertex_values.data(), dof_values.data(), coordinate_dofs.data(), 1); printer.print_vector("interpolate_vertex_values", vertex_values); // tabulate_coordinates std::vector dof_coordinates(element.geometric_dimension()*element.space_dimension()); element.tabulate_dof_coordinates(dof_coordinates.data(), coordinate_dofs.data()); printer.print_vector("tabulate_dof_coordinates", dof_coordinates); // num_sub_dof_elements printer.print_scalar("num_sub_elements", element.num_sub_elements()); // create_sub_element for (std::size_t i = 0; i < element.num_sub_elements(); i++) { std::unique_ptr sub_element(element.create_sub_element(i)); test_finite_element(*sub_element, i, printer); } printer.end(); } // Function for testing ufc::element objects void test_dofmap(ufc::dofmap& dofmap, ufc::shape cell_shape, int id, Printer& printer) { printer.begin("dofmap", id); // Prepare arguments std::vector num_entities(4); num_entities[0] = 10001; num_entities[1] = 10002; num_entities[2] = 10003; 
num_entities[3] = 10004; //test_cell c(cell_shape, dofmap.geometric_dimension()); test_cell c(cell_shape, dofmap.topological_dimension()); std::size_t n = dofmap.num_element_dofs(); std::vector dofs(n, 0); std::size_t num_facets = c.topological_dimension + 1; std::vector coordinates(n*c.geometric_dimension); // needs_mesh_entities for (std::size_t d = 0; d <= c.topological_dimension; d++) { printer.print_scalar("needs_mesh_entities", dofmap.needs_mesh_entities(d), d); } // global_dimension printer.print_scalar("global_dimension", dofmap.global_dimension(num_entities)); // num_element_dofs printer.print_scalar("num_element_dofs", dofmap.num_element_dofs()); // num_facet_dofs printer.print_scalar("num_facet_dofs", dofmap.num_facet_dofs()); // num_entity_dofs for (std::size_t d = 0; d <= c.topological_dimension; d++) printer.print_scalar("num_entity_dofs", dofmap.num_entity_dofs(d), d); // tabulate_dofs dofmap.tabulate_dofs(dofs.data(), num_entities, c.entity_indices); printer.print_array("tabulate_dofs", dofmap.num_element_dofs(), dofs.data()); // tabulate_facet_dofs for (std::size_t facet = 0; facet < num_facets; facet++) { dofmap.tabulate_facet_dofs(dofs.data(), facet); printer.print_array("tabulate_facet_dofs", dofmap.num_facet_dofs(), dofs.data(), facet); } // tabulate_entity_dofs for (std::size_t d = 0; d <= c.topological_dimension; d++) { std::size_t num_entities[4][4] = {{0, 0, 0, 0}, // dummy entities in 0D {2, 1, 0, 0}, // interval {3, 3, 1, 0}, // triangle {4, 6, 4, 1}}; // tetrahedron for (std::size_t i = 0; i < num_entities[c.topological_dimension][d]; i++) { dofmap.tabulate_entity_dofs(dofs.data(), d, i); printer.print_array("tabulate_entity_dofs", dofmap.num_entity_dofs(d), dofs.data(), d, i); } } // num_sub_dofmaps printer.print_scalar("num_sub_dofmaps", dofmap.num_sub_dofmaps()); // create_sub_dofmap for (std::size_t i = 0; i < dofmap.num_sub_dofmaps(); i++) { std::unique_ptr sub_dofmap(dofmap.create_sub_dofmap(i)); test_dofmap(*sub_dofmap, cell_shape, i, printer); } printer.end(); } // Function for testing ufc::cell_integral objects void test_cell_integral(ufc::cell_integral& integral, ufc::shape cell_shape, std::size_t gdim, std::size_t gdeg, std::size_t tensor_size, double** w, bool bench, int id, Printer & printer) { printer.begin("cell_integral", id); // Prepare arguments test_cell c(cell_shape, gdim); const std::vector coordinate_dofs = test_coordinate_dofs(gdim, gdeg); std::vector A(tensor_size, 0.0); // Call tabulate_tensor integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), c.orientation); printer.print_vector("tabulate_tensor", A); // Benchmark tabulate tensor if (bench) { printer.begin("timing"); for (std::size_t num_reps = initial_num_reps;; num_reps *= 2) { double t0 = time(); for (std::size_t i = 0; i < num_reps; i++) { integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), c.orientation); } double dt = time() - t0; if (dt > minimum_timing) { dt /= static_cast(num_reps); printer.print_scalar("cell_integral_timing_iterations", num_reps); printer.print_scalar("cell_integral_time", dt); break; } } printer.end(); } printer.end(); } // Function for testing ufc::exterior_facet_integral objects void test_exterior_facet_integral(ufc::exterior_facet_integral& integral, ufc::shape cell_shape, std::size_t gdim, std::size_t gdeg, std::size_t tensor_size, double** w, bool bench, int id, Printer & printer) { printer.begin("exterior_facet_integral", id); // Prepare arguments test_cell c(cell_shape, gdim); const std::vector coordinate_dofs = 
test_coordinate_dofs(gdim, gdeg); std::size_t num_facets = c.topological_dimension + 1; std::vector A(tensor_size); // Call tabulate_tensor for each facet for (std::size_t facet = 0; facet < num_facets; facet++) { for(std::size_t i = 0; i < tensor_size; i++) A[i] = 0.0; integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), facet, c.orientation); printer.print_array("tabulate_tensor", tensor_size, A.data(), facet); } // Benchmark tabulate tensor if (bench) { printer.begin("timing"); for (std::size_t num_reps = initial_num_reps;; num_reps *= 2) { double t0 = time(); for (std::size_t i = 0; i < num_reps; i++) { integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), 0, c.orientation); } double dt = time() - t0; if (dt > minimum_timing) { dt /= static_cast(num_reps); printer.print_scalar("exterior_facet_integral_timing_iterations", num_reps); printer.print_scalar("exterior_facet_integral_time", dt); break; } } printer.end(); } printer.end(); } // Function for testing ufc::interior_facet_integral objects void test_interior_facet_integral(ufc::interior_facet_integral& integral, ufc::shape cell_shape, std::size_t gdim, std::size_t gdeg, std::size_t macro_tensor_size, double** w, bool bench, int id, Printer & printer) { printer.begin("interior_facet_integral", id); // Prepare arguments test_cell c0(cell_shape, gdim); //test_cell c1(cell_shape, gdim); std::size_t num_facets = c0.topological_dimension + 1; std::vector A(macro_tensor_size); // Call tabulate_tensor for each facet-facet combination for (std::size_t facet0 = 0; facet0 < num_facets; facet0++) { for (std::size_t facet1 = 0; facet1 < num_facets; facet1++) { const std::pair, std::vector> coordinate_dofs = test_coordinate_dof_pair(gdim, gdeg, facet0, facet1); for(std::size_t i = 0; i < macro_tensor_size; i++) A[i] = 0.0; integral.tabulate_tensor(A.data(), w, coordinate_dofs.first.data(), coordinate_dofs.second.data(), facet0, facet1, 1, 1); printer.print_array("tabulate_tensor", macro_tensor_size, A.data(), facet0, facet1); } } // Benchmark tabulate tensor if (bench) { const std::pair, std::vector> coordinate_dofs = test_coordinate_dof_pair(gdim, gdeg, 0, 0); printer.begin("timing"); for (std::size_t num_reps = initial_num_reps;; num_reps *= 2) { double t0 = time(); for (std::size_t i = 0; i < num_reps; i++) { integral.tabulate_tensor(A.data(), w, coordinate_dofs.first.data(), coordinate_dofs.second.data(), 0, 0, 1, 1); } double dt = time() - t0; if (dt > minimum_timing) { dt /= static_cast(num_reps); printer.print_scalar("interior_facet_integral_timing_iterations", num_reps); printer.print_scalar("interior_facet_integral_time", dt); break; } } printer.end(); } printer.end(); } // Function for testing ufc::vertex_integral objects void test_vertex_integral(ufc::vertex_integral& integral, ufc::shape cell_shape, std::size_t gdim, std::size_t gdeg, std::size_t tensor_size, double** w, bool bench, int id, Printer & printer) { printer.begin("vertex_integral", id); // Prepare arguments test_cell c(cell_shape, gdim); const std::vector coordinate_dofs = test_coordinate_dofs(gdim, gdeg); std::size_t num_vertices = c.topological_dimension + 1; std::vector A(tensor_size); // Call tabulate_tensor for each vertex for (std::size_t vertex = 0; vertex < num_vertices; vertex++) { for(std::size_t i = 0; i < tensor_size; i++) A[i] = 0.0; integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), vertex, c.orientation); printer.print_array("tabulate_tensor", tensor_size, A.data(), vertex); } // Benchmark tabulate tensor if (bench) { 
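    // Adaptive timing: keep doubling num_reps (starting at initial_num_reps)
    // until the total elapsed time exceeds minimum_timing, then report the
    // average time per tabulate_tensor call.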
printer.begin("timing"); for (std::size_t num_reps = initial_num_reps;; num_reps *= 2) { double t0 = time(); for (std::size_t i = 0; i < num_reps; i++) { integral.tabulate_tensor(A.data(), w, coordinate_dofs.data(), 0, c.orientation); } double dt = time() - t0; if (dt > minimum_timing) { dt /= static_cast(num_reps); printer.print_scalar("vertex_integral_timing_iterations", num_reps); printer.print_scalar("vertex_integral_time", dt); break; } } printer.end(); } printer.end(); } // Function for testing ufc::form objects void test_form(ufc::form& form, bool bench, int id, Printer & printer) { printer.begin("form", id); // Compute size of tensors int tensor_size = 1; int macro_tensor_size = 1; for (std::size_t i = 0; i < form.rank(); i++) { std::unique_ptr element(form.create_finite_element(i)); tensor_size *= element->space_dimension(); // *2 for interior facet integrals macro_tensor_size *= 2*element->space_dimension(); } // Prepare dummy coefficients double** w = 0; if (form.num_coefficients() > 0) { w = new double * [form.num_coefficients()]; for (std::size_t i = 0; i < form.num_coefficients(); i++) { std::unique_ptr element(form.create_finite_element(form.rank() + i)); // *2 for interior facet integrals const std::size_t macro_dim = 2*element->space_dimension(); w[i] = new double[macro_dim]; for (std::size_t j = 0; j < macro_dim; j++) w[i][j] = 0.1*static_cast((i + 1)*(j + 1)); } } // Get cell shape std::unique_ptr element(form.create_coordinate_finite_element()); ufc::shape cell_shape = element->cell_shape(); std::size_t gdim = element->geometric_dimension(); std::size_t gdeg = element->degree(); assert(element->value_rank() == 1); assert(element->value_dimension(0) == gdim); assert(element->family() == std::string("Lagrange")); element.reset(); // signature //printer.print_scalar("signature", form.signature()); // rank printer.print_scalar("rank", form.rank()); // num_coefficients printer.print_scalar("num_coefficients", form.num_coefficients()); // has_cell_integrals printer.print_scalar("has_cell_integrals", form.has_cell_integrals()); // has_exterior_facet_integrals printer.print_scalar("has_exterior_facet_integrals", form.has_exterior_facet_integrals()); // has_interior_facet_integrals printer.print_scalar("has_interior_facet_integrals", form.has_interior_facet_integrals()); // has_vertex_integrals printer.print_scalar("has_vertex_integrals", form.has_vertex_integrals()); // max_cell_subdomain_id printer.print_scalar("max_cell_subdomain_id", form.max_cell_subdomain_id()); // max_exterior_facet_subdomain_id printer.print_scalar("max_exterior_facet_subdomain_id", form.max_exterior_facet_subdomain_id()); // max_interior_facet_subdomain_id printer.print_scalar("max_interior_facet_subdomain_id", form.max_interior_facet_subdomain_id()); // max_vertex_subdomain_id printer.print_scalar("max_vertex_subdomain_id", form.max_vertex_subdomain_id()); // create_finite_element for (std::size_t i = 0; i < form.rank() + form.num_coefficients(); i++) { std::unique_ptr element(form.create_finite_element(i)); test_finite_element(*element, i, printer); } // create_dofmap for (std::size_t i = 0; i < form.rank() + form.num_coefficients(); i++) { std::unique_ptr dofmap(form.create_dofmap(i)); test_dofmap(*dofmap, cell_shape, i, printer); } // create_cell_integral { std::unique_ptr integral(form.create_default_cell_integral()); printer.print_scalar("default_cell_integral", (bool)integral); if (integral) { test_cell_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, -1, printer); } } for 
(std::size_t i = 0; i < form.max_cell_subdomain_id(); i++) { std::unique_ptr integral(form.create_cell_integral(i)); if (integral) { test_cell_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, i, printer); } } // create_exterior_facet_integral { std::unique_ptr integral(form.create_default_exterior_facet_integral()); printer.print_scalar("default_exterior_facet_integral", (bool)integral); if (integral) { test_exterior_facet_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, -1, printer); } } for (std::size_t i = 0; i < form.max_exterior_facet_subdomain_id(); i++) { std::unique_ptr integral(form.create_exterior_facet_integral(i)); if (integral) { test_exterior_facet_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, i, printer); } } // create_interior_facet_integral { std::unique_ptr integral(form.create_default_interior_facet_integral()); printer.print_scalar("default_interior_facet_integral", (bool)integral); if (integral) { test_interior_facet_integral(*integral, cell_shape, gdim, gdeg, macro_tensor_size, w, bench, -1, printer); } } for (std::size_t i = 0; i < form.max_interior_facet_subdomain_id(); i++) { std::unique_ptr integral(form.create_interior_facet_integral(i)); if (integral) { test_interior_facet_integral(*integral, cell_shape, gdim, gdeg, macro_tensor_size, w, bench, i, printer); } } // create_vertex_integral { std::unique_ptr integral(form.create_default_vertex_integral()); printer.print_scalar("default_vertex_integral", (bool)integral); if (integral) { test_vertex_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, -1, printer); } } for (std::size_t i = 0; i < form.max_vertex_subdomain_id(); i++) { std::unique_ptr integral(form.create_vertex_integral(i)); if (integral) { test_vertex_integral(*integral, cell_shape, gdim, gdeg, tensor_size, w, bench, i, printer); } } // Cleanup for (std::size_t i = 0; i < form.num_coefficients(); i++) delete [] w[i]; delete [] w; printer.end(); } ffc-2019.1.0.post0/test/regression/ufctest.py000066400000000000000000000042421346026536000207430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010-2013 Anders Logg, Kristian B. Oelgaard and Marie E. Rognes # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Martin Sandve Alnæs, 2013-2017 _test_code = """\ #include "../../ufctest.h" #include "{prefix}.h" #include int main(int argc, char * argv[]) {{ const char jsonfilename[] = "{prefix}.json"; std::ofstream jsonfile(jsonfilename); Printer printer(jsonfile); printer.begin(); {benchline} {tests} printer.end(); return 0; }} """ def generate_test_code(header_file): "Generate test code for given header file." 
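    # Illustration (hypothetical file names, not part of the test suite): for a
    # generated header "Poisson.h" containing classes poisson_form_0 and
    # poisson_form_1, this writes a "Poisson.cpp" driver whose main() runs
    #   poisson_form_0 f0; test_form(f0, bench, 0, printer);
    #   poisson_form_1 f1; test_form(f1, bench, 1, printer);
    # via the _test_code template above, and the Printer writes "Poisson.json".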
    # Count the number of forms and elements
    prefix = header_file.split(".h")[0]
    generated_code = open(header_file).read()
    num_forms = generated_code.count("class %s_form_" % prefix.lower())
    num_elements = generated_code.count("class %s_finite_element_" % prefix.lower())

    # Generate tests, either based on forms or elements
    if num_forms > 0:
        benchline = " bool bench = (argc > 1) && argv[1][0] == 'b';\n"
        tests = [' {prefix}_form_{i} f{i}; test_form(f{i}, bench, {i}, printer);'.format(prefix=prefix.lower(), i=i)
                 for i in range(num_forms)]
    else:
        benchline = ""
        tests = [' {prefix}_finite_element_{i} e{i}; test_finite_element(e{i}, {i}, printer);'.format(prefix=prefix.lower(), i=i)
                 for i in range(num_elements)]

    # Write file
    test_file = open(prefix + ".cpp", "w")
    test_file.write(_test_code.format(prefix=prefix, benchline=benchline, tests="\n".join(tests)))
    test_file.close()
ffc-2019.1.0.post0/test/test.py000066400000000000000000000030321346026536000160610ustar00rootroot00000000000000
# -*- coding: utf-8 -*-
"""Run all tests, including unit tests and regression tests"""

# Copyright (C) 2007 Anders Logg
#
# This file is part of FFC.
#
# FFC is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# FFC is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with FFC. If not, see <http://www.gnu.org/licenses/>.
#
# First added:  2007-06-09
# Last changed: 2014-05-15

import os, re, sys

# Name of log file
pwd = os.path.dirname(os.path.abspath(__file__))
logfile = os.path.join(pwd, "test.log")
os.system("rm -f %s" % logfile)

# Tests to run
tests = ["unit", "regression"]

# Run tests
failed = []
for test in tests:
    print("Running tests: %s" % test)
    print("----------------------------------------------------------------------")
    os.chdir(os.path.join(pwd, test))
    #failure = os.system(sys.executable + " test.py | tee -a %s" % logfile)
    failure = os.system(sys.executable + " test.py")
    if failure:
        print("Test FAILED")
        failed.append(test)
    print("")

print("To view the test log, use the following command: less -R test.log")

sys.exit(len(failed))
ffc-2019.1.0.post0/test/uflacs/000077500000000000000000000000001346026536000160075ustar00rootroot00000000000000
ffc-2019.1.0.post0/test/uflacs/README.md000066400000000000000000000006421346026536000172700ustar00rootroot00000000000000
# Test structure for UFLACS

- unit/          unit tests of internal components of UFLACS
- crosslanguage/ unit tests which produce C++ tests of generated code
                 which is then executed by Google Test
- system/        tests that use external software with uflacs,
                 in particular integration with DOLFIN

## Running examples

    cd test/
    py.test
    py.test unit
    py.test system
    py.test crosslanguage
ffc-2019.1.0.post0/test/uflacs/crosslanguage/000077500000000000000000000000001346026536000206445ustar00rootroot00000000000000
ffc-2019.1.0.post0/test/uflacs/crosslanguage/Makefile000066400000000000000000000021671346026536000223120ustar00rootroot00000000000000

#
# Build C++ tests generated by python tests, configured to get access
# to gtest and ufc.
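#
# Typical invocation (the paths below are just the defaults defined further
# down and can be overridden on the command line):
#
#   make UFC_INCLUDE_DIR=../../../ffc/backends/ufc GTEST_DIR=../../../libs/gtest-1.7.0
#   make run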
#
# These are the commands we assume
CXX?=c++
RM?=rm

# Local filenames
TESTBINARY=run_gtest
TESTSRCDIR=generated
TESTSUPPORTDIR=cppsupport
TESTMAINSRC=${TESTSRCDIR}/main.cpp

# Configure ufc
UFC_INCLUDE_DIR?=../../../ffc/backends/ufc
UFCINCDIR=-I${UFC_INCLUDE_DIR}

# Configure gtest
GTEST_DIR?=../../../libs/gtest-1.7.0
GTESTINCDIR=-I${GTEST_DIR}/include
GTESTLIBS=-L${GTEST_DIR}/lib -lgtest -lpthread

# Flags set for easy debugging
CXXFLAGS?=-g -O0 -std=c++11 -Wall

.PHONY: clean
.SECONDARY:

default: ${TESTBINARY}

run: ${TESTBINARY}
	./${TESTBINARY}

# Also depends on ${TESTSRCDIR}/*.h but this rule seems to be
# sufficient because the test binary is touched every time the python
# tests run
${TESTBINARY}: ${TESTMAINSRC} ${TESTSUPPORTDIR}/*.h
	${CXX} ${CXXFLAGS} -o ${TESTBINARY} \
	  ${TESTMAINSRC} -I${TESTSRCDIR} -I${TESTSUPPORTDIR} \
	  ${UFCINCDIR} ${GTESTINCDIR} ${GTESTLIBS}

clean:
	${RM} -f `find -name \*.pyc`
	${RM} -f `find -name \*.py~`
	${RM} -rf ${TESTSRCDIR}
	${RM} -rf __pycache__
	${RM} -f *.log
	${RM} -f ${TESTBINARY}
ffc-2019.1.0.post0/test/uflacs/crosslanguage/README000066400000000000000000000001341346026536000215220ustar00rootroot00000000000000
Python tests wrapping generated C++ code in C++ unit tests which is then
tested with gtest.
ffc-2019.1.0.post0/test/uflacs/crosslanguage/__init__.py000066400000000000000000000000001346026536000227430ustar00rootroot00000000000000
ffc-2019.1.0.post0/test/uflacs/crosslanguage/conftest.py000066400000000000000000000170701346026536000230500ustar00rootroot00000000000000
# -*- coding: utf-8 -*-
"""This file contains code for setting up C++ unit tests with gtest
from within Python tests using py.test.
"""

import pytest

import os
import inspect
from collections import defaultdict
import subprocess

from ffc.backends.ufc import get_include_path

# TODO: For a generic framework, this needs to change somewhat:
_supportcode = '''
#include
#include
#include "mock_cells.h"
//#include "debugging.h"
'''

_gtest_runner_template = """
#include
{supportcode}
{testincludes}

int main(int argc, char **argv)
{{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}}
"""

_gtest_template = """
/** Autogenerated from Python test {pytestname}
    at {pyfilename}:{pylineno} */
TEST ({suite}, {case})
{{
{body}
}}
"""

def find_parent_test_function():
    """Return (filename, lineno, functionname) of the first function
    named "test_*" found on the stack."""
    # Get current frame
    frame = inspect.currentframe()
    info = inspect.getframeinfo(frame)

    # Jump back frames until we're in a 'test_*' function
    while not info[2].startswith("test_"):
        frame = frame.f_back
        info = inspect.getframeinfo(frame)

    # Get info from frame
    filename = info[0]
    lineno = info[1]
    function = info[2]
    assert len(info) == 5
    assert function.startswith("test_")
    return filename, lineno, function

class GTestContext:
    _all = []

    def __init__(self, config):
        self._basedir = os.path.dirname(__file__)
        self._gendir = os.path.join(self._basedir, "generated")
        self._binary_filename = os.path.join(self._basedir, "run_gtest")
        self._gtest_log = os.path.join(self._basedir, "gtest.log")
        self._code = defaultdict(list)
        self._dirlist = []
        GTestContext._all.append(self)

    def info(self, msg):
        pre = "In gtest generation:"
        if '\n' in msg:
            print(pre)
            print(msg)
        else:
            print(pre, msg)

    def add(self, body):
        # Look through stack to find frame with test_...
function name filename, lineno, function = find_parent_test_function() # Using function testcase class name as suite and function # name as case basename = os.path.basename(filename) suite = basename.replace(".py", "") case = function # Use a header filename matching the python filename hfilename = os.path.join(self._gendir, basename.replace(".py", ".h")) # Get python filename relative to test directory pyfilename = os.path.relpath(filename, self._basedir) # Format as a test in target framework and store code = _gtest_template.format(pyfilename=pyfilename, pylineno=lineno, pytestname=function, suite=suite, case=case, body=body) self._code[hfilename].append(code) def write(self): # Make sure we have the directory for generated code if not os.path.isdir(self._gendir): os.mkdir(self._gendir) # Write test code collected during py.test run to files headers = [] header_basenames = [] for hfilename in sorted(self._code): tests = self._code[hfilename] code = '\n'.join(tests) with open(hfilename, "w") as f: f.write(code) headers.append(hfilename) # Collect headers in include list for runner code self._test_header_names = [os.path.split(h)[-1] for h in headers] testincludes = '\n'.join('#include "{0}"'.format(h) for h in self._test_header_names) # Write test runner code to file runner_code = _gtest_runner_template.format(supportcode=_supportcode, testincludes=testincludes) self._main_filename = os.path.join(self._gendir, "main.cpp") with open(self._main_filename, "w") as f: f.write(runner_code) def build_gtest(self): "Build gtest library" # Source and build directories gtest_dir = "../../../libs/gtest-1.7.0" gtest_dir = os.path.abspath(os.path.join(self._basedir, gtest_dir)) build_dir = os.path.join(gtest_dir, "lib") # Check if GTest source can be found if os.path.isdir(gtest_dir): # Make build directory, if required if not os.path.isdir(build_dir): os.mkdir(build_dir) # Configure gtest using cmake error = subprocess.call("cmake ..", cwd=build_dir, shell=True) if error: raise RuntimeError("Could not call CMake successfully to build gtest") # Build gtest library err = subprocess.call("make", cwd=build_dir, shell=True) if error: raise RuntimeError("Could not call make successfully to build gtest") else: raise RuntimeError("Cannot find gtest source") def build(self): # Prepare command UFC_INCLUDE_DIR = get_include_path() if not os.path.exists(os.path.join(UFC_INCLUDE_DIR, "ufc.h")): import IPython; IPython.embed() raise RuntimeError("Cannot find ufc.h in provided include path: %s" % (UFC_INCLUDE_DIR,)) UFC_CXX_FLAGS = " ".join(["-g", "-O0", "-std=c++11", "-Wall"]) cmd = ['make', 'UFC_INCLUDE_DIR="%s"' % UFC_INCLUDE_DIR, 'CXXFLAGS="%s"' % UFC_CXX_FLAGS] self.info("Running command: " + " ".join(cmd)) # Execute try: out = subprocess.check_output(cmd, cwd=self._basedir, shell=True, universal_newlines=True) self.info(out) self.info("Building ok.") except subprocess.CalledProcessError as e: self.info("Building '{0}' FAILED (code {1}, headers: {2})".format(self._binary_filename, e.returncode, self._test_header_names)) self.info("Build output:") self.info(e.output) pytest.fail() def run(self): try: out = subprocess.check_output(self._binary_filename, cwd=self._basedir, shell=True, universal_newlines=True) self.info("Gtest running ok!") with open(self._gtest_log, "w") as f: f.write(out) self.info(out) except subprocess.CalledProcessError as e: self.info("Gtest running FAILED with code {0}!".format(e.returncode)) with open(self._gtest_log, "w") as f: f.write(e.output) self.info(e.output) pytest.fail() def 
finalize(self): # Build gtest library itself self.build_gtest() # Write collected test code to files self.write() # Build test code self.build() # Run compiled tests self.run() @pytest.fixture("session") def gtest(): "create initial files for gtest generation" config = None gtc = GTestContext(config) return gtc def gtest_sessionfinish(session): session.trace("finalizing gtest contexts") while GTestContext._all: gtc = GTestContext._all.pop() gtc.finalize() session.trace("done finalizing gtest contexts") def pytest_sessionfinish(session): gtest_sessionfinish(session) ffc-2019.1.0.post0/test/uflacs/crosslanguage/cppsupport/000077500000000000000000000000001346026536000230635ustar00rootroot00000000000000ffc-2019.1.0.post0/test/uflacs/crosslanguage/cppsupport/mock_cells.h000066400000000000000000000172641346026536000253610ustar00rootroot00000000000000// This is utility code for UFLACS unit tests. // This code is released into the public domain. // // The FEniCS Project (http://www.fenicsproject.org/) 2006-2012. #ifndef __MOCK_CELLS_H #define __MOCK_CELLS_H #include //namespace uflacs //{ struct mock_cell { // These coordinates are all that generated code should care // about, the rest is to support the tests double coordinate_dofs[8*3]; // Worst case hexahedron: 8 vertices in 3d // Dimensions needed for generic cell transformations size_t geometric_dimension; size_t topological_dimension; size_t num_vertices; // Constructor just clears everything to zero mock_cell() { init_dimensions(0,0,0); } // Utility initialization function void init_dimensions(size_t geometric_dimension, size_t topological_dimension, size_t num_vertices) { this->geometric_dimension = geometric_dimension; this->topological_dimension = topological_dimension; this->num_vertices = num_vertices; int n = int(sizeof(coordinate_dofs) / sizeof(coordinate_dofs[0])); for (int i=0; i& values, const Array& x) const { } };""" dolfin_expression_with_constructor1 = """\ class MyExpression: public Expression { public: MyExpression(): Expression(2) {} virtual void eval(Array& values, const Array& x) const { } };""" dolfin_expression_with_constructor2 = """\ class MyExpression: public Expression { public: MyExpression(): Expression(2, 3) {} virtual void eval(Array& values, const Array& x) const { } };""" dolfin_expression_with_members = """\ class MyExpression: public Expression { public: virtual void eval(Array& values, const Array& x) const { } double c; boost::shared_ptr > mf; boost::shared_ptr gf; boost::shared_ptr f; };""" dolfin_expression_with_members2 = """\ class MyExpression: public Expression { public: virtual void eval(Array& values, const Array& x) const { } double c0; double c1; boost::shared_ptr > mf0; boost::shared_ptr > mf1; boost::shared_ptr gf0; boost::shared_ptr gf1; boost::shared_ptr f0; boost::shared_ptr f1; };""" dolfin_expression_with_cell = """\ class MyExpression: public Expression { public: void eval(Array& values, const Array& x, const ufc::cell& ufc_cell) const { // Emulate tabulate_tensor arguments const ufc::cell& c = ufc_cell; Array & A = values; //dolfin_assert(ufc_cell.local_facet >= 0); values[0] = 1.23; } };""" other_stuff = """ // Getting cell data const uint gd = cell.geometric_dimension; const uint td = cell.topological_dimension; const uint cell_index = cell.entity_indices[td][0]; // Tabulation of mesh function values double mf0v = (*mf0)[cell_index]; // Evaluation of functions Array w0v(1); w0->eval(w0v, x, cell); // Writing output values values[0] = w0v[0]*mf0v; values[1] = w0v[0]+mf0v; """ #class 
DolfinExpressionFormatterTest(UflTestCase): #"""Tests of Dolfin C++ Expression formatting utilities.""" def test_dolfin_expression_with_cell(dolfin): e = dolfin.Expression(cppcode=dolfin_expression_with_cell) assert e.ufl_shape == () values = numpy.zeros((1,)) x = numpy.asarray((0.2, 0.2)) mesh = dolfin.UnitSquareMesh(1, 1) cell = dolfin.Cell(mesh, 0) e.eval_cell(values, x, cell) assert values[0] == 1.23 #assert e((1.0,2.0)) == 1.23 # FAILS! Need dolfin to dispatch this to eval_cell with the proper Cell found from x for this to work. def test_dolfin_expression_constructors(): code = format_dolfin_expression(classname="MyExpression", shape=()) assert code == dolfin_expression_minimal code = format_dolfin_expression(classname="MyExpression", shape=(2,)) assert code == dolfin_expression_with_constructor1 code = format_dolfin_expression(classname="MyExpression", shape=(2, 3)) assert code == dolfin_expression_with_constructor2 def test_dolfin_expression_members(): code = format_dolfin_expression(classname="MyExpression", shape=(), constants=('c',), mesh_functions=('mf',), generic_functions=('gf',), functions=('f',), ) assert code == dolfin_expression_with_members code = format_dolfin_expression(classname="MyExpression", shape=(), constants=('c0', 'c1'), mesh_functions=('mf0', 'mf1'), generic_functions=('gf0', 'gf1'), functions=('f0', 'f1'), ) assert code == dolfin_expression_with_members2 def test_explicit_dolfin_expression_compilation(dolfin): code = format_dolfin_expression(classname="MyExpression", shape=(2,), eval_body=['values[0] = x[0];', 'values[1] = x[1];'], constants=('c0', 'c1'), mesh_functions=('mf0', 'mf1'), generic_functions=('gf0', 'gf1'), functions=('f0', 'f1'), ) expr = dolfin.Expression(cppcode=code) assert hasattr(expr, 'c0') assert hasattr(expr, 'c1') assert hasattr(expr, 'mf0') assert hasattr(expr, 'mf1') assert hasattr(expr, 'gf0') assert hasattr(expr, 'gf1') assert hasattr(expr, 'f0') assert hasattr(expr, 'f1') #dolfin.plot(expr, mesh=dolfin.UnitSquareMesh(10,10), interactive=True) def flattened_nonempty_lines(lines): res = [] for l in lines: if isinstance(l, list): res.extend(flattened_nonempty_lines(l)) elif l: res.append(l) else: pass return res #class Ufl2DolfinExpressionCompilerTest(UflTestCase): #"""Tests of Dolfin C++ Expression compilation from UFL expressions.""" def check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={}): parameters = default_parameters() # Compile expression compiled_lines, member_names = compile_dolfin_expression_body(uexpr, parameters) # Check expected compilation output if flattened_nonempty_lines(compiled_lines) != expected_lines: print('\n'*5) print("Upcoming failure, expected lines:") print('\n'.join(expected_lines)) print("\nActual lines:") print('\n'.join(flattened_nonempty_lines(compiled_lines))) print('\n'*5) assert flattened_nonempty_lines(compiled_lines) == expected_lines # Add debugging lines to print scalar expressions at end of Expression eval call if 0: printme = ["x[0]", "values[0]"] compiled_lines += ['std::cout '] + [' << "%s = " << %s << std::endl' % (pm, pm) for pm in printme] + ['<< std::endl;'] # Wrap it in a dolfin::Expression class shape = uexpr.ufl_shape name = "MyExpression_%s" % hashlib.md5(str(hash(uexpr))).hexdigest() code = format_dolfin_expression(classname=name, shape=shape, eval_body=compiled_lines, **member_names) if 0: print("SKIPPING REST OF TEST") return # Try to compile it with DOLFIN import dolfin dexpr = dolfin.Expression(cppcode=code) # Connect compiled dolfin::Expression 
object with dolfin::Function instances for name, value in members.items(): setattr(dexpr, name, value) # Evaluate and assert compiled value! for x, v in expected_values: u = dexpr(x) # TODO: Not sure if this works if not hasattr(u, '__len__'): u = (u,) diff = numpy.array(u) - numpy.array(v) if numpy.dot(diff, diff) > 1e-13: print("u =", u) print("v =", v) print("diff =", diff) assert numpy.dot(diff, diff) < 1e-14 def test_dolfin_expression_compilation_of_scalar_literal(): # Define some literal ufl expression uexpr = as_ufl(3.14) # Define expected output from compilation expected_lines = ['double s[1];', 's[0] = 3.14;', 'values[0] = s[0];'] # Define expected evaluation values: [(x,value), (x,value), ...] expected_values = [((0.0, 0.0), (3.14,)), ((0.6, 0.7), (3.14,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values) def test_dolfin_expression_compilation_of_vector_literal(): # Define some literal ufl expression uexpr = ufl.as_vector((1.23, 7.89)) # Define expected output from compilation expected_lines = ['double s[2];', 's[0] = 1.23;', 's[1] = 7.89;', 'values[0] = s[0];', 'values[1] = s[1];'] # Define expected evaluation values: [(x,value), (x,value), ...] expected_values = [((0.0, 0.0), (1.23, 7.89)), ((0.6, 0.7), (1.23, 7.89)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values) def test_dolfin_expression_compilation_of_x(): # Define some ufl expression x = ufl.SpatialCoordinate(ufl.triangle) uexpr = 2*x # Define expected output from compilation expected_lines = ['double s[2];', 's[0] = 2 * x[0];', 's[1] = 2 * x[1];', 'values[0] = s[0];', 'values[1] = s[1];'] # Define expected evaluation values: [(x,value), (x,value), ...] expected_values = [((0.0, 0.0), (0.0, 0.0)), ((0.6, 0.7), (1.2, 1.4)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values) def test_dolfin_expression_compilation_of_coefficient(dolfin): # Define some ufl expression with PyDOLFIN coefficients mesh = dolfin.UnitSquareMesh(3, 3) # Using quadratic element deliberately for accuracy V = dolfin.FunctionSpace(mesh, "CG", 2) u = dolfin.Function(V) u.interpolate(dolfin.Expression("x[0]*x[1]")) uexpr = 2*u # Define expected output from compilation expected_lines = ['double s[1];', 'Array v_w0(1);', 'w0->eval(v_w0, x);', 's[0] = 2 * v_w0[0];', 'values[0] = s[0];'] # Define expected evaluation values: [(x,value), (x,value), ...] 
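    # (Worked check: w0 interpolates x[0]*x[1], so at (0.6, 0.7) the compiled
    # expression 2*u should evaluate to 2*0.6*0.7 = 0.84, which is the second
    # expected value below.)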
expected_values = [((0.0, 0.0), (0.0,)), ((0.6, 0.7), (2*0.6*0.7,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={'w0':u}) def test_dolfin_expression_compilation_of_algebraic_operators(dolfin): # Define some PyDOLFIN coefficients mesh = dolfin.UnitSquareMesh(3, 3) # Using quadratic element deliberately for accuracy V = dolfin.FunctionSpace(mesh, "CG", 2) u = dolfin.Function(V) u.interpolate(dolfin.Expression("x[0]*x[1]")) # Define ufl expression with algebraic operators uexpr = (2*u + 3.14*u**2) / 4 # Define expected output from compilation # TODO: Test with and without variables # Fully split version: expected_lines = ['double s[4];', 'Array v_w0(1);', 'w0->eval(v_w0, x);', 's[0] = 2 * v_w0[0];', 's[1] = pow(v_w0[0], 2);', 's[2] = 3.14 * s[1];', 's[3] = s[2] / 4;', 'values[0] = s[3];'] # Oneliner version (no reuse in this expression): expected_lines = ['double s[1];', 'Array v_w0(1);', 'w0->eval(v_w0, x);', 's[0] = (3.14 * pow(v_w0[0], 2) + 2 * v_w0[0]) / 4;', 'values[0] = s[0];'] # Define expected evaluation values: [(x,value), (x,value), ...] x, y = 0.6, 0.7; expected = (2*x*y + 3.14*(x*y)**2) / 4 expected_values = [((0.0, 0.0), (0.0,)), ((x, y), (expected,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={'w0':u}) def test_dolfin_expression_compilation_of_math_functions(dolfin): # Define some PyDOLFIN coefficients mesh = dolfin.UnitSquareMesh(3, 3) # Using quadratic element deliberately for accuracy V = dolfin.FunctionSpace(mesh, "CG", 2) u = dolfin.Function(V) u.interpolate(dolfin.Expression("x[0]*x[1]")) w0 = u # Define ufl expression with math functions v = abs(ufl.cos(u))/2 + 0.02 uexpr = ufl.sin(u) + ufl.tan(v) + ufl.exp(u) + ufl.ln(v) + ufl.atan(v) + ufl.acos(v) + ufl.asin(v) #print dolfin.assemble(uexpr**2*dolfin.dx, mesh=mesh) # 11.7846508409 # Define expected output from compilation ucode = 'v_w0[0]' vcode = '0.02 + fabs(cos(v_w0[0])) / 2' funcs = 'asin(%(v)s) + (acos(%(v)s) + (atan(%(v)s) + (log(%(v)s) + (exp(%(u)s) + (sin(%(u)s) + tan(%(v)s))))))' oneliner = funcs % {'u':ucode, 'v':vcode} # Oneliner version (ignoring reuse): expected_lines = ['double s[1];', 'Array v_w0(1);', 'w0->eval(v_w0, x);', 's[0] = %s;' % oneliner, 'values[0] = s[0];'] #cppcode = format_dolfin_expression(classname="DebugExpression", shape=(), eval_body=expected_lines) #print '-'*100 #print cppcode #print '-'*100 #dolfin.plot(dolfin.Expression(cppcode=cppcode, mesh=mesh)) #dolfin.interactive() # Split version (handles reuse of v, no other reuse): expected_lines = ['double s[2];', 'Array v_w0(1);', 'w0->eval(v_w0, x);', 's[0] = %s;' % (vcode,), 's[1] = %s;' % (funcs % {'u':ucode,'v':'s[0]'},), 'values[0] = s[1];'] # Define expected evaluation values: [(x,value), (x,value), ...] 
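    # (The block below recomputes the same math-function chain with the math
    # module; at the origin u = 0 and v = abs(cos(0))/2 + 0.02 = 0.52, which is
    # the v0 used for expected0.)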
import math x, y = 0.6, 0.7 u = x*y v = abs(math.cos(u))/2 + 0.02 v0 = .52 expected0 = math.tan(v0) + 1 + math.log(v0) + math.atan(v0) + math.acos(v0) + math.asin(v0) expected = math.sin(u) + math.tan(v) + math.exp(u) + math.log(v) + math.atan(v) + math.acos(v) + math.asin(v) expected_values = [((0.0, 0.0), (expected0,)), ((x, y), (expected,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={'w0':w0}) def test_dolfin_expression_compilation_of_dot_with_index_notation(dolfin): # Define some PyDOLFIN coefficients mesh = dolfin.UnitSquareMesh(3, 3) # Using quadratic element deliberately for accuracy V = dolfin.VectorFunctionSpace(mesh, "CG", 2) u = dolfin.Function(V) u.interpolate(dolfin.Expression(("x[0]", "x[1]"))) # Define ufl expression with the simplest possible index notation uexpr = ufl.dot(u, u) # Needs expand_compounds, not testing here uexpr = u[0]*u[0] + u[1]*u[1] # Works i, = ufl.indices(1) uexpr = u[i]*u[i] # Works # Define expected output from compilation # Oneliner version (no reuse in this expression): xc, yc = ('v_w0[0]', 'v_w0[1]') expected_lines = ['double s[1];', 'Array v_w0(2);', 'w0->eval(v_w0, x);', 's[0] = pow(%(x)s, 2) + pow(%(y)s, 2);' % {'x':xc, 'y':yc}, 'values[0] = s[0];'] # Define expected evaluation values: [(x,value), (x,value), ...] x, y = (0.6, 0.7) expected = (x*x+y*y) expected_values = [((0.0, 0.0), (0.0,)), ((x, y), (expected,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={'w0':u}) def test_dolfin_expression_compilation_of_index_notation(dolfin): # Define some PyDOLFIN coefficients mesh = dolfin.UnitSquareMesh(3, 3) # Using quadratic element deliberately for accuracy V = dolfin.VectorFunctionSpace(mesh, "CG", 2) u = dolfin.Function(V) u.interpolate(dolfin.Expression(("x[0]", "x[1]"))) # Define ufl expression with index notation i, j = ufl.indices(2) uexpr = ((u[0]*u + u[1]*u)[i]*u[j])*(u[i]*u[j]) # Define expected output from compilation # Split version (reusing all product terms correctly!): xc, yc = ('v_w0[0]', 'v_w0[1]') svars = {'a':'s[2]', 'b':'s[4]', 'xx':'s[0]', 'yy':'s[3]', 'xy':'s[1]', 'x':xc, 'y':yc} expected_lines = ['double s[6];', 'Array v_w0(2);', 'w0->eval(v_w0, x);', 's[0] = pow(%(x)s, 2);' % svars, 's[1] = %(x)s * %(y)s;' % svars, 's[2] = %(xx)s + %(xy)s;' % svars, 's[3] = pow(%(y)s, 2);' % svars, 's[4] = %(yy)s + %(xy)s;' % svars, 's[5] = (%(xx)s * (%(x)s * %(a)s) + %(xy)s * (%(x)s * %(b)s)) + (%(yy)s * (%(y)s * %(b)s) + %(xy)s * (%(y)s * %(a)s));' % svars, 'values[0] = s[5];'] # Making the above line correct was a nightmare because of operator sorting in ufl... Hoping it stays stable! # Define expected evaluation values: [(x,value), (x,value), ...] x, y = (0.6, 0.7) a = x*x+y*x b = x*y+y*y expected = (a*x)*(x*x) + (a*y)*(x*y) + (b*x)*(y*x) + (b*y)*(y*y) expected_values = [((0.0, 0.0), (0.0,)), ((x, y), (expected,)), ] # Execute all tests check_dolfin_expression_compilation(uexpr, expected_lines, expected_values, members={'w0':u}) ffc-2019.1.0.post0/test/uflacs/unit/000077500000000000000000000000001346026536000167665ustar00rootroot00000000000000ffc-2019.1.0.post0/test/uflacs/unit/README000066400000000000000000000000641346026536000176460ustar00rootroot00000000000000Tests covering smaller units of code within uflacs. 
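For example, from the test/uflacs/ directory:

    py.test unit
    py.test unit/test_cnodes.py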
ffc-2019.1.0.post0/test/uflacs/unit/__init__.py000066400000000000000000000000001346026536000210650ustar00rootroot00000000000000ffc-2019.1.0.post0/test/uflacs/unit/test_cnodes.py000066400000000000000000000300741346026536000216560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of CNode formatting. """ from ffc.uflacs.language.cnodes import * def test_cnode_expression_precedence(): assert str(Add(1, 2)) == "1 + 2" assert str(Add(Add(1, 2), 3)) == "1 + 2 + 3" assert str(Add(1, Add(2, 3))) == "1 + (2 + 3)" assert str(Add(Add(1, 2), Add(3, 4))) == "1 + 2 + (3 + 4)" assert str(Add(Add(1, Add(2, 3)), 4)) == "1 + (2 + 3) + 4" assert str(Div(Mul(1, 2), 3)) == "1 * 2 / 3" assert str(Mul(Div(1, 2), 3)) == "1 / 2 * 3" assert str(Mul(1, Div(2, 3))) == "1 * (2 / 3)" assert str(Mul(1, Mul(2, Mul(3, 4)))) == "1 * (2 * (3 * 4))" assert str(Mul(Mul(Mul(1, 2), 3), 4)) == "1 * 2 * 3 * 4" assert str(Mul(Mul(1, Mul(2, 3)), 4)) == "1 * (2 * 3) * 4" assert str(Mul(Add(1, 2), Add(3, 4))) == "(1 + 2) * (3 + 4)" def test_cnode_expressions(): A = Symbol("A") B = Symbol("B") i = Symbol("i") j = Symbol("j") x = Symbol("x") y = Symbol("y") # Literals assert str(LiteralInt(123)) == "123" assert str(LiteralFloat(0.0)) == "0.0" assert str(LiteralFloat(1.0)) == "1.0" assert str(LiteralFloat(12.3)) == "12.3" # 1.23e+01" # Variables # TODO: VariableAccess # Arrays assert str(ArrayAccess("A", (1,))) == "A[1]" assert str(ArrayAccess(A, (1, 2))) == "A[1][2]" assert str(A[1, 2, 3]) == "A[1][2][3]" assert str(ArrayAccess(ArrayDecl("double", "A", (2,)), 1)) == "A[1]" assert str(ArrayDecl("double", A, (2, 3))[1, 2]) == "A[1][2]" # FlattenedArray n = Symbol("n") decl = ArrayDecl("double", A, (4,)) assert str(FlattenedArray(decl, strides=(2,), offset=3)[0]) == "A[3]" # "A[3 + 2 * 0]" assert str(FlattenedArray(decl, strides=(2,))[0]) == "A[0]" # "A[2 * 0]" decl = ArrayDecl("double", A, (2, 3, 4)) flattened = FlattenedArray(decl, strides=(7, 8 * n, n - 1)) #assert str(flattened[0, n, n * 7]) == "A[7 * 0 + 8 * n * n + (n - 1) * (n * 7)]" #assert str(flattened[0, n][n * 7]) == "A[7 * 0 + 8 * n * n + (n - 1) * (n * 7)]" #assert str(flattened[0][n][n * 7]) == "A[7 * 0 + 8 * n * n + (n - 1) * (n * 7)]" assert str(flattened[0, n, n * 7]) == "A[8 * n * n + (n - 1) * (n * 7)]" assert str(flattened[0, n][n * 7]) == "A[8 * n * n + (n - 1) * (n * 7)]" assert str(flattened[0][n][n * 7]) == "A[8 * n * n + (n - 1) * (n * 7)]" # Unary operators assert str(Pos(1)) == "+1" assert str(Neg(1)) == "-1" assert str(Not(1)) == "!1" assert str(BitNot(1)) == "~1" assert str(PreIncrement(i)) == "++i" assert str(PostIncrement(i)) == "i++" assert str(PreDecrement(i)) == "--i" assert str(PostDecrement(i)) == "i--" assert str(Call(Symbol("f"), [])) == "f()" assert str(Call("f", [x, 3, Mul(y, 8.0)])) == "f(x, 3, y * 8.0)" assert str(Call("sin", Mul(B, 3.0))) == "sin(B * 3.0)" # Binary operators assert str(Add(1, 2)) == "1 + 2" assert str(Mul(1, 2)) == "1 * 2" assert str(Div(1, 2)) == "1 / 2" assert str(Sub(1, 2)) == "1 - 2" assert str(Mod(1, 2)) == "1 % 2" assert str(EQ(1, 2)) == "1 == 2" assert str(NE(1, 2)) == "1 != 2" assert str(LT(1, 2)) == "1 < 2" assert str(LE(1, 2)) == "1 <= 2" assert str(GT(1, 2)) == "1 > 2" assert str(GE(1, 2)) == "1 >= 2" assert str(And(1, 2)) == "1 && 2" assert str(Or(1, 2)) == "1 || 2" assert str(BitAnd(1, 2)) == "1 & 2" assert str(BitXor(1, 2)) == "1 ^ 2" assert str(BitOr(1, 2)) == "1 | 2" # Binary operators translated from python assert str(A + B) == "A + B" assert str(A * B) == "A * B" assert str(A / B) == "A / B" assert 
str(A - B) == "A - B" # Ternary operator assert str(Conditional(1, 2, 3)) == "1 ? 2 : 3" # N-ary "operators" simplify code generation assert str(Sum([1, 2, 3, 4])) == "1 + 2 + 3 + 4" assert str(Product([1, 2, 3, 4])) == "1 * 2 * 3 * 4" assert str(Product([Sum([1, 2, 3]), Sub(4, 5)])) == "(1 + 2 + 3) * (4 - 5)" # Custom expression assert str(Mul(VerbatimExpr("1 + std::foo(3)"), Add(5, 6))) == "(1 + std::foo(3)) * (5 + 6)" def test_cnode_assignments(): x = Symbol("x") y = Symbol("y") assert str(Assign(x, y)) == "x = y" assert str(AssignAdd(x, y)) == "x += y" assert str(AssignSub(x, y)) == "x -= y" assert str(AssignMul(x, y)) == "x *= y" assert str(AssignDiv(x, y)) == "x /= y" assert str(AssignMod(x, y)) == "x %= y" assert str(AssignLShift(x, y)) == "x <<= y" assert str(AssignRShift(x, y)) == "x >>= y" assert str(AssignAnd(x, y)) == "x &&= y" assert str(AssignOr(x, y)) == "x ||= y" assert str(AssignBitAnd(x, y)) == "x &= y" assert str(AssignBitXor(x, y)) == "x ^= y" assert str(AssignBitOr(x, y)) == "x |= y" def test_cnode_variable_declarations(): y = Symbol("y") assert str(VariableDecl("foo", "x")) == "foo x;" assert str(VariableDecl("int", "n", 1)) == "int n = 1;" assert str(VariableDecl("double", "x", Mul(y, 3.0))) == "double x = y * 3.0;" def test_1d_initializer_list(): fmt = lambda v, s: format_indented_lines(build_initializer_lists(v, s, 0, str)) assert fmt([], (0,)) == "{ }" assert fmt([1], (1,)) == "{ 1 }" assert fmt([1, 2], (2,)) == "{ 1, 2 }" assert fmt([1, 2, 3], (3,)) == "{ 1, 2, 3 }" def test_nd_initializer_list_oneitem(): fmt = lambda v, s: format_indented_lines(build_initializer_lists(v, s, 0, str)) assert fmt([1], (1,)) == "{ 1 }" assert fmt([[1]], (1, 1)) == "{ { 1 } }" assert fmt([[[1]]], (1, 1, 1)) == "{ { { 1 } } }" assert fmt([[[[1]]]], (1, 1, 1, 1)) == "{ { { { 1 } } } }" def test_nd_initializer_list_twoitems(): fmt = lambda v, s: format_indented_lines(build_initializer_lists(v, s, 0, str)) assert fmt([1, 2], (2,)) == "{ 1, 2 }" assert fmt([[1, 2]], (1, 2)) == "{ { 1, 2 } }" assert fmt([[[1, 2]]], (1, 1, 2)) == "{ { { 1, 2 } } }" assert fmt([[[[1, 2]]]], (1, 1, 1, 2)) == "{ { { { 1, 2 } } } }" # transpose it: assert fmt([[1], [2]], (2, 1)) == "{ { 1 },\n { 2 } }" assert fmt([[[1], [2]]], (1, 2, 1)) == "{ { { 1 },\n { 2 } } }" def test_nd_initializer_list_twobytwoitems(): fmt = lambda v, s: format_indented_lines(build_initializer_lists(v, s, 0, str)) assert fmt([[1, 2], [3, 4]], (2, 2)) == "{ { 1, 2 },\n { 3, 4 } }" assert fmt([[[1, 2], [3, 4]]], (1, 2, 2)) == "{ { { 1, 2 },\n { 3, 4 } } }" assert fmt([[[[1, 2], [3, 4]]]], (1, 1, 2, 2)) == "{ { { { 1, 2 },\n { 3, 4 } } } }" def test_2d_initializer_list(): assert format_indented_lines(build_initializer_lists([[1, 2, 3], [4, 5, 6]], (2, 3), 0, str)) == "{ { 1, 2, 3 },\n { 4, 5, 6 } }" values = [[[1], [2]], [[3], [4]], [[5], [6]]] reference = """\ { { { 1 }, { 2 } }, { { 3 }, { 4 } }, { { 5 }, { 6 } } }""" assert format_indented_lines(build_initializer_lists(values, (3, 2, 1), 0, str)) == reference def test_2d_numpy_initializer_list(): import numpy values = [[1, 2, 3], [4, 5, 6]] array = numpy.asarray(values) sh = (2, 3) assert array.shape == sh fmt = lambda v, s: format_indented_lines(build_initializer_lists(v, s, 0, str)) assert fmt(values, sh) == "{ { 1, 2, 3 },\n { 4, 5, 6 } }" assert fmt(array, sh) == "{ { 1, 2, 3 },\n { 4, 5, 6 } }" values = [[[1], [2]], [[3], [4]], [[5], [6]]] array = numpy.asarray(values) sh = (3, 2, 1) assert sh == array.shape reference = """\ { { { 1 }, { 2 } }, { { 3 }, { 4 } }, { { 5 }, { 6 } } 
}""" assert fmt(values, sh) == reference assert fmt(array, sh) == reference def test_cnode_array_declarations(): assert str(ArrayDecl("double", "x", 3)) == "double x[3];" assert str(ArrayDecl("double", "x", (3,))) == "double x[3];" assert str(ArrayDecl("double", "x", (3, 4))) == "double x[3][4];" assert str(ArrayDecl("double", "x", 3, [1., 2., 3.])) == "double x[3] = { 1.0, 2.0, 3.0 };" assert str(ArrayDecl("double", "x", (3,), [1., 2., 3.])) == "double x[3] = { 1.0, 2.0, 3.0 };" reference = """\ { double x[2][3] = { { 1.0, 2.0, 3.0 }, { 4.0, 5.0, 6.0 } }; }""" assert str(Scope(ArrayDecl("double", "x", (2, 3), [[1., 2., 3.], [4., 5., 6.]]))) == reference def test_cnode_comments(): assert str(Comment("hello world")) == "// hello world" assert str(Comment(" hello\n world ")) == "// hello\n// world" assert format_indented_lines(Indented(Comment(" hello\n world ").cs_format())) == " // hello\n // world" def test_cnode_statements(): assert str(Break()) == "break;" assert str(Continue()) == "continue;" assert str(Return(Add(1, 2))) == "return 1 + 2;" assert str(Case(3)) == "case 3:" assert str(Default()) == "default:" code = "for (std::vector::iterator it = v.begin(); it != v.end(); ++it)\n{ /* foobar */\n}" assert str(VerbatimStatement(code)) == code def test_cnode_loop_statements(): i, j, x, y, A = [Symbol(ch) for ch in "ijxyA"] body = [Assign("x", 3), AssignAdd(x, 5)] body_fmt = "{\n x = 3;\n x += 5;\n}" assert str(Scope(body)) == body_fmt assert str(Namespace("foo", body)) == "namespace foo\n" + body_fmt assert str(If(LT(x, 4.0), body)) == "if (x < 4.0)\n" + body_fmt assert str(ElseIf(LT(x, 4.0), body)) == "else if (x < 4.0)\n" + body_fmt assert str(Else(body)) == "else\n" + body_fmt assert str(While(LT(x, 4.0), body)) == "while (x < 4.0)\n" + body_fmt assert str(Do(LT(x, 4.0), body)) == "do\n" + body_fmt + " while (x < 4.0);" assert str(For(VariableDecl("int", "i", 0), LT(i, 4), PreIncrement(i), body)) == "for (int i = 0; i < 4; ++i)\n" + body_fmt assert str(ForRange("i", 3, 7, Comment("body"))) == "for (int i = 3; i < 7; ++i)\n{\n // body\n}" # Using assigns as both statements and expressions assert str(While(LT(AssignAdd("x", 4.0), 17.0), AssignAdd("A", y))) == "while ((x += 4.0) < 17.0)\n{\n A += y;\n}" assert str(ForRange("i", 3, 7, AssignAdd("A", i))) == "for (int i = 3; i < 7; ++i)\n A += i;" def test_cnode_loop_helpers(): i = Symbol("i") j = Symbol("j") A = Symbol("A") B = Symbol("B") C = Symbol("C") dst = A[i + 4 * j] src = 2.0 * B[j] * C[i] ranges = [(i, 0, 2), (j, 1, 3)] assert str(assign_loop(dst, src, ranges)) == """\ for (int i = 0; i < 2; ++i) for (int j = 1; j < 3; ++j) A[i + 4 * j] = 2.0 * B[j] * C[i];""" assert str(scale_loop(dst, src, ranges)) == """\ for (int i = 0; i < 2; ++i) for (int j = 1; j < 3; ++j) A[i + 4 * j] *= 2.0 * B[j] * C[i];""" assert str(accumulate_loop(dst, src, ranges)) == """\ for (int i = 0; i < 2; ++i) for (int j = 1; j < 3; ++j) A[i + 4 * j] += 2.0 * B[j] * C[i];""" def test_cnode_switch_statements(): i, j, x, y, A = [Symbol(ch) for ch in "ijxyA"] assert str(Switch("x", [])) == "switch (x)\n{\n}" assert str(Switch("x", [], default=Assign("i", 3))) == "switch (x)\n{\ndefault:\n {\n i = 3;\n }\n}" reference_switch = """switch (x) { case 1: { y = 3; } break; case 2: { y = 4; } break; default: { y = 5; } }""" cnode_switch = str(Switch("x", [(1, Assign("y", 3)), (2, Assign("y", 4)), ], default=Assign("y", 5))) assert cnode_switch == reference_switch reference_switch = """switch (x) { case 1: y = 3; case 2: y = 4; default: y = 5; }""" cnode_switch = 
str(Switch("x", [(1, Assign("y", 3)), (2, Assign("y", 4)), ], default=Assign("y", 5), autobreak=False, autoscope=False)) assert cnode_switch == reference_switch def test_conceptual_tabulate_tensor(): A = ArrayDecl("double", "A", (4, 6), values=0.0) code = StatementList([ A, ForRange("q", 0, 2, [ VariableDecl("double", "x", ArrayAccess("quadpoints", "q")), ForRange("i", 0, 4, [ ForRange("j", 0, 6, [ AssignAdd(ArrayAccess(A, ("i", "j")), Mul(ArrayAccess("FE0", ("q", "i")), ArrayAccess("FE1", ("q", "j")))) ]) ]) ]) ]) print(str(code)) ffc-2019.1.0.post0/test/uflacs/unit/test_cpp_compiler.py000066400000000000000000000143231346026536000230560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of generic C++ compilation code. """ # FIXME: These are all disabled, don't remember why, turn them on again! import numpy import ufl from ufl import as_ufl from ufl import * import ffc.uflacs # from ffc.uflacs.backends.toy.toy_compiler import compile_form def format_variable(i): return "s[%d]" % i def format_assignment(v, e): return "%s = %s;" % (v, e) def format_code_lines(expressions, expression_dependencies, want_to_cache, format_expression, format_variable, format_assignment): """FIXME: Test this. expressions - array of length n with scalar valued expressions expression_dependencies - array of length n with want_to_cache - array of length n with truth value for each expression whether we want to store it in a variable or not format_expression - a multifunction taking an expression as the first argument and code for each operand, returning code for the expression format_variable - a function taking an int and formatting a variable name with it format_assignment - a function taking a lhs variable name and an rhs expression, returning an assignment statement. 
""" def par(c): return "(%s)" % (c,) n = len(expressions) code_ref = numpy.empty(n, dtype=object) precedence = numpy.zeros(n, dtype=int) code_lines = [] j = 0 for i, e in enumerate(expressions): p = 0 # TODO: precedence of e deps = expression_dependencies[i] code_ops = [code_ref[d] for d in deps] code_ops = [par(c) if precedence[d] > p else c for c, d in zip(code_ops, deps)] code_e = format_expression(e, code_ops) if want_to_cache[i]: varname = format_variable(j) j += 1 # TODO: allocate free variable instead of just adding a new one assignment = format_assignment(varname, code_e) code_lines.append(assignment) code_ref[i] = varname precedence[i] = 9999 # TODO: highest # no () needed around variable else: code_ref[i] = code_e precedence[i] = p # TODO: deallocate variables somehow when they have been last used final_expressions = [] # TODO: Insert code expressions for given output expression indices here return code_lines, final_expressions # class CppExpressionCompilerTest(UflTestCase): def xtest_literal_zero_compilation(): uexpr = as_ufl(0) lines, finals = compile_expression_lines(uexpr) expected_lines = [] expected_finals = ['0'] assert lines == expected_lines assert finals == expected_finals def xtest_literal_int_compilation(): uexpr = as_ufl(2) lines, finals = compile_expression_lines(uexpr) expected_lines = [] expected_finals = ['2'] assert lines == expected_lines assert finals == expected_finals def xtest_literal_float_compilation(): uexpr = as_ufl(2.56) lines, finals = compile_expression_lines(uexpr) expected_lines = [] expected_finals = ['2.56'] assert lines == expected_lines assert finals == expected_finals def xtest_geometry_h_compilation(): h = ufl.Circumradius(ufl.triangle) uexpr = h lines, finals = compile_expression_lines(uexpr) expected_lines = [] expected_finals = ['h'] assert lines == expected_lines assert finals == expected_finals def xtest_terminal_sum_compilation(): h = ufl.Circumradius(ufl.triangle) uexpr = h + 2.56 lines, finals = compile_expression_lines(uexpr) expected_lines = [] expected_finals = ['h + 2.56'] assert lines == expected_lines assert finals == expected_finals # class CppCompilerTest(UflTestCase): import pytest @pytest.fixture def u(): return Coefficient(FiniteElement("U", triangle, 1)) @pytest.fixture def v(): return Coefficient(VectorElement("U", triangle, 1)) @pytest.fixture def w(): return Coefficient(TensorElement("U", triangle, 1)) def test_fixtures(u, v, w): "Just checking that fixtures work!" 
assert u == Coefficient(FiniteElement("U", triangle, 1), count=u.count()) assert v == Coefficient(VectorElement("U", triangle, 1), count=v.count()) assert w == Coefficient(TensorElement("U", triangle, 1), count=w.count()) def xtest_cpp2_compile_scalar_literals(): M = as_ufl(0) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = as_ufl(3) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = as_ufl(1.03) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected def xtest_cpp2_compile_geometry(): M = CellVolume(triangle) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = SpatialCoordinate(triangle)[0] * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected def xtest_cpp2_compile_coefficients(u, v, w): M = u * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = v[0] * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = w[1, 0] * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected def xtest_cpp2_compile_sums(u, v, w): M = (2 + u) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = (v[1] + w[1, 1] + 3 + u) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected def xtest_cpp2_compile_products(u, v, w): M = (2 * u) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected M = (v[1] * w[1, 1] * 3 * u) * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected def xtest_cpp_compilation(): M = u**2 / 2 * dx code = compile_form(M, 'unittest') print('\n', code) expected = 'TODO' assert code == expected ffc-2019.1.0.post0/test/uflacs/unit/test_crs.py000066400000000000000000000017741346026536000211770ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of CRSArray data structure. """ from ffc.uflacs.analysis.crsarray import CRSArray def test_crs_can_have_zero_element_rows(): rcap, ecap = 3, 1 A = CRSArray(rcap, ecap, int) for i in range(rcap): row = [] A.push_row(row) assert A.num_elements == 0 for i in range(rcap): row = [] assert list(A[i]) == row def test_crs_can_have_one_element_rows(): rcap, ecap = 3, 3 A = CRSArray(rcap, ecap, int) for i in range(rcap): row = [i] A.push_row(row) assert A.num_elements == rcap for i in range(rcap): row = [i] assert list(A[i]) == row def test_crs_can_have_n_element_rows(): rcap, ecap = 5, 25 A = CRSArray(rcap, ecap, int) k = 0 for i in range(rcap): row = [i + 2, i + 1] + [i] * i k += len(row) A.push_row(row) assert A.num_elements == k for i in range(rcap): row = [i + 2, i + 1] + [i] * i assert list(A[i]) == row ffc-2019.1.0.post0/test/uflacs/unit/test_factorization.py000066400000000000000000000101541346026536000232540ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of algorithm for factorization of integrand w.r.t. Argument terms. """ from ufl import * from ffc.uflacs.analysis.factorization import compute_argument_factorization # TODO: Restructure these tests using py.test fixtures and parameterization? 
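# Reading guide for the expected data below (inferred from the comments in the
# tests, not from the compiler itself): AV lists the modified Arguments, FV the
# argument-free scalar factors, and each IM entry maps a tuple of AV indices to
# the FV index of its coefficient factor, e.g. f*u*v factorizes to
# {(0, 1): index of f} with AV == [v, u].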
def compare_compute_argument_factorization(SV, SV_deps, expected_AV, expected_FV, expected_IM): SV_targets = [len(SV) - 1] rank = max(len(k) for k in expected_IM.keys()) argument_factorizations, modified_arguments, FV, FV_deps, FV_targets = \ compute_argument_factorization(SV, SV_deps, SV_targets, rank) argument_factorization, = argument_factorizations AV = modified_arguments IM = argument_factorization assert AV == expected_AV if 0: for n in range(1, min(len(FV), len(expected_FV))): assert FV[:n] == expected_FV[:n] assert FV == expected_FV assert IM == expected_IM def test_compute_argument_factorization(): V = FiniteElement("CG", triangle, 1) u = TrialFunction(V) v = TestFunction(V) a, b, c, d, e, f, g = [Coefficient(V, count=k) for k in range(7)] zero = as_ufl(0.0) one = as_ufl(1.0) two = as_ufl(2) # Test workaround for hack in factorization: FVpre = [zero, one, two] offset = len(FVpre) # Test basic non-argument terminal SV = [f] SV_deps = [()] AV = [] FV = FVpre + [f] IM = {(): 0 + offset} compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test basic non-argument sum SV = [f, g, f + g] SV_deps = [(), (), (0, 1)] AV = [] FV = FVpre + [f, g, f + g] IM = {(): 2 + offset} compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test basic non-argument product SV = [f, g, f * g] SV_deps = [(), (), (0, 1)] AV = [] FV = FVpre + [f, g, f * g] IM = {(): 2 + offset} compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test basic single-argument-only expression SV = [v] SV_deps = [()] AV = [v] FV = FVpre + [] IM = {(0,): 1} # v == AV[0] * FV[1] compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test basic coefficient-argument product SV = [f, v, f * v] SV_deps = [(), (), (0, 1)] AV = [v] FV = FVpre + [f] IM = {(0,): offset} # f*v == AV[0] * FV[1] compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test basic argument product SV = [u, v, u * v] SV_deps = [(), (), (0, 1)] AV = [v, u] # Test function < trial function FV = FVpre + [] IM = {(0, 1): 1} # v*u == (AV[0] * AV[1]) * FV[1] compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test coefficient-argument products SV = [u, f, v, (f * v), u * (f * v)] SV_deps = [(), (), (), (1, 2), (0, 3)] AV = [v, u] FV = FVpre + [f] IM = {(0, 1): 0 + offset} # f*(u*v) == (AV[0] * AV[1]) * FV[2] compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) # Test more complex situation SV = [u, u.dx(0), v, # 0..2 a, b, c, d, e, # 3..7 a * u, b * u.dx(0), # 8..9 c * v, d * v, # 10..11 a * u + b * u.dx(0), # 12 c * v + d * v, # 13 e * (a * u + b * u.dx(0)), # 14 (e * (a * u + b * u.dx(0))) * (c * v + d * v), # 15 ] SV_deps = [(), (), (), (), (), (), (), (), (0, 3), (1, 4), (2, 5), (2, 6), (8, 9), (10, 11), (7, 12), (13, 14), ] AV = [v, u, u.dx(0)] FV = FVpre + [a, b, c, d, e, # 0..5 c + d, # 6, introduced by SV[13] e * a, # 7, introduced by SV[14] e * b, # 8, introduced by SV[14] (e * a) * (c + d), # 9 (e * b) * (c + d), # 10 ] IM = {(0, 1): 8 + offset, # (a*e)*(c+d)*(u*v) == (AV[0] * AV[2]) * FV[13] (0, 2): 9 + offset} # (b*e)*(c+d)*(u.dx(0)*v) == (AV[1] * AV[2]) * FV[12] compare_compute_argument_factorization(SV, SV_deps, AV, FV, IM) ffc-2019.1.0.post0/test/uflacs/unit/test_format_code_structure.py000066400000000000000000000223631346026536000250070ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of generic code formatting utilities, which focus on the overall structure of the code, e.g. 
indentation, curly brace blocks, control flow structures like loops, and class and function definitions. Some of this is C++ specific, some is more generic. """ import pytest from ffc.uflacs.language.cnodes import * def test_format_basics(): # Reproduce a string assert format_indented_lines("string") == "string" # Basic strings with indentation assert format_indented_lines("string", 1) == " string" assert format_indented_lines("string", 2) == " string" # Multiline strings with indentation assert format_indented_lines("fee\nfie\nfoe", level=1) == " fee\n fie\n foe" # One item lists are the same as strings assert format_indented_lines(["hello"]) == "hello" assert format_indented_lines(["hello"], 1) == " hello" assert format_indented_lines(["hello"], 2) == " hello" # Lists are joined with newlines assert format_indented_lines(["hello", "world"]) == "hello\nworld" assert format_indented_lines(["hello", "world"], 1) == " hello\n world" assert format_indented_lines(["hello", "world"], 2) == " hello\n world" # Tuples are joined with newlines assert format_indented_lines(("hello", "world")) == "hello\nworld" assert format_indented_lines(("hello", "world"), 1) == " hello\n world" assert format_indented_lines(("hello", "world"), 2) == " hello\n world" code = format_indented_lines(("hello", "world"), 2) assert code == " hello\n world" def test_format_indented_lines(): # Strings and lists can be put in Indented containers assert format_indented_lines(Indented("hei")) == " hei" code = format_indented_lines(Indented(["hei", Indented("verden")])) assert code == " hei\n verden" code = format_indented_lines(["{", Indented("fee\nfie\nfoe"), "}"]) assert code == "{\n fee\n fie\n foe\n}" def xtest_format_blocks(): # A Scope is an indented body with brackets before and after code = format_code(Scope("fee\nfie\nfoe")) assert code == "{\n fee\n fie\n foe\n}" # A Namespace is a 'namespace foo' line before a block code = format_code(Namespace("bar", "fee\nfie\nfoe")) assert code == "namespace bar\n{\n fee\n fie\n foe\n}" # Making a for loop code = format_code(["for (iq...)", Scope("foo(iq);")]) assert code == "for (iq...)\n{\n foo(iq);\n}" # Making a do loop code = format_code(["iq = 0;", "do", (Scope("foo(iq);"), " while (iq < nq);")]) assert code == "iq = 0;\ndo\n{\n foo(iq);\n} while (iq < nq);" def xtest_format_class(): # Making a class declaration assert format_code(Class('Car')) == 'class Car\n{\n};' code = format_code(Class('Car', superclass='Vehicle')) assert code == 'class Car: public Vehicle\n{\n};' code = format_code(Class('Car', public_body='void eval()\n{\n}')) assert code == 'class Car\n{\npublic:\n void eval()\n {\n }\n};' code = format_code(Class('Car', protected_body='void eval()\n{\n}')) assert code == 'class Car\n{\nprotected:\n void eval()\n {\n }\n};' code = format_code(Class('Car', private_body='void eval()\n{\n}')) assert code == 'class Car\n{\nprivate:\n void eval()\n {\n }\n};' def xtest_format_template_argument_list(): def t(args, mlcode, slcode): code = format_code(TemplateArgumentList(args, False)) assert code == slcode code = format_code(TemplateArgumentList(args, True)) assert code == mlcode t(('A',), '<\n A\n>', '') t(('A', 'B'), '<\n A,\n B\n>', '') def xtest_format_templated_type(): code = format_code(Type('Foo')) assert code == 'Foo' code = format_code(Type('Foo', ('int', '123'))) assert code == 'Foo' code = format_code(Type('Foo', ('int', Type('Bar', ('123', 'float'))))) assert code == 'Foo >' def xtest_format_typedef(): assert format_code(TypeDef('int', 'myint')) == 'typedef int 
myint;' code = format_code(TypeDef(Type('Foo', ('int', Type('Bar', ('123', 'float')))), 'Thing')) assert code == 'typedef Foo > Thing;' def xtest_format_template_class(): expected = "template\nclass MyClass\n{\npublic:\n void hello(int world) {}\n};" code = [('template', TemplateArgumentList(('typename T', 'typename R'), False)), Class('MyClass', public_body='void hello(int world) {}')] assert format_code(code) == expected code = Class('MyClass', public_body='void hello(int world) {}', template_arguments=('typename T', 'typename R')) assert format_code(code) == expected def test_format_variable_decl(): code = VariableDecl("double", "foo") expected = "double foo;" assert str(code) == expected def test_literal_cexpr_value_conversion(): assert bool(LiteralBool(True)) is True assert bool(LiteralBool(False)) is False assert int(LiteralInt(2)) == 2 assert float(LiteralFloat(2.0)) == 2.0 # assert complex(LiteralFloat(2.0+4.0j)) == 2.0+4.0j def test_format_array_decl(): expected = "double foo[3];" code = ArrayDecl("double", "foo", 3) assert str(code) == expected code = ArrayDecl("double", "foo", (3,)) assert str(code) == expected decl = code expected = "foo[1]" code = ArrayAccess(decl, 1) assert str(code) == expected code = ArrayAccess(decl, (1,)) assert str(code) == expected expected = "foo[0]" code = ArrayAccess(decl, 0) assert str(code) == expected with pytest.raises(ValueError): ArrayAccess(decl, 3) code = ArrayDecl("double", "foo", (3, 4)) expected = "double foo[3][4];" assert str(code) == expected decl = code expected = "foo[0][1]" code = ArrayAccess(decl, (0, 1)) assert str(code) == expected expected = "foo[2][3]" code = ArrayAccess(decl, (2, 3)) assert str(code) == expected with pytest.raises(ValueError): ArrayAccess(decl, 0) with pytest.raises(ValueError): ArrayAccess(decl, (0, 4)) with pytest.raises(ValueError): ArrayAccess(decl, (3, 0)) with pytest.raises(ValueError): ArrayAccess(decl, (3, -1)) def test_format_array_def(): expected = "double foo[3] = { 1.0, 2.0, 3.0 };" code = ArrayDecl("double", "foo", 3, [1.0, 2.0, 3.0]) assert str(code) == expected expected = """double foo[2][3] = { { 1.0, 2.0, 3.0 }, { 6.0, 5.0, 4.0 } };""" code = ArrayDecl("double", "foo", (2, 3), [[1.0, 2.0, 3.0], [6.0, 5.0, 4.0]]) assert str(code) == expected expected = """double foo[2][2][3] = { { { 1.0, 2.0, 3.0 }, { 6.0, 5.0, 4.0 } }, { { 1.0, 2.0, 3.0 }, { 6.0, 5.0, 4.0 } } };""" code = ArrayDecl("double", "foo", (2, 2, 3), [[[1.0, 2.0, 3.0], [6.0, 5.0, 4.0]], [[1.0, 2.0, 3.0], [6.0, 5.0, 4.0]]]) assert str(code) == expected def test_format_array_def_zero(): expected = "double foo[3] = {};" code = ArrayDecl("double", "foo", 3, [0.0, 0.0, 0.0]) assert str(code) == expected code = ArrayDecl("double", "foo", (3,), [0.0, 0.0, 0.0]) assert str(code) == expected expected = "double foo[2][3] = {};" code = ArrayDecl("double", "foo", (2, 3), [[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]) assert str(code) == expected def xtest_class_with_arrays(): adecl = ArrayDecl('double', 'phi', (3, 5)) aacc = ArrayAccess(adecl, (2, 1)) expected = "phi[2][1]" assert str(aacc) == expected decls = [VariableDecl('double', 'foo'), VariableDecl('int', 'bar'), adecl, ] classcode = Class('Car', public_body=decls) expected_decls = [str(decl) for decl in decls] expected = 'class Car\n{\npublic:\n %s\n};' % '\n '.join(expected_decls) actual = str(classcode) assert actual == expected def test_class_array_access(): vname = 'phi' shape = (3, 4) component = ('i0', 2) decl = ArrayDecl('double', vname, shape) dcode = str(decl) acode = str(ArrayAccess(decl, 
component)) assert dcode == 'double phi[3][4];' assert acode == 'phi[i0][2]' def test_while_loop(): code = While(VerbatimExpr("--k < 3"), []) actual = str(code) expected = "while (--k < 3)\n{\n}" assert actual == expected code = While(VerbatimExpr("--k < 3"), body=["ting;", "tang;"]) actual = str(code) expected = "while (--k < 3)\n{\n ting;\n tang;\n}" assert actual == expected def test_for_loop(): code = For("int i = 0", "i < 3", "++i", []) actual = str(code) expected = "for (int i = 0; i < 3; ++i)\n{\n}" assert actual == expected code = For("int i = 0", "i < 3", "++i", body=["ting;", "tang;"]) actual = str(code) expected = "for (int i = 0; i < 3; ++i)\n{\n ting;\n tang;\n}" assert actual == expected def test_for_range(): code = ForRange("i", 0, 3, []) actual = str(code) expected = "for (int i = 0; i < 3; ++i)\n{\n}" assert actual == expected code = ForRange("i", 0, 3, body=["ting;", "tang;"]) actual = str(code) expected = "for (int i = 0; i < 3; ++i)\n{\n ting;\n tang;\n}" assert actual == expected ffc-2019.1.0.post0/test/uflacs/unit/test_graph_algorithm.py000066400000000000000000000246611346026536000235570ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of graph representation of expressions. """ from ufl import * from ufl import product from ufl.permutation import compute_indices from ffc.uflacs.analysis.graph import build_graph from ffc.uflacs.analysis.graph_rebuild import rebuild_expression_from_graph # from ffc.uflacs.analysis.graph_rebuild import rebuild_scalar_e2i # from ffc.uflacs.analysis.dependencies import (compute_dependencies, # mark_active, # mark_image) # from ffc.uflacs.analysis.graph_ssa import (mark_partitions, # compute_dependency_count, # invert_dependencies, # default_cache_score_policy, # compute_cache_scores, # allocate_registers) from operator import eq as equal def test_graph_algorithm_allocates_correct_number_of_symbols(): U = FiniteElement("CG", triangle, 1) V = VectorElement("CG", triangle, 1) W = TensorElement("CG", triangle, 1) u = Coefficient(U) v = Coefficient(V) w = Coefficient(W) # Testing some scalar expressions expr = u G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 1 assert G.total_unique_symbols == 1 expr = u**2 G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 3 assert G.total_unique_symbols == 3 expr = u**2 / 2 G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 4 assert G.total_unique_symbols == 4 expr = dot(v, v) / 2 G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 5 assert G.total_unique_symbols == 5 # Testing Indexed expr = v[i] * v[i] G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 2 + 2 + 1 assert G.total_unique_symbols == 2 + 2 + 1 # Reusing symbols for indexed with different ordering # Note that two index sums are created, giving 2+1 symbols expr = w[i, j] * w[j, i] G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 4 + 4 + 4 + 4 + 2 + 1 assert G.total_unique_symbols == 4 + 4 + 2 + 1 # Testing ComponentTensor expr = dot(as_vector(2 * v[i], i), v) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 1 + 2 + 2 + 2 + 1 assert G.total_unique_symbols == 2 + 1 + 2 + 1 expr = dot(v + 2 * v, v) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 1 + 2 + 2 + 2 + 2 + 1 assert G.total_unique_symbols == 2 + 1 + 2 + 2 + 1 expr = outer(v, v)[i, j] * outer(v, v)[j, i] G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 21 # 2+4+4+4 + 4+2+1 assert G.total_unique_symbols == 13 
# 2+4+4 + 2+1 # Testing tensor/scalar expr = as_ufl(2) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 1 assert G.total_unique_symbols == 1 expr = v G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 assert G.total_unique_symbols == 2 expr = outer(v, v) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 4 assert G.total_unique_symbols == 2 + 4 expr = as_tensor(v[i] * v[j], (i, j)) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 2 + 2 + 4 + 4 assert G.total_unique_symbols == 2 + 4 expr = as_tensor(v[i] * v[j] / 2, (i, j)) G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 2 + 2 + 4 + 4 + 4 + 1 assert G.total_unique_symbols == 2 + 1 + 4 + 4 expr = outer(v, v) / 2 # converted to the tensor notation above G = build_graph([expr], DEBUG=0) assert G.V_symbols.num_elements == 2 + 2 + 2 + 4 + 4 + 4 + 1 assert G.total_unique_symbols == 2 + 1 + 4 + 4 def test_rebuild_expression_from_graph_basic_scalar_expressions(): U = FiniteElement("CG", triangle, 1) V = VectorElement("CG", triangle, 1) W = TensorElement("CG", triangle, 1) u = Coefficient(U) v = Coefficient(V) w = Coefficient(W) # ... Literals are reproduced literals = [as_ufl(0), as_ufl(1), as_ufl(3.14)] for v1 in literals: G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert v1 == v2 v1 = u G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert v1 == v2 # ... Simple operators are reproduced for v1 in [2 + u, u + u, u * u, u * 2, u**2, u**u, u / u, sin(u)]: G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert v1 == v2 def test_rebuild_expression_from_graph_on_products_with_indices(): U = FiniteElement("CG", triangle, 1) V = VectorElement("CG", triangle, 1) W = TensorElement("CG", triangle, 1) u = Coefficient(U) v = Coefficient(V) w = Coefficient(W) i, j = indices(2) # Test fixed index fixed = [u * v[0], v[1] * v[0], w[0, 1] * w[0, 0]] for v1 in fixed: G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert v1 == v2 # Test simple repeated index v1 = v[i] * v[i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = v[0] * v[0] + v[1] * v[1] assert ve == v2 # Test double repeated index v1 = w[i, j] * w[j, i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = (w[1, 1] * w[1, 1] + w[1, 0] * w[0, 1]) + (w[0, 1] * w[1, 0] + w[0, 0] * w[0, 0]) if 0: print() print(v1) print(ve) print(v2) print() assert ve == v2 # Test mix of repeated and non-repeated index v1 = (w[i, j] * w[j, 0] + v[i]) * v[i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = ((w[0, 0] * w[0, 0] + w[0, 1] * w[1, 0] + v[0]) * v[0] + (w[1, 0] * w[0, 0] + w[1, 1] * w[1, 0] + v[1]) * v[1]) assert ve == v2 def test_rebuild_expression_from_graph_basic_tensor_expressions(): U = FiniteElement("CG", triangle, 1) V = VectorElement("CG", triangle, 1) W = TensorElement("CG", triangle, 1) u = Coefficient(U) v = Coefficient(V) vb = Coefficient(V) w = Coefficient(W) wb = Coefficient(W) # Single vector v1 = v G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert as_vector((v1[0], v1[1])) == v2 # Single tensor v1 = w G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) assert as_vector((v1[0, 0], v1[0, 1], v1[1, 0], v1[1, 1])) == v2 # Vector sum v1 = v + v G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((v[0] + v[0], v[1] + v[1])) assert ve == v2 v1 = v + vb G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((v[0] + vb[0], v[1] + vb[1])) 
assert ve == v2 # Tensor sum v1 = w + w G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((w[0, 0] + w[0, 0], w[0, 1] + w[0, 1], w[1, 0] + w[1, 0], w[1, 1] + w[1, 1])) assert ve == v2 v1 = w + wb G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((w[0, 0] + wb[0, 0], w[0, 1] + wb[0, 1], w[1, 0] + wb[1, 0], w[1, 1] + wb[1, 1])) assert ve == v2 # Scalar-vector product v1 = u * v G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((u * v[0], u * v[1])) assert ve == v2 # Scalar-tensor product v1 = u * w G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((u * w[0, 0], u * w[0, 1], u * w[1, 0], u * w[1, 1])) assert ve == v2 # Vector-vector index based inner product v1 = v[i] * v[i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = v[0] * v[0] + v[1] * v[1] assert ve == v2 v1 = v[i] * vb[i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = v[0] * vb[0] + v[1] * vb[1] assert ve == v2 # Tensor-tensor index based transposed inner product v1 = w[i, j] * w[j, i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = (w[0, 0] * w[0, 0] + w[0, 1] * w[1, 0]) \ + (w[1, 0] * w[0, 1] + w[1, 1] * w[1, 1]) assert ve == v2 v1 = w[i, j] * wb[j, i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = (w[0, 0] * wb[0, 0] + w[1, 0] * wb[0, 1]) \ + (w[0, 1] * wb[1, 0] + w[1, 1] * wb[1, 1]) assert ve == v2 # Vector/scalar division v1 = v / u G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((v[0] / u, v[1] / u)) assert ve == v2 # Tensor/scalar division v1 = w / u G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) ve = as_vector((w[0, 0] / u, w[0, 1] / u, w[1, 0] / u, w[1, 1] / u)) assert ve == v2 # FIXME: Write more tests to discover bugs in ReconstructScalarSubexpressions.element_wise* # assert False # Compounds not implemented, not expecting to do this anytime soon def xtest_rebuild_expression_from_graph_on_compounds(): U = FiniteElement("CG", triangle, 1) V = VectorElement("CG", triangle, 1) W = TensorElement("CG", triangle, 1) u = Coefficient(U) v = Coefficient(V) w = Coefficient(W) v1 = dot(v, v) G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) v1 = outer(v, v) G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) v1 = outer(v, v)[i, j] * outer(v, v)[j, i] G = build_graph([v1]) v2 = rebuild_expression_from_graph(G) # print v1 # print v2 # FIXME: Assert something def test_flattening_of_tensor_valued_expression_symbols(): # from ffc.uflacs.analysis.graph import foo def flatten_expression_symbols(v, vsyms, opsyms): sh = v.ufl_shape if sh == (): assert len(vsyms) == 1 if not opsyms: res = [(v, vsyms[0], ())] else: assert len(opsyms[0]) == 1 res = [(v, vsyms[0], tuple(syms[0] for syms in opsyms))] else: res = [] if isinstance(v, ufl.classes.Sum): for i in range(len(vsyms)): u = None # sum of component i for syms in opsyms res += (u, vsyms[i], tuple(syms[i] for syms in opsyms)) return res v = as_ufl(1) vsyms = (0,) opsyms = () res = flatten_expression_symbols(v, vsyms, opsyms) assert res == [(v, 0, ())] ffc-2019.1.0.post0/test/uflacs/unit/test_snippets.py000066400000000000000000000051721346026536000222510ustar00rootroot00000000000000# -*- coding: utf-8 -*- from ffc.uflacs.language.format_value import format_float from ffc.uflacs.language.format_lines import iter_indented_lines, Indented, format_indented_lines def test_format_float(): # Ints handled assert format_float(0, 3) == "0.0" assert format_float(1, 3) == "1.0" 
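    # Note: the second argument is a precision in significant digits (see the
    # precision asserts further down), and integer inputs still get a trailing ".0".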
assert format_float(12, 15) == "12.0" # Zeros simple assert format_float(0.0, 0) == "0.0" assert format_float(0.0, 3) == "0.0" assert format_float(0.0, 15) == "0.0" # Ones simple assert format_float(1.0, 0) == "1.0" assert format_float(1.0, 3) == "1.0" assert format_float(1.0, 15) == "1.0" # Small ints simple assert format_float(12., 15) == "12.0" assert format_float(12., 15) == "12.0" # Precision truncates assert format_float(1.2, 3) == "1.2" assert format_float(1.23, 3) == "1.23" assert format_float(1.2345, 3) == "1.23" assert format_float(1.0 + 1e-5, 7) == "1.00001" assert format_float(1.0 + 1e-5, 6) == "1.00001" assert format_float(1.0 + 1e-5, 5) == "1.0" # Cleanly formatted exponential numbers # without superfluous +s and 0s assert format_float(1234567.0, 3) == "1.23e6" assert format_float(1.23e6, 3) == "1.23e6" assert format_float(1.23e-6, 3) == "1.23e-6" assert format_float(-1.23e-6, 3) == "-1.23e-6" assert format_float(-1.23e6, 3) == "-1.23e6" def test_iter_indented_lines(): assert list(iter_indented_lines("single line")) == ["single line"] assert list(iter_indented_lines(["word"])) == ["word"] assert list(iter_indented_lines(["line one", "line two"])) == ["line one", "line two"] assert list(iter_indented_lines("line one\nline two")) == ["line one", "line two"] assert list(iter_indented_lines(["line one", Indented("line two")])) == ["line one", " line two"] assert list(iter_indented_lines([Indented("line one"), "line two"])) == [" line one", "line two"] assert list(iter_indented_lines([[Indented("line one")], "line two"])) == [" line one", "line two"] assert list(iter_indented_lines([Indented("line one"), ["line two"]])) == [" line one", "line two"] assert list(iter_indented_lines([[Indented("line one"), "line two"]])) == [" line one", "line two"] def test_format_indented_lines_example(): forheader = "for (int i=begin; i!=end; ++i)" code = ["{", Indented([forheader, "{", Indented("double x = y[i];"), "}"]), "}"] reference_code = """{ for (int i=begin; i!=end; ++i) { double x = y[i]; } }""" assert format_indented_lines(code) == reference_code ffc-2019.1.0.post0/test/uflacs/unit/test_ssa_manipulations.py000066400000000000000000000073461346026536000241420ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of manipulations of the ssa form of expressions. """ from ufl import * from ufl import product from ufl.permutation import compute_indices from ffc.uflacs.analysis.graph import build_graph # Tests need this but it has been removed. Rewrite tests! 
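
# Until the removed helpers are reinstated, here is a minimal smoke test (only
# a sketch) that relies solely on build_graph, mirroring the usage in
# test_graph_algorithm.py; extend it once the SSA tests are rewritten.
def test_build_graph_smoke():
    V = VectorElement("CG", triangle, 1)
    v = Coefficient(V)
    i, = indices(1)
    G = build_graph([v[i] * v[i]])
    assert G.total_unique_symbols > 0
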
# from ffc.uflacs.analysis.graph_rebuild import rebuild_scalar_e2i # from ffc.uflacs.analysis.graph_rebuild import rebuild_expression_from_graph # from ffc.uflacs.analysis.indexing import (map_indexed_arg_components, # map_indexed_arg_components2, # map_component_tensor_arg_components) # from ffc.uflacs.analysis.graph_symbols import (map_list_tensor_symbols, # map_transposed_symbols, get_node_symbols) # from ffc.uflacs.analysis.dependencies import (compute_dependencies, # mark_active, # mark_image) # from ffc.uflacs.analysis.graph_ssa import (mark_partitions, # compute_dependency_count, # invert_dependencies, # default_cache_score_policy, # compute_cache_scores, # allocate_registers) def xtest_dependency_construction(): cell = triangle d = cell.geometric_dimension() x = SpatialCoordinate(cell) n = FacetNormal(cell) U = FiniteElement("CG", cell, 1) V = VectorElement("CG", cell, 1) W = TensorElement("CG", cell, 1) u = Coefficient(U) v = Coefficient(V) w = Coefficient(W) i, = indices(1) expressions = [as_ufl(1), as_ufl(3.14), as_ufl(0), x[0], n[0], u, v[0], v[1], w[0, 1], w[0, 0] + w[1, 1], (2 * v + w[1, :])[i] * v[i], ] for expr in expressions: G = build_graph([expr]) ne2i, NV, W, terminals, nvs = rebuild_scalar_e2i(G) e2i = ne2i V = NV dependencies = compute_dependencies(e2i, V) max_symbol = len(V) targets = (max_symbol - 1,) active, num_active = mark_active(max_symbol, dependencies, targets) partitions = mark_partitions(V, active, dependencies, {}) depcount = compute_dependency_count(dependencies) inverse_dependencies = invert_dependencies(dependencies, depcount) scores = compute_cache_scores(V, active, dependencies, inverse_dependencies, partitions, cache_score_policy=default_cache_score_policy) max_registers = 4 score_threshold = 1 allocations = allocate_registers(active, partitions, targets, scores, max_registers, score_threshold) if 0: print() print("====== expr \n", expr) print("====== V \n", '\n'.join(map(str, V))) print("====== dependencies \n", dependencies) print("====== active \n", active) print("====== partitions \n", partitions) print("====== depcount \n", depcount) print("====== inverse_dependencies\n", inverse_dependencies) print("====== scores \n", scores) print("====== allocations \n", allocations) print() ffc-2019.1.0.post0/test/uflacs/unit/test_table_utils.py000066400000000000000000000230041346026536000227050ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of table manipulation utilities. """ from ufl import triangle from ffc.uflacs.elementtables import equal_tables, strip_table_zeros, build_unique_tables, get_ffc_table_values import numpy as np default_tolerance = 1e-14 def old_strip_table_zeros(table, eps): "Quick fix translating new version to old behaviour to keep tests without modification." 
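    # The new strip_table_zeros returns (dofrange, dofmap, stripped_table);
    # unpack dofrange into the old (begin, end, table) triple expected by the tests.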
dofrange, dofmap, stripped_table = strip_table_zeros(table, False, eps) begin, end = dofrange return begin, end, stripped_table def test_equal_tables(): a = np.zeros((2, 3)) b = np.zeros((2, 3)) assert equal_tables(a, b, default_tolerance) a = np.ones((2, 3)) b = np.zeros((2, 3)) assert not equal_tables(a, b, default_tolerance) a = np.ones((2, 3)) b = np.ones((3, 2)) assert not equal_tables(a, b, default_tolerance) a = np.ones((2, 3))*1.1 b = np.ones((2, 3)) assert not equal_tables(a, b, default_tolerance) # Checking for all ones within given tolerance: eps = 1e-4 a = np.ones((2, 3))*(1.0+eps) assert equal_tables(a, np.ones(a.shape), 10*eps) def test_strip_table_zeros(): # Can strip entire table: a = np.zeros((2, 3)) e = np.zeros((2, 0)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin == end # This is a way to check for all-zero table assert begin == 0 assert end == 0 assert equal_tables(b, e, default_tolerance) # Can keep entire nonzero table: a = np.ones((2, 3)) e = np.ones((2, 3)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 0 assert end == a.shape[-1] assert equal_tables(b, e, default_tolerance) # Can strip one left side column: a = np.ones((2, 3)) a[:, 0] = 0.0 e = np.ones((2, 2)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 1 assert end == a.shape[-1] assert equal_tables(b, e, default_tolerance) # Can strip one right side column: a = np.ones((2, 3)) a[:, 2] = 0.0 e = np.ones((2, 2)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 0 assert end == a.shape[-1]-1 assert equal_tables(b, e, default_tolerance) # Can strip two columns on each side: a = np.ones((2, 5)) a[:, 0] = 0.0 a[:, 1] = 0.0 a[:, 3] = 0.0 a[:, 4] = 0.0 e = np.ones((2, 1)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 2 assert end == a.shape[-1]-2 assert equal_tables(b, e, default_tolerance) # Can strip two columns on each side of rank 1 table: a = np.ones((5,)) a[..., 0] = 0.0 a[..., 1] = 0.0 a[..., 3] = 0.0 a[..., 4] = 0.0 e = np.ones((1,)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 2 assert end == a.shape[-1]-2 assert equal_tables(b, e, default_tolerance) # Can strip two columns on each side of rank 3 table: a = np.ones((3, 2, 5)) a[..., 0] = 0.0 a[..., 1] = 0.0 a[..., 3] = 0.0 a[..., 4] = 0.0 e = np.ones((3, 2, 1)) begin, end, b = old_strip_table_zeros(a, default_tolerance) assert begin != end assert begin == 2 assert end == a.shape[-1]-2 assert equal_tables(b, e, default_tolerance) # Can strip columns on each side and compress the one in the middle: a = np.ones((2, 5)) a[:, 0] = 0.0 a[:, 1] = 1.0 a[:, 2] = 0.0 a[:, 3] = 3.0 a[:, 4] = 0.0 e = np.ones((2, 2)) e[:, 0] = 1.0 e[:, 1] = 3.0 dofrange, dofmap, b = strip_table_zeros(a, True, default_tolerance) begin, end = dofrange assert begin != end assert begin == 1 assert end == a.shape[-1] - 1 assert list(dofmap) == [1, 3] assert equal_tables(b, e, default_tolerance) def test_unique_tables_some_equal(): tables = [ np.zeros((2,)), np.ones((2,)), np.zeros((3,)), np.ones((2,)), np.ones((2,))*2, np.ones((2,))*2, ] unique, mapping = build_unique_tables(tables, default_tolerance) expected_unique = [ np.zeros((2,)), np.ones((2,)), np.zeros((3,)), # np.ones((2,)), np.ones((2,))*2, # np.ones((2,))*2, ] expected_mapping = dict((i, v) for i, v in enumerate([0, 1, 2, 1, 3, 3])) assert mapping == 
expected_mapping assert len(set(mapping.values())) == len(unique) for i, t in enumerate(tables): assert equal_tables(t, unique[mapping[i]], default_tolerance) def test_unique_tables_all_equal(): tables = [np.ones((3, 5))*2.0]*6 unique, mapping = build_unique_tables(tables, default_tolerance) expected_unique = [tables[0]] expected_mapping = dict((i, v) for i, v in enumerate([0]*6)) assert mapping == expected_mapping assert len(set(mapping.values())) == len(unique) for i, t in enumerate(tables): assert equal_tables(t, unique[mapping[i]], default_tolerance) def test_unique_tables_all_different(): tables = [ np.ones((2,)), np.ones((2, 3)), np.ones((2, 3, 4)), np.ones((2, 3, 4, 5)), ] unique, mapping = build_unique_tables(tables, default_tolerance) expected_unique = tables expected_mapping = dict((i, i) for i in range(len(tables))) assert mapping == expected_mapping assert len(set(mapping.values())) == len(unique) for i, t in enumerate(tables): assert equal_tables(t, unique[mapping[i]], default_tolerance) def test_unique_tables_string_keys(): tables = { 'a': np.zeros((2,)), 'b': np.ones((2,)), 'c': np.zeros((3,)), 'd': np.ones((2,)), 'e': np.ones((2,))*2, 'f': np.ones((2,))*2, } unique, mapping = build_unique_tables(tables, default_tolerance) expected_unique = [ np.zeros((2,)), np.ones((2,)), np.zeros((3,)), # np.ones((2,)), np.ones((2,))*2, # np.ones((2,))*2, ] expected_mapping = { 'a': 0, 'b': 1, 'c': 2, 'd': 1, 'e': 3, 'f': 3 } assert mapping == expected_mapping assert len(set(mapping.values())) == len(unique) for i, t in tables.items(): assert equal_tables(t, unique[mapping[i]], default_tolerance) def xtest_get_ffc_table_values_scalar_cell(): cell = triangle integral_type = "cell" entitytype = "cell" class MockElement: def value_shape(self): return () element = MockElement() component = () avg = None entity = 0 for num_points in (1, 3): for num_dofs in (1, 5): arr = np.ones((num_dofs, num_points)) for derivatives in [(0, 0), (1, 1)]: # Mocking table built by ffc ffc_tables = { num_points: { element: { avg: { # avg entity: { # entityid derivatives: arr } } } } } table = get_ffc_table_values(ffc_tables, cell, integral_type, element, avg, entitytype, derivatives, component, default_tolerance) assert equal_tables(table[0, ...], np.transpose(arr), default_tolerance) def xtest_get_ffc_table_values_vector_facet(): cell = triangle integral_type = "exterior_facet" entitytype = "facet" num_entities = 3 class MockElement: def value_shape(self): return (2,) element = MockElement() num_components = 2 avg = None for num_points in (1, 5): for num_dofs in (4, 7): # Make ones array of the right shape (all dimensions differ to detect algorithm bugs better) arr1 = np.ones((num_dofs, num_components, num_points)) arrays = [] for i in range(num_entities): arr = (i+1.0)*arr1 # Make first digit the facet number (1,2,3) for j in range(num_components): arr[:, j,:] += 0.1*(j+1.0) # Make first decimal the component number (1,2) for j in range(num_dofs): arr[j,:,:] += 0.01*(j+1.0) # Make second decimal the dof number for j in range(num_points): arr[:,:, j] += 0.001*(j+1.0) # Make third decimal the point number arrays.append(arr) for derivatives in [(), (0,)]: # Mocking table built by ffc ffc_tables = { num_points: { element: { avg: { entity: { derivatives: arrays[entity] } for entity in range(num_entities) } } } } # Tables use flattened component, so we can loop over them as integers: for component in range(num_components): table = get_ffc_table_values(ffc_tables, cell, integral_type, element, avg, entitytype, 
derivatives, component, default_tolerance) for i in range(num_entities): # print table[i,...] assert equal_tables(table[i, ...], np.transpose(arrays[i][:, component,:]), default_tolerance) ffc-2019.1.0.post0/test/uflacs/unit/test_ufc_backend.py000066400000000000000000000264141346026536000226320ustar00rootroot00000000000000# -*- coding: utf-8 -*- import pytest import numpy from ffc.uflacs.backends.ufc.generators import * import ffc.uflacs.language.cnodes as L # from ffc.cpp import set_float_formatting # set_float_formatting(precision=8) # TODO: Make this a feature of dijitso: dijitso show-function modulehash functionname def extract_function(name, code): lines = code.split("\n") n = len(lines) begin = None body = None for i in range(n): if (name + "(") in lines[i]: for j in range(i, n): if lines[j] == "{": begin = i body = j break break if begin is None: return "didnt find %s" % (name,) end = n for i in range(body, n): if lines[i] == "}": end = i + 1 break sublines = lines[begin:end] return '\n'.join(sublines) def basic_class_properties(classname): ir = { "classname": classname, "constructor": "", "constructor_arguments": "", "initializer_list": "", "destructor": "", "members": "", "preamble": "", "jit": False, } return ir def mock_parameters(): return {"precision": 8} def mock_form_ir(): ir = basic_class_properties("mock_form_classname") ir.update({ "id": 0, "prefix": "Foo", "signature": "mock form signature", "rank": 2, "num_coefficients": 3, "original_coefficient_position": [0, 2], }) ir.update({ "create_coordinate_finite_element": ["mock_coordinate_finite_element_classname"], "create_coordinate_dofmap": ["mock_coordinate_dofmap_classname"], "create_coordinate_mapping": ["mock_coordinate_mapping_classname"], "create_finite_element": ["mock_finite_element_classname_%d" % (i,) for i in range(ir["num_coefficients"])], "create_dofmap": ["mock_dofmap_classname_%d" % (i,) for i in range(ir["num_coefficients"])], }) # These are the method names in ufc::form that are specialized for each integral type template = "max_%s_subdomain_id" for i, integral_type in enumerate(ufc_integral_types): key = template % integral_type ir[key] = i # just faking some integers template = "has_%s_integrals" for i, integral_type in enumerate(ufc_integral_types): key = template % integral_type ir[key] = (i % 2 == 0) # faking some bools template = "create_%s_integral" for i, integral_type in enumerate(ufc_integral_types): key = template % integral_type subdomain_ids = list(range(i)) classnames = [key.replace("create_", "") + str(j) for j in subdomain_ids] ir[key] = (subdomain_ids, classnames) template = "create_default_%s_integral" for i, integral_type in enumerate(ufc_integral_types): key = template % integral_type classname = key.replace("create_", "") ir[key] = classname return ir def mock_dofmap_ir(): # Note: The mock data here is not necessarily consistent, # it just needs to have the right types to exercise the code generation. 
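    # Layout reminder: entity_dofs[dim][entity] lists the local dofs attached to
    # that entity (three vertices, three edges and one cell here).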
ir = basic_class_properties("mock_dofmap_classname") num_dofs_per_entity = [1, 1, 1] entity_dofs = [[(0,), (1,), (2,)], [(3,), (4,), (5,)], [(6,)]] entity_closure_dofs = { (0, 0): [0], (0, 1): [1], (0, 2): [2], (1, 0): [0, 1], (1, 1): [0, 2], (1, 2): [1, 2], (2, 0): [0, 1, 2], } ir.update({ "signature": "mock element signature", "geometric_dimension": 3, "topological_dimension": 2, "global_dimension": ([3, 2, 1], 4), "tabulate_dofs": ([[[(0,), (1,), (2,)], [(3,), (4,), (5,)], [(6,)]], None], [7, 1], True, [False, True]), "tabulate_facet_dofs": [[0, 1, 2], [1, 2, 3], [0, 2, 3]], "tabulate_entity_dofs": (entity_dofs, num_dofs_per_entity), "tabulate_entity_closure_dofs": (entity_closure_dofs, entity_dofs, num_dofs_per_entity), "needs_mesh_entities": [True, False, True], "num_global_support_dofs": 1, "num_element_support_dofs": 6, "num_element_dofs": 7, "num_entity_dofs": num_dofs_per_entity, "num_entity_closure_dofs": [3, 6, 10], "num_facet_dofs": 7, "num_sub_dofmaps": 3, "create_sub_dofmap": ["mock_dofmap_classname_sub_%d" % (i,) for i in range(3)], }) return ir def mock_evaluate_basis_ir(): dofs_data = [ { "embedded_degree": 5, "num_components": 2, "num_expansion_members": 7, "coeffs": [list(range(20, 27)), list(range(30, 37))], "reference_offset": 3, "physical_offset": 5, "mapping": "affine", "dmats": numpy.zeros([]), # FIXME: Mock dmats data }, { "embedded_degree": 1, "num_components": 2, "num_expansion_members": 7, "coeffs": [list(range(7)), list(range(10, 17))], "reference_offset": 3, "physical_offset": 5, "mapping": "affine", "dmats": numpy.zeros([]), # FIXME: Mock dmats data } ] data = { "cellname": "triangle", "geometric_dimension": 3, "topological_dimension": 2, "reference_value_size": 3, "physical_value_size": 2, "dofs_data": dofs_data, "needs_oriented": True, "space_dimension": 14, "max_degree": 5, } return data @pytest.mark.skipif(True, reason="mock ir for evaluate basis is currently incomplete") def test_mock_evaluate_basis(): from ffc.uflacs.backends.ufc.evaluatebasis import generate_evaluate_reference_basis import ffc.uflacs.language.cnodes as L parameters = mock_parameters() data = mock_evaluate_basis_ir() # FIXME: mock data is currently incomplete code = generate_evaluate_reference_basis(L, data, parameters) print(code) def mock_finite_element_ir(): # Note: The mock data here is not necessarily consistent, # it just needs to have the right types to exercise the code generation. ir = basic_class_properties("mock_finite_element_classname") ebir = mock_evaluate_basis_ir() ir.update({ "signature": "mock element signature", "cell_shape": "mock_cell_shape", "geometric_dimension": 3, "topological_dimension": 2, "degree": 2, "family": "Lagrange", "value_dimension": (3, 3), "reference_value_dimension": (2, 2), "space_dimension": 6, "tabulate_dof_coordinates": {"gdim": 3, "tdim": 2, "points": [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]}, "evaluate_reference_basis": ebir, "evaluate_reference_basis_derivatives": ebir, "evaluate_basis": ebir, "evaluate_basis_derivatives": ebir, "evaluate_basis_all": ebir, "evaluate_basis_derivatives_all": ebir, "evaluate_dof": "fixme", "evaluate_dofs": "fixme", "interpolate_vertex_values": "fixme", "num_sub_elements": 3, "create_sub_element": ["mock_finite_element_classname_sub_%d" % (i,) for i in range(3)], }) return ir def mock_integral_ir(): # Note: The mock data here is not necessarily consistent, # it just needs to have the right types to exercise the code generation. 
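    # The "tabulate_tensor" entry below is only a placeholder statement standing
    # in for the generated body of tabulate_tensor().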
ir = basic_class_properties("mock_integral_classname") ir.update({ "representation": "uflacs", "integral_metadata": [{"dummy": "integral metadata"}], "integrals_metadata": {"dummy": "integrals metadata"}, "prefix": "Foo", "enabled_coefficients": [True, False, True], "tabulate_tensor": " mock_body_of_tabulate_tensor();", "num_cells": 1, }) return ir def mock_coordinate_mapping_ir(): ir = basic_class_properties("mock_coordinate_mapping_classname") ir.update({ "signature": "mock_coordinate_mapping_signature", "cell_shape": "mock_cell_shape", "geometric_dimension": 3, "topological_dimension": 2, "create_coordinate_finite_element": "mock_coordinate_finite_element_classname", "create_coordinate_dofmap": "mock_coordinate_dofmap_classname", "scalar_coordinate_finite_element_classname": "mock_scalar_coordinate_finite_element_classname", "coordinate_element_degree": 2, "num_scalar_coordinate_element_dofs": 5, "tables": {"x0": numpy.ones((5,)), "xm": numpy.ones((5,)), "J0": numpy.ones((2, 5)), "Jm": numpy.ones((2, 5)), } }) return ir def compile_mock_coordinate_mapping(): parameters = mock_parameters() ir = mock_coordinate_mapping_ir() gen = ufc_coordinate_mapping() return gen.generate(L, ir, parameters) def compile_mock_form(): parameters = mock_parameters() ir = mock_form_ir() gen = ufc_form() return gen.generate(L, ir, parameters) def compile_mock_dofmap(): parameters = mock_parameters() ir = mock_dofmap_ir() gen = ufc_dofmap() return gen.generate(L, ir, parameters) def compile_mock_finite_element(): parameters = mock_parameters() ir = mock_finite_element_ir() gen = ufc_finite_element() return gen.generate(L, ir, parameters) def compile_mock_integral(integral_type): parameters = mock_parameters() ir = mock_integral_ir() gen = eval("ufc_%s_integral" % integral_type)() return gen.generate(L, ir, parameters) def compile_mock_all(): mocks = [compile_mock_integral(integral_type) for integral_type in ufc_integral_types] mocks += [compile_mock_form(), compile_mock_dofmap(), compile_mock_finite_element()] return '\n\n'.join(mocks) def test_mock_coordinate_mapping(): h, cpp = compile_mock_coordinate_mapping() print(h) print(cpp) def test_mock_form(): h, cpp = compile_mock_form() print(h) print(cpp) def test_mock_dofmap(): h, cpp = compile_mock_dofmap() print(h) print(cpp) @pytest.mark.skipif(True, reason="mock ir for evaluate basis is currently incomplete") def test_mock_finite_element(): h, cpp = compile_mock_finite_element() print(h) print(cpp) def test_mock_integral(): for integral_type in ufc_integral_types: h, cpp = compile_mock_integral(integral_type) print(h) print(cpp) def test_foo_integral_properties(): parameters = mock_parameters() ir = mock_form_ir() assert "cell_integral" in ufc_form.create_cell_integral.__doc__ assert "return" in str(ufc_form().create_cell_integral(L, ir, parameters)) def test_mock_extract_function(): h, cpp = compile_mock_coordinate_mapping() name = "compute_reference_coordinates" print("/// Extracted", name, ":") print("/// begin") print(extract_function(name, cpp)) print("/// end") def test_debug_by_printing_extracted_function(): h, cpp = compile_mock_coordinate_mapping() # name = "compute_reference_coordinates" # name = "compute_physical_coordinates" # name = "compute_jacobians" name = "compute_jacobian_inverses" print("/// Extracted", name, ":") print("/// begin") print(extract_function(name, cpp)) print("/// end") """ Missing: finite_element: evaluate_basis* evaluate_dof interpolate_vertex_values Ok? 
tabulate_dof_coordinates integrals: tabulate_tensor all: everything with classnames """ ffc-2019.1.0.post0/test/uflacs/unit/test_ufl_to_cnodes.py000066400000000000000000000056761346026536000232400ustar00rootroot00000000000000# -*- coding: utf-8 -*- import ufl from ufl import * from ufl import as_ufl import ffc.uflacs.language from ffc.uflacs.language.ufl_to_cnodes import UFL2CNodesTranslatorCpp, UFL2CNodesTranslatorC def test_ufl_to_cnodes(): L = ffc.uflacs.language.cnodes translate = UFL2CNodesTranslatorCpp(L) f = ufl.CellVolume(ufl.triangle) g = ufl.CellVolume(ufl.triangle) h = ufl.CellVolume(ufl.triangle) x = L.Symbol("x") y = L.Symbol("y") z = L.Symbol("z") examples = [ (f + g, (x, y), "x + y"), # (f-g, (x,y), "x - y"), # - is not currently a UFL operator (f * g, (x, y), "x * y"), (f / g, (x, y), "x / y"), (f**g, (x, y), "std::pow(x, y)"), (exp(f), (x,), "std::exp(x)"), (ln(f), (x,), "std::log(x)"), (abs(f), (x,), "std::abs(x)"), (sqrt(f), (x,), "std::sqrt(x)"), #(cbrt(f), (x,), "std::cbrt(x)"), (sin(f), (x,), "std::sin(x)"), (cos(f), (x,), "std::cos(x)"), (tan(f), (x,), "std::tan(x)"), (asin(f), (x,), "std::asin(x)"), (acos(f), (x,), "std::acos(x)"), (atan(f), (x,), "std::atan(x)"), (sinh(f), (x,), "std::sinh(x)"), (cosh(f), (x,), "std::cosh(x)"), (tanh(f), (x,), "std::tanh(x)"), (erf(f), (x,), "std::erf(x)"), #(erfc(f), (x,), "std::erfc(x)"), (min_value(f, g), (x, y), "std::min(x, y)"), (max_value(f, g), (x, y), "std::max(x, y)"), (bessel_I(1, g), (x, y), "boost::math::cyl_bessel_i(x, y)"), (bessel_J(1, g), (x, y), "boost::math::cyl_bessel_j(x, y)"), (bessel_K(1, g), (x, y), "boost::math::cyl_bessel_k(x, y)"), (bessel_Y(1, g), (x, y), "boost::math::cyl_neumann(x, y)"), (f < g, (x, y), "x < y"), (f > g, (x, y), "x > y"), (f <= g, (x, y), "x <= y"), (f >= g, (x, y), "x >= y"), (eq(f, g), (x, y), "x == y"), (ne(f, g), (x, y), "x != y"), (And(f < g, f > g), (x, y), "x && y"), (Or(f < g, f > g), (x, y), "x || y"), (Not(f < g), (x,), "!x"), (conditional(f < g, g, h), (x, y, z), "x ? y : z"), ] for expr, args, code in examples: # Warning: This test is subtle: translate will look at the type of expr and # ignore its operands, i.e. not translating the full tree but only one level. assert str(translate(expr, *args)) == code # C specific translation: translate = UFL2CNodesTranslatorC(L) examples = [ (sin(f), (x,), "sin(x)"), (f**g, (x, y), "pow(x, y)"), (exp(f), (x,), "exp(x)"), (abs(f), (x,), "fabs(x)"), (min_value(f, g), (x, y), "fmin(x, y)"), (max_value(f, g), (x, y), "fmax(x, y)"), ] for expr, args, code in examples: # Warning: This test is subtle: translate will look at the type of expr and # ignore its operands, i.e. not translating the full tree but only one level. 
assert str(translate(expr, *args)) == code ffc-2019.1.0.post0/test/uflacs/unit/test_valuenumbering.py000066400000000000000000000057471346026536000234370ustar00rootroot00000000000000# -*- coding: utf-8 -*- import pytest from ufl import * from ffc.uflacs.analysis.indexing import map_indexed_arg_components from ffc.uflacs.analysis.indexing import map_component_tensor_arg_components def test_map_index_arg_components(): x = SpatialCoordinate(triangle) i, j, k = indices(3) # Tensors of rank 1, 2, 3 A1 = x A2 = outer(x, x) A3 = outer(A2, x) indexed = A1[i] assert map_indexed_arg_components(indexed) == [0, 1] # Rank 2 indexed = A2[i, j] assert map_indexed_arg_components(indexed) == [0, 1, 2, 3] indexed = A2[j, i] assert map_indexed_arg_components(indexed) == [0, 2, 1, 3] # Rank 3 indexed = A3[i, j, k] assert map_indexed_arg_components(indexed) == [0, 1, 2, 3, 4, 5, 6, 7] indexed = A3[j, i, k] assert map_indexed_arg_components(indexed) == [0, 1, 4, 5, 2, 3, 6, 7] indexed = A3[i, k, j] assert map_indexed_arg_components(indexed) == [0, 2, 1, 3, 4, 6, 5, 7] indexed = A3[j, k, i] assert map_indexed_arg_components(indexed) == [0, 2, 4, 6, 1, 3, 5, 7] indexed = A3[k, i, j] assert map_indexed_arg_components(indexed) == [0, 4, 1, 5, 2, 6, 3, 7] indexed = A3[k, j, i] assert map_indexed_arg_components(indexed) == [0, 4, 2, 6, 1, 5, 3, 7] # Explanation: assert [ii*4+jj*2+kk # strides are determined by relative position of i,j,k for kk in range(2) # loop order matches indexing order for jj in range(2) # loop range matches index dimensions for ii in range(2)] == [0, 4, 2, 6, 1, 5, 3, 7] def test_map_component_tensor_arg_components(): x = SpatialCoordinate(triangle) i, j, k = indices(3) # Tensors of rank 1, 2, 3 A1 = x A2 = outer(x, x) A3 = outer(A2, x) Aij = A2[i,j] # Rank 1 assert map_component_tensor_arg_components((x[i]*2.0)^(i,)) == [0,1] # Rank 2 assert map_component_tensor_arg_components((A2[i,j]*2.0)^(i,j)) == [0,1,2,3] assert map_component_tensor_arg_components((A2[i,j]*2.0)^(j,i)) == [0,2,1,3] assert map_component_tensor_arg_components(A2[i,j]^(j,i)) == [0,2,1,3] # Rank 3 assert map_component_tensor_arg_components((A3[i,j,k]*2.0)^(i,j,k)) == [0,1,2,3,4,5,6,7] assert map_component_tensor_arg_components((A3[i,j,k]*2.0)^(i,k,j)) == [0,2,1,3,4,6,5,7] assert map_component_tensor_arg_components((A3[i,j,k]*2.0)^(k,i,j)) == [0,2,4,6,1,3,5,7] # Explanation: assert [ii*4+jj*2+kk # strides are determined by relative order of i,j,k in indexed expr for kk in range(2) # loop order matches un-indexing order in component tensor for ii in range(2) # loop range matches index dimensions for jj in range(2)] == [0,2,4,6,1,3,5,7] ffc-2019.1.0.post0/test/uflacs/unit/xtest_latex_formatting.py000066400000000000000000000217611346026536000241450ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of LaTeX formatting rules. """ from ffc.log import ffc_assert, info, warning, error from ffc.uflacs.codeutils.expr_formatter import ExprFormatter from ffc.uflacs.codeutils.latex_expr_formatting_rules import LatexFormatter import ufl from ufl.algorithms import preprocess_expression, expand_indices def expr2latex(expr, variables=None): "This is a test specific function for formatting ufl to LaTeX." # Preprocessing expression before applying formatting. In a # compiler, one should probably assume that these have been # applied and use ExprFormatter directly. 
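    # expand_indices (applied below) reduces the preprocessed expression to
    # scalar components, which is what the per-operator LaTeX rules expect.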
expr_data = preprocess_expression(expr) expr = expand_indices(expr_data.preprocessed_expr) # This formatter is a multifunction with single operator # formatting rules for generic LaTeX formatting latex_formatter = LatexFormatter() # This final formatter implements a generic framework handling # indices etc etc. variables = variables or {} expr_formatter = ExprFormatter(latex_formatter, variables) code = expr_formatter.visit(expr) return code def assertLatexEqual(self, expr, code, variables=None): r = expr2latex(expr, variables) self.assertEqual(code, r) def test_latex_formatting_of_literals(): # Test literals assert expr2latex(ufl.as_ufl(2)) == "2" assert expr2latex(ufl.as_ufl(3.14)) == '3.14' assert expr2latex(ufl.as_ufl(0)) == "0" # These are actually converted to int before formatting: assert expr2latex(ufl.Identity(2)[0, 0]) == "1" assert expr2latex(ufl.Identity(2)[0, 1]) == "0" assert expr2latex(ufl.Identity(2)[1, 0]) == "0" assert expr2latex(ufl.Identity(2)[1, 1]) == "1" assert expr2latex(ufl.PermutationSymbol(3)[1, 2, 3]) == "1" assert expr2latex(ufl.PermutationSymbol(3)[2, 1, 3]) == "-1" assert expr2latex(ufl.PermutationSymbol(3)[1, 1, 3]) == "0" def test_latex_formatting_of_geometry(): # Test geometry quantities x = ufl.SpatialCoordinate(ufl.interval)[0] assert expr2latex(x) == "x_0" x, y = ufl.SpatialCoordinate(ufl.triangle) assert expr2latex(x) == "x_0" assert expr2latex(y) == "x_1" nx, ny = ufl.FacetNormal(ufl.triangle) assert expr2latex(nx) == "n_0" assert expr2latex(ny) == "n_1" Kv = ufl.CellVolume(ufl.triangle) assert expr2latex(Kv) == r"K_{\text{vol}}" Kr = ufl.Circumradius(ufl.triangle) assert expr2latex(Kr) == r"K_{\text{rad}}" def test_latex_formatting_of_form_arguments(): # Test form arguments (faked for testing!) U = ufl.FiniteElement("CG", ufl.triangle, 1) V = ufl.VectorElement("CG", ufl.triangle, 1) W = ufl.TensorElement("CG", ufl.triangle, 1) v = ufl.TestFunction(U) assert expr2latex(v) == r"\overset{0}{v}" f = ufl.Coefficient(U, count=0) assert expr2latex(f) == r"\overset{0}{w}" f = ufl.Coefficient(V, count=1) assert expr2latex(f[0]) == r"\overset{1}{w}_{0}" # NOT renumbered to 0... v = ufl.Argument(V, number=3) assert expr2latex(v[1]) == r"\overset{3}{v}_{1}" # NOT renumbered to 0... f = ufl.Coefficient(W, count=2) assert expr2latex(f[1, 0]) == r"\overset{2}{w}_{1 0}" # NOT renumbered to 0... v = ufl.Argument(W, number=3) assert expr2latex(v[0, 1]) == r"\overset{3}{v}_{0 1}" # NOT renumbered to 0... # TODO: Test mixed functions # TODO: Test tensor functions with symmetries def test_latex_formatting_of_arithmetic(): x = ufl.SpatialCoordinate(ufl.triangle)[0] assert expr2latex(x + 3) == "3 + x_0" assert expr2latex(x * 2) == "2 x_0" assert expr2latex(x / 2) == r"\frac{x_0}{2}" assert expr2latex(x * x) == r"{x_0}^{2}" # TODO: Will gcc optimize this to x*x for us? 
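    # Both the base and the exponent are wrapped in braces in the emitted LaTeX.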
assert expr2latex(x**3) == r"{x_0}^{3}" # TODO: Test all basic operators def test_latex_formatting_of_cmath(): x = ufl.SpatialCoordinate(ufl.triangle)[0] assert expr2latex(ufl.exp(x)) == r"e^{x_0}" assert expr2latex(ufl.ln(x)) == r"\ln(x_0)" assert expr2latex(ufl.sqrt(x)) == r"\sqrt{x_0}" assert expr2latex(abs(x)) == r"\|x_0\|" assert expr2latex(ufl.sin(x)) == r"\sin(x_0)" assert expr2latex(ufl.cos(x)) == r"\cos(x_0)" assert expr2latex(ufl.tan(x)) == r"\tan(x_0)" assert expr2latex(ufl.asin(x)) == r"\arcsin(x_0)" assert expr2latex(ufl.acos(x)) == r"\arccos(x_0)" assert expr2latex(ufl.atan(x)) == r"\arctan(x_0)" def test_latex_formatting_of_derivatives(): xx = ufl.SpatialCoordinate(ufl.triangle) x = xx[0] # Test derivatives of basic operators assert expr2latex(x.dx(0)) == "1" assert expr2latex(x.dx(1)) == "0" assert expr2latex(ufl.grad(xx)[0, 0]) == "1" assert expr2latex(ufl.grad(xx)[0, 1]) == "0" assert expr2latex(ufl.sin(x).dx(0)) == r"\cos(x_0)" # Test derivatives of form arguments V = ufl.FiniteElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=0) assert expr2latex(f.dx(0)) == r"\overset{0}{w}_{, 0}" v = ufl.Argument(V, number=3) assert expr2latex(v.dx(1)) == r"\overset{3}{v}_{, 1}" # TODO: Test more derivatives # TODO: Test variable derivatives using diff def xtest_latex_formatting_of_conditionals(): # Test conditional expressions assert expr2latex(ufl.conditional(ufl.lt(x, 2), y, 3)) == "x_0 < 2 ? x_1: 3" assert expr2latex(ufl.conditional(ufl.gt(x, 2), 4 + y, 3)) == "x_0 > 2 ? 4 + x_1: 3" assert expr2latex(ufl.conditional(ufl.And(ufl.le(x, 2), ufl.ge(y, 4)), 7, 8)) == "x_0 <= 2 && x_1 >= 4 ? 7: 8" assert expr2latex(ufl.conditional(ufl.Or(ufl.eq(x, 2), ufl.ne(y, 4)), 7, 8)) == "x_0 == 2 || x_1 != 4 ? 7: 8" # TODO: Some tests of nested conditionals with correct precedences? def test_latex_formatting_precedence_handling(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test precedence handling with sums # Note that the automatic sorting is reflected in formatting! assert expr2latex(y + (2 + x)) == "x_1 + (2 + x_0)" assert expr2latex((x + 2) + y) == "x_1 + (2 + x_0)" assert expr2latex((2 + x) + (3 + y)) == "(2 + x_0) + (3 + x_1)" assert expr2latex((x + 3) + 2 + y) == "x_1 + (2 + (3 + x_0))" assert expr2latex(2 + (x + 3) + y) == "x_1 + (2 + (3 + x_0))" assert expr2latex(2 + (3 + x) + y) == "x_1 + (2 + (3 + x_0))" assert expr2latex(y + (2 + (3 + x))) == "x_1 + (2 + (3 + x_0))" assert expr2latex(2 + x + 3 + y) == "x_1 + (3 + (2 + x_0))" assert expr2latex(2 + x + 3 + y) == "x_1 + (3 + (2 + x_0))" # Test precedence handling with divisions # This is more stable than sums since there is no sorting. 
assert expr2latex((x / 2) / 3) == r"\frac{(\frac{x_0}{2})}{3}" assert expr2latex(x / (y / 3)) == r"\frac{x_0}{(\frac{x_1}{3})}" assert expr2latex((x / 2) / (y / 3)) == r"\frac{(\frac{x_0}{2})}{(\frac{x_1}{3})}" assert expr2latex(x / (2 / y) / 3) == r"\frac{(\frac{x_0}{(\frac{2}{x_1})})}{3}" # Test precedence handling with highest level types assert expr2latex(ufl.sin(x)) == r"\sin(x_0)" assert expr2latex(ufl.cos(x + 2)) == r"\cos(2 + x_0)" assert expr2latex(ufl.tan(x / 2)) == r"\tan(\frac{x_0}{2})" assert expr2latex(ufl.acos(x + 3 * y)) == r"\arccos(x_0 + 3 x_1)" assert expr2latex(ufl.asin(ufl.atan(x**4))) == r"\arcsin(\arctan({x_0}^{4}))" assert expr2latex(ufl.sin(y) + ufl.tan(x)) == r"\sin(x_1) + \tan(x_0)" # Test precedence handling with mixed types assert expr2latex(3 * (2 + x)) == "3 (2 + x_0)" assert expr2latex((2 * x) + (3 * y)) == "2 x_0 + 3 x_1" assert expr2latex(2 * (x + 3) * y) == "x_1 (2 (3 + x_0))" def _fixme(): assert expr2latex(2 * (x + 3)**4 * y) == "x_1 (2 {(3 + x_0)}^{4)" # FIXME: Precedence handling fails here # TODO: More tests covering all types and more combinations! def test_latex_formatting_of_variables(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test user-provided C variables for subexpressions # we can use variables for x[0], and sum, and power assert expr2latex(x**2 + y**2, variables={x**2: 'x2', y**2: 'y2'}) == "x2 + y2" assert expr2latex(x**2 + y**2, variables={x: 'z', y**2: 'y2'}) == r"{z}^{2} + y2" assert expr2latex(x**2 + y**2, variables={x**2 + y**2: 'q'}) == "q" # we can use variables in conditionals if 0: assert expr2latex(ufl.conditional(ufl.Or(ufl.eq(x, 2), ufl.ne(y, 4)), 7, 8), variables={ufl.eq(x, 2): 'c1', ufl.ne(y, 4): 'c2'}) == "c1 || c2 ? 7: 8" # we can replace coefficients (formatted by user provided code) V = ufl.FiniteElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=0) assert expr2latex(f, variables={f: 'f'}) == "f" assert expr2latex(f**3, variables={f: 'f'}) == r"{f}^{3}" # variables do not replace derivatives of variable expressions assert expr2latex(f.dx(0), variables={f: 'f'}) == r"\overset{0}{w}_{, 0}" # variables do replace variable expressions that are themselves derivatives assert expr2latex(f.dx(0), variables={f.dx(0): 'df', ufl.grad(f)[0]: 'df'}) == "df" assert expr2latex(ufl.grad(f)[0], variables={f.dx(0): 'df', ufl.grad(f)[0]: 'df'}) == "df" # TODO: Test variables in more situations with indices and derivatives # TODO: Test various compound operators ffc-2019.1.0.post0/test/uflacs/unit/xtest_ufl_shapes_and_indexing.py000066400000000000000000000065501346026536000254350ustar00rootroot00000000000000# -*- coding: utf-8 -*- """Tests of utilities for dealing with ufl indexing and components vs flattened index spaces. 
""" from ufl import * from ufl import product from ufl.permutation import compute_indices from ffc.uflacs.analysis.indexing import (map_indexed_arg_components, map_component_tensor_arg_components) from ffc.uflacs.analysis.graph_symbols import (map_list_tensor_symbols, map_transposed_symbols, get_node_symbols) from ffc.uflacs.analysis.graph import build_graph from operator import eq as equal def test_map_indexed_arg_components(): W = TensorElement("CG", triangle, 1) A = Coefficient(W) i, j = indices(2) # Ordered indices: d = map_indexed_arg_components(A[i, j]) assert equal(d, [0, 1, 2, 3]) # Swapped ordering of indices: d = map_indexed_arg_components(A[j, i]) assert equal(d, [0, 2, 1, 3]) def test_map_indexed_arg_components2(): # This was the previous return type, copied here to preserve the # test without having to rewrite def map_indexed_arg_components2(Aii): c1, c2 = map_indexed_to_arg_components(Aii) d = [None] * len(c1) for k in range(len(c1)): d[c1[k]] = k return d W = TensorElement("CG", triangle, 1) A = Coefficient(W) i, j = indices(2) # Ordered indices: d = map_indexed_arg_components2(A[i, j]) assert equal(d, [0, 1, 2, 3]) # Swapped ordering of indices: d = map_indexed_arg_components2(A[j, i]) assert equal(d, [0, 2, 1, 3]) def test_map_componenttensor_arg_components(): W = TensorElement("CG", triangle, 1) A = Coefficient(W) i, j = indices(2) # Ordered indices: d = map_component_tensor_arg_components(as_tensor(2 * A[i, j], (i, j))) assert equal(d, [0, 1, 2, 3]) # Swapped ordering of indices: d = map_component_tensor_arg_components(as_tensor(2 * A[i, j], (j, i))) assert equal(d, [0, 2, 1, 3]) def test_map_list_tensor_symbols(): U = FiniteElement("CG", triangle, 1) u = Coefficient(U) A = as_tensor(((u + 1, u + 2, u + 3), (u**2 + 1, u**2 + 2, u**2 + 3))) # Would be nicer to refactor build_graph a bit so we could call # map_list_tensor_symbols directly... G = build_graph([A], DEBUG=False) s1 = list(get_node_symbols(A, G.e2i, G.V_symbols)) s2 = [get_node_symbols(e, G.e2i, G.V_symbols)[0] for e in (u + 1, u + 2, u + 3, u**2 + 1, u**2 + 2, u**2 + 3)] assert s1 == s2 def test_map_transposed_symbols(): W = TensorElement("CG", triangle, 1) w = Coefficient(W) A = w.T # Would be nicer to refactor build_graph a bit so we could call # map_transposed_symbols directly... G = build_graph([A], DEBUG=False) s1 = list(get_node_symbols(A, G.e2i, G.V_symbols)) s2 = list(get_node_symbols(w, G.e2i, G.V_symbols)) s2[1], s2[2] = s2[2], s2[1] assert s1 == s2 W = TensorElement("CG", tetrahedron, 1) w = Coefficient(W) A = w.T # Would be nicer to refactor build_graph a bit so we could call # map_transposed_symbols directly... G = build_graph([A], DEBUG=False) s1 = list(get_node_symbols(A, G.e2i, G.V_symbols)) s2 = list(get_node_symbols(w, G.e2i, G.V_symbols)) s2[1], s2[2], s2[5], s2[3], s2[6], s2[7] = s2[3], s2[6], s2[7], s2[1], s2[2], s2[5] assert s1 == s2 ffc-2019.1.0.post0/test/uflacs/unit/xtest_ufl_to_cpp_formatting.py000066400000000000000000000224601346026536000251570ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Tests of ufl to C++ expression formatting rules. These rules can format a subset of the UFL expression language directly to C++. The subset is currently not clearly defined, but it's basically only scalar operations and the set of operators in UFL that coincide with C operators and cmath functions. 
""" import ufl from ufl.constantvalue import as_ufl from ufl.algorithms import preprocess_expression, expand_indices from ffc.uflacs.codeutils.cpp_expr_formatting_rules import CppFormattingRules from ffc.uflacs.codeutils.expr_formatter import ExprFormatter def expr2cpp(expr, variables=None): "This is a test specific function for formatting ufl to C++." # Preprocessing expression before applying formatting. # In a compiler, one should probably assume that these # have been applied and use ExprFormatter directly. expr_data = preprocess_expression(expr) # Applying expand_indices instead of going through # compiler algorithm to get scalar valued expressions # that can be directlly formatted into C++ expr = expand_indices(expr_data.preprocessed_expr) # This formatter is a multifunction implementing target # specific C++ formatting rules cpp_formatter = ToyCppLanguageFormatter(dh, ir) # This final formatter implements a generic framework handling indices etc etc. expr_formatter = ExprFormatter(cpp_formatter, variables or {}) # Finally perform the formatting code = expr_formatter.visit(expr) return code def test_cpp_formatting_of_literals(): # Test literals assert expr2cpp(ufl.as_ufl(2)) == "2" assert expr2cpp(ufl.as_ufl(3.14)) == '3.14' assert expr2cpp(ufl.as_ufl(0)) == "0" # These are actually converted to int before formatting: assert expr2cpp(ufl.Identity(2)[0, 0]) == "1" assert expr2cpp(ufl.Identity(2)[0, 1]) == "0" assert expr2cpp(ufl.Identity(2)[1, 0]) == "0" assert expr2cpp(ufl.Identity(2)[1, 1]) == "1" assert expr2cpp(ufl.PermutationSymbol(3)[1, 2, 3]) == "1" assert expr2cpp(ufl.PermutationSymbol(3)[2, 1, 3]) == "-1" assert expr2cpp(ufl.PermutationSymbol(3)[1, 1, 3]) == "0" def test_cpp_formatting_of_geometry(): # Test geometry quantities (faked for testing!) x = ufl.SpatialCoordinate(ufl.interval)[0] assert expr2cpp(x) == "x[0]" x, y = ufl.SpatialCoordinate(ufl.triangle) assert expr2cpp(x) == "x[0]" assert expr2cpp(y) == "x[1]" nx, ny = ufl.FacetNormal(ufl.triangle) assert expr2cpp(nx) == "n[0]" assert expr2cpp(ny) == "n[1]" Kv = ufl.CellVolume(ufl.triangle) assert expr2cpp(Kv) == "volume" Kh = ufl.CellDiameter(ufl.triangle) assert expr2cpp(Kh) == "diameter" Kr = ufl.Circumradius(ufl.triangle) assert expr2cpp(Kr) == "circumradius" def test_cpp_formatting_of_form_arguments(): # Test form arguments (faked for testing!) V = ufl.FiniteElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=0) assert expr2cpp(f) == "w0" v = ufl.TestFunction(V) assert expr2cpp(v) == "v0" V = ufl.VectorElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=1) assert expr2cpp(f[0]) == "w0[0]" # Renumbered to 0... v = ufl.Argument(V, number=3) assert expr2cpp(v[1]) == "v3[1]" # NOT renumbered to 0... V = ufl.TensorElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=2) assert expr2cpp(f[1, 0]) == "w0[1][0]" # Renumbered to 0... v = ufl.Argument(V, number=3) assert expr2cpp(v[0, 1]) == "v3[0][1]" # NOT renumbered to 0... # TODO: Test mixed functions # TODO: Test tensor functions with symmetries def test_cpp_formatting_of_arithmetic(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test basic arithmetic operators assert expr2cpp(x + 3) == "3 + x[0]" assert expr2cpp(x * 2) == "2 * x[0]" assert expr2cpp(x / 2) == "x[0] / 2" assert expr2cpp(x * x) == "pow(x[0], 2)" # TODO: Will gcc optimize this to x*x for us? 
assert expr2cpp(x**3) == "pow(x[0], 3)" # TODO: Test all basic operators def test_cpp_formatting_of_cmath(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test cmath functions assert expr2cpp(ufl.exp(x)) == "exp(x[0])" assert expr2cpp(ufl.ln(x)) == "log(x[0])" assert expr2cpp(ufl.sqrt(x)) == "sqrt(x[0])" assert expr2cpp(abs(x)) == "fabs(x[0])" assert expr2cpp(ufl.sin(x)) == "sin(x[0])" assert expr2cpp(ufl.cos(x)) == "cos(x[0])" assert expr2cpp(ufl.tan(x)) == "tan(x[0])" assert expr2cpp(ufl.asin(x)) == "asin(x[0])" assert expr2cpp(ufl.acos(x)) == "acos(x[0])" assert expr2cpp(ufl.atan(x)) == "atan(x[0])" def test_cpp_formatting_of_derivatives(): xx = ufl.SpatialCoordinate(ufl.triangle) x, y = xx # Test derivatives of basic operators assert expr2cpp(x.dx(0)) == "1" assert expr2cpp(x.dx(1)) == "0" assert expr2cpp(ufl.grad(xx)[0, 0]) == "1" assert expr2cpp(ufl.grad(xx)[0, 1]) == "0" assert expr2cpp(ufl.sin(x).dx(0)) == "cos(x[0])" # Test derivatives of target specific test fakes V = ufl.FiniteElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=0) assert expr2cpp(f.dx(0)) == "d1_w0[0]" v = ufl.Argument(V, number=3) assert expr2cpp(v.dx(1)) == "d1_v3[1]" # NOT renumbered to 0... # TODO: Test more derivatives # TODO: Test variable derivatives using diff def test_cpp_formatting_of_conditionals(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test conditional expressions assert expr2cpp(ufl.conditional(ufl.lt(x, 2), y, 3)) \ == "x[0] < 2 ? x[1]: 3" assert expr2cpp(ufl.conditional(ufl.gt(x, 2), 4 + y, 3)) \ == "x[0] > 2 ? 4 + x[1]: 3" assert expr2cpp(ufl.conditional(ufl.And(ufl.le(x, 2), ufl.ge(y, 4)), 7, 8)) \ == "x[0] <= 2 && x[1] >= 4 ? 7: 8" assert expr2cpp(ufl.conditional(ufl.Or(ufl.eq(x, 2), ufl.ne(y, 4)), 7, 8)) \ == "x[0] == 2 || x[1] != 4 ? 7: 8" # TODO: Some tests of nested conditionals with correct precedences? def test_cpp_formatting_precedence_handling(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test precedence handling with sums # Note that the automatic sorting is reflected in formatting! assert expr2cpp(y + (2 + x)) == "x[1] + (2 + x[0])" assert expr2cpp((x + 2) + y) == "x[1] + (2 + x[0])" assert expr2cpp((2 + x) + (3 + y)) == "(2 + x[0]) + (3 + x[1])" assert expr2cpp((x + 3) + 2 + y) == "x[1] + (2 + (3 + x[0]))" assert expr2cpp(2 + (x + 3) + y) == "x[1] + (2 + (3 + x[0]))" assert expr2cpp(2 + (3 + x) + y) == "x[1] + (2 + (3 + x[0]))" assert expr2cpp(y + (2 + (3 + x))) == "x[1] + (2 + (3 + x[0]))" assert expr2cpp(2 + x + 3 + y) == "x[1] + (3 + (2 + x[0]))" assert expr2cpp(2 + x + 3 + y) == "x[1] + (3 + (2 + x[0]))" # Test precedence handling with divisions # This is more stable than sums since there is no sorting. 
assert expr2cpp((x / 2) / 3) == "(x[0] / 2) / 3" assert expr2cpp(x / (y / 3)) == "x[0] / (x[1] / 3)" assert expr2cpp((x / 2) / (y / 3)) == "(x[0] / 2) / (x[1] / 3)" assert expr2cpp(x / (2 / y) / 3) == "(x[0] / (2 / x[1])) / 3" # Test precedence handling with highest level types assert expr2cpp(ufl.sin(x)) == "sin(x[0])" assert expr2cpp(ufl.cos(x + 2)) == "cos(2 + x[0])" assert expr2cpp(ufl.tan(x / 2)) == "tan(x[0] / 2)" assert expr2cpp(ufl.acos(x + 3 * y)) == "acos(x[0] + 3 * x[1])" assert expr2cpp(ufl.asin(ufl.atan(x**4))) == "asin(atan(pow(x[0], 4)))" assert expr2cpp(ufl.sin(y) + ufl.tan(x)) == "sin(x[1]) + tan(x[0])" # Test precedence handling with mixed types assert expr2cpp(3 * (2 + x)) == "3 * (2 + x[0])" assert expr2cpp((2 * x) + (3 * y)) == "2 * x[0] + 3 * x[1]" assert expr2cpp(2 * (x + 3) * y) == "x[1] * (2 * (3 + x[0]))" assert expr2cpp(2 * (x + 3)**4 * y) == "x[1] * (2 * pow(3 + x[0], 4))" # TODO: More tests covering all types and more combinations! def test_cpp_formatting_with_variables(): x, y = ufl.SpatialCoordinate(ufl.triangle) # Test user-provided C variables for subexpressions # we can use variables for x[0], and sum, and power assert expr2cpp(x**2 + y**2, variables={x**2: 'x2', y**2: 'y2'}) == "x2 + y2" assert expr2cpp(x**2 + y**2, variables={x: 'z', y**2: 'y2'}) == "pow(z, 2) + y2" assert expr2cpp(x**2 + y**2, variables={x**2 + y**2: 'q'}) == "q" # we can use variables in conditionals assert expr2cpp(ufl.conditional(ufl.Or(ufl.eq(x, 2), ufl.ne(y, 4)), 7, 8), variables={ufl.eq(x, 2): 'c1', ufl.ne(y, 4): 'c2'}) == "c1 || c2 ? 7: 8" # we can replace coefficients (formatted by user provided code) V = ufl.FiniteElement("CG", ufl.triangle, 1) f = ufl.Coefficient(V, count=0) assert expr2cpp(f, variables={f: 'f'}) == "f" assert expr2cpp(f**3, variables={f: 'f'}) == "pow(f, 3)" # variables do not replace derivatives of variable expressions assert expr2cpp(f.dx(0), variables={f: 'f'}) == "d1_w0[0]" # This test depends on which differentiation algorithm is in use # in UFL, representing derivatives as SpatialDerivative or Grad: # variables do replace variable expressions that are themselves derivatives assert expr2cpp(f.dx(0), variables={f.dx(0): 'df', ufl.grad(f)[0]: 'df'}) == "df" assert expr2cpp(ufl.grad(f)[0], variables={f.dx(0): 'df', ufl.grad(f)[0]: 'df'}) == "df" # TODO: Test variables in more situations with indices and derivatives # TODO: Test various compound operators ffc-2019.1.0.post0/test/unit/000077500000000000000000000000001346026536000155115ustar00rootroot00000000000000ffc-2019.1.0.post0/test/unit/misc/000077500000000000000000000000001346026536000164445ustar00rootroot00000000000000ffc-2019.1.0.post0/test/unit/misc/test_elements.py000066400000000000000000000264341346026536000217020ustar00rootroot00000000000000# -*- coding: utf-8 -*- "Unit tests for FFC" # Copyright (C) 2007-2017 Anders Logg and Garth N. Wells # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . # # Modified by Marie E. 
Rognes, 2010 # Modified by Lizao Li, 2016 import pytest import os import sys import numpy import math from time import time from ufl import * from ffc.fiatinterface import create_element from ffc import jit def element_coords(cell): if cell == "interval": return [(0,), (1,)] elif cell == "triangle": return [(0, 0), (1, 0), (0, 1)] elif cell == "tetrahedron": return [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)] elif cell == "quadrilateral": return [(0, 0), (1, 0), (0, 1), (1, 1)] elif cell == "hexahedron": return [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1)] else: RuntimeError("Unknown cell type") def random_point(shape): w = numpy.random.random(len(shape)) return sum([numpy.array(shape[i])*w[i] for i in range(len(shape))])/sum(w) @pytest.mark.parametrize("degree, expected_dim", [(1, 3), (2, 6), (3, 10)]) def test_continuous_lagrange(degree, expected_dim): "Test space dimensions of continuous Lagrange elements." P = create_element(FiniteElement("Lagrange", "triangle", degree)) assert P.space_dimension() == expected_dim @pytest.mark.parametrize("degree, expected_dim", [(1, 4), (2, 9), (3, 16)]) def test_continuous_lagrange_quadrilateral(degree, expected_dim): "Test space dimensions of continuous TensorProduct elements (quadrilateral)." P = create_element(FiniteElement("Lagrange", "quadrilateral", degree)) assert P.space_dimension() == expected_dim @pytest.mark.parametrize("degree, expected_dim", [(0, 1), (1, 3), (2, 6), (3, 10)]) def test_discontinuous_lagrange(degree, expected_dim): "Test space dimensions of discontinuous Lagrange elements." P = create_element(FiniteElement("DG", "triangle", degree)) assert P.space_dimension() == expected_dim @pytest.mark.parametrize("degree, expected_dim", [(0, 3), (1, 9), (2, 18), (3, 30)]) def test_regge(degree, expected_dim): "Test space dimensions of generalized Regge element." P = create_element(FiniteElement("Regge", "triangle", degree)) assert P.space_dimension() == expected_dim @pytest.mark.parametrize("degree, expected_dim", [(0, 3), (1, 9), (2, 18), (3, 30)]) def test_hhj(degree, expected_dim): "Test space dimensions of Hellan-Herrmann-Johnson element." P = create_element(FiniteElement("HHJ", "triangle", degree)) assert P.space_dimension() == expected_dim class TestFunctionValues(): """These tests examine tabulate gives the correct answers for a the supported (non-mixed) for low degrees""" # FIXME: Add tests for NED and BDM/RT in 3D. 
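    # The reference_* tables below give closed-form expressions for the basis
    # functions of each element on its reference cell, written as plain Python
    # lambdas (scalar elements return a number, vector valued elements a
    # tuple).  test_values() tabulates the corresponding FIAT element at a few
    # random points inside the cell and compares the tabulated values against
    # these lambdas, component by component.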
# Shape (basis) functions on reference element reference_interval_1 = [lambda x: 1 - x[0], lambda x: x[0]] reference_triangle_1 = [lambda x: 1 - x[0] - x[1], lambda x: x[0], lambda x: x[1]] reference_tetrahedron_1 = [lambda x: 1 - x[0] - x[1] - x[2], lambda x: x[0], lambda x: x[1], lambda x: x[2]] reference_triangle_bdm1 = [lambda x: (2*x[0], -x[1]), lambda x: (-x[0], 2*x[1]), lambda x: (2 - 2*x[0] - 3*x[1], x[1]), lambda x: (- 1 + x[0] + 3*x[1], - 2*x[1]), lambda x: (-x[0], -2 + 3*x[0] + 2*x[1]), lambda x: (2*x[0], 1 - 3*x[0] - x[1])] reference_triangle_rt1 = [lambda x: (x[0], x[1]), lambda x: (1 - x[0], -x[1]), lambda x: (x[0], x[1] - 1)] reference_triangle_rt2 = [lambda x: (-x[0] + 3*x[0]**2, -x[1] + 3*x[0]*x[1]), lambda x: (-x[0] + 3*x[0]*x[1], -x[1] + 3*x[1]**2), lambda x: ( 2 - 5*x[0] - 3*x[1] + 3*x[0]*x[1] + 3*x[0]**2, -2*x[1] + 3*x[0]*x[1] + 3*x[1]**2), lambda x: (-1.0 + x[0] + 3*x[1] - 3*x[0]*x[1], x[1] - 3*x[1]**2), lambda x: (2*x[0] - 3*x[0]*x[1] - 3*x[0]**2, -2 + 3*x[0]+ 5*x[1] - 3*x[0]*x[1] - 3*x[1]**2), lambda x: (- x[0] + 3*x[0]**2, + 1 - 3*x[0] - x[1] + 3*x[0]*x[1]), lambda x: (6*x[0] - 3*x[0]*x[1] - 6*x[0]**2, 3*x[1] - 6*x[0]*x[1] - 3*x[1]**2), lambda x: (3*x[0] - 6*x[0]*x[1] - 3*x[0]**2, 6*x[1]- 3*x[0]*x[1] - 6*x[1]**2)] reference_triangle_ned1 = [lambda x: (-x[1], x[0]), lambda x: ( x[1], 1 - x[0]), lambda x: ( 1 - x[1], x[0])] reference_tetrahedron_rt1 = [lambda x: (-x[0], -x[1], -x[2]), lambda x: (-1.0 + x[0], x[1], x[2]), lambda x: (-x[0], 1.0 - x[1], -x[2]), lambda x: ( x[0], x[1], -1.0 + x[2])] reference_tetrahedron_bdm1 = [lambda x: (-3*x[0], x[1], x[2]), lambda x: (x[0], -3*x[1], x[2]), lambda x: (x[0], x[1], -3*x[2]), lambda x: (-3.0 + 3*x[0] + 4*x[1] + 4*x[2], -x[1], -x[2]), lambda x: (1.0 - x[0] - 4*x[1], 3*x[1], -x[2]), lambda x: (1.0 - x[0] - 4*x[2], -x[1], 3*x[2]), lambda x: (x[0], 3.0 - 4*x[0] - 3*x[1] - 4*x[2], x[2]), lambda x: (-3*x[0], -1.0 + 4*x[0] + x[1], x[2]), lambda x: (x[0], -1.0 + x[1] + 4*x[2], -3*x[2]), lambda x: (-x[0], -x[1], -3.0 + 4*x[0] + 4*x[1] + 3*x[2]), lambda x: (3*x[0], -x[1], 1.0 - 4*x[0] - x[2]), lambda x: (-x[0], 3*x[1], 1.0 - 4*x[1] - x[2])] reference_tetrahedron_ned1 = [lambda x: (0.0, -x[2], x[1]), lambda x: (-x[2], 0.0, x[0]), lambda x: (-x[1], x[0], 0.0), lambda x: ( x[2], x[2], 1.0 - x[0] - x[1]), lambda x: (x[1], 1.0 - x[0] - x[2], x[1]), lambda x: (1.0 - x[1] - x[2], x[0], x[0])] reference_quadrilateral_1 = [lambda x: (1-x[0])*(1-x[1]), lambda x: (1-x[0])*x[1], lambda x: x[0]*(1-x[1]), lambda x: x[0]*x[1]] reference_hexahedron_1 = [lambda x: (1-x[0])*(1-x[1])*(1-x[2]), lambda x: (1-x[0])*(1-x[1])*x[2], lambda x: (1-x[0])*x[1]*(1-x[2]), lambda x: (1-x[0])*x[1]*x[2], lambda x: x[0]*(1-x[1])*(1-x[2]), lambda x: x[0]*(1-x[1])*x[2], lambda x: x[0]*x[1]*(1-x[2]), lambda x: x[0]*x[1]*x[2]] # Tests to perform tests = [("Lagrange", "interval", 1, reference_interval_1), ("Lagrange", "triangle", 1, reference_triangle_1), ("Lagrange", "tetrahedron", 1, reference_tetrahedron_1), ("Lagrange", "quadrilateral", 1, reference_quadrilateral_1), ("Lagrange", "hexahedron", 1, reference_hexahedron_1), ("Discontinuous Lagrange", "interval", 1, reference_interval_1), ("Discontinuous Lagrange", "triangle", 1, reference_triangle_1), ("Discontinuous Lagrange", "tetrahedron", 1, reference_tetrahedron_1), ("Brezzi-Douglas-Marini", "triangle", 1, reference_triangle_bdm1), ("Raviart-Thomas", "triangle", 1, reference_triangle_rt1), ("Raviart-Thomas", "triangle", 2, reference_triangle_rt2), ("Discontinuous Raviart-Thomas", "triangle", 1, 
reference_triangle_rt1), ("Discontinuous Raviart-Thomas", "triangle", 2, reference_triangle_rt2), ("N1curl", "triangle", 1, reference_triangle_ned1), ("Raviart-Thomas", "tetrahedron", 1, reference_tetrahedron_rt1), ("Discontinuous Raviart-Thomas", "tetrahedron", 1, reference_tetrahedron_rt1), ("Brezzi-Douglas-Marini", "tetrahedron", 1, reference_tetrahedron_bdm1), ("N1curl", "tetrahedron", 1, reference_tetrahedron_ned1), ] @pytest.mark.parametrize("family, cell, degree, reference", tests) def test_values(self, family, cell, degree, reference): # Create element element = create_element(FiniteElement(family, cell, degree)) # Get some points and check basis function values at points points = [random_point(element_coords(cell)) for i in range(5)] for x in points: table = element.tabulate(0, (x,)) basis = table[list(table.keys())[0]] for i in range(len(basis)): if not element.value_shape(): assert round(float(basis[i]) - reference[i](x), 10) == 0.0 else: for k in range(element.value_shape()[0]): assert round(basis[i][k][0] - reference[i](x)[k] , 10) == 0.0 class Test_JIT(): def test_poisson(self): "Test that JIT compiler is fast enough." # FIXME: Use local cache: cache_dir argument to instant.build_module #options = {"log_level": INFO + 5} #options = {"log_level": 5} options = {"log_level": WARNING} # Define two forms with the same signatures element = FiniteElement("Lagrange", "triangle", 1) v = TestFunction(element) u = TrialFunction(element) f = Coefficient(element) g = Coefficient(element) a0 = f*dot(grad(v), grad(u))*dx a1 = g*dot(grad(v), grad(u))*dx # Strange this needs to be done twice # Compile a0 so it will be in the cache (both in-memory and disk) jit(a0, options) jit(a0, options) # Compile a0 again (should be really fast, using in-memory cache) t = time() jit(a0, options) dt0 = time() - t # Compile a1 (should be fairly fast, using disk cache) t = time() jit(a1, options) dt1 = time() - t # Good values dt0_good = 0.005 dt1_good = 0.01 print("\nJIT in-memory cache:", dt0) print("JIT disk cache: ", dt1) print("Reasonable values are %g and %g" % (dt0_good, dt1_good)) # Check times assert dt0 < 10*dt0_good assert dt1 < 10*dt1_good ffc-2019.1.0.post0/test/unit/pytest.ini000066400000000000000000000003331346026536000175410ustar00rootroot00000000000000[pytest] # We use fixture features requiring pytest 2.4 minversion = 2.4 # Make pytest ignore temp folders norecursedirs = .* __pycache__ test_*_tempdir # Make pytest ignore utility .py files python_files = test_*.py ffc-2019.1.0.post0/test/unit/symbolics/000077500000000000000000000000001346026536000175155ustar00rootroot00000000000000ffc-2019.1.0.post0/test/unit/symbolics/test_dg_elastodyn.py000077500000000000000000000032631346026536000236110ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
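# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): the test below
# builds a fairly large Product/Sum/Fraction tree, so this small helper shows
# the same workflow on a minimal expression.  The helper name and the numeric
# values are made up for illustration; the imported API (Symbol, Product, Sum,
# GEO, set_float_formatting) is the same one exercised by the test.
def _example_expand_and_reduce():
    from ffc.quadrature.symbolics import Symbol, Product, Sum, GEO
    from ffc.quadrature.cpp import set_float_formatting
    from ffc.parameters import FFC_PARAMETERS
    set_float_formatting(FFC_PARAMETERS['precision'])
    s0 = Symbol("x", GEO)
    s1 = Symbol("y", GEO)
    # (x + y)*x: expand() distributes the product, reduce_ops() regroups it to
    # lower the operation count; all three forms must evaluate identically.
    expr = Product([Sum([s0, s1]), s0])
    expr_exp = expr.expand()
    expr_red = expr_exp.reduce_ops()
    x, y = 1.3, 2.7  # arbitrary values for the numerical consistency check
    assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0
    assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0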
import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testDGElastoDyn(): expr = Product([ Sum([ Symbol("F0", IP), Symbol("F1", IP) ]), Fraction( Symbol("w4", GEO), Symbol("w3", GEO) ), Fraction( Product([ Symbol("w2", GEO), Symbol("w5", GEO) ]), Symbol("w6", GEO) ) ]) expr_exp = expr.expand() expr_red = expr_exp.reduce_ops() F0, F1, w2, w3, w4, w5, w6 = (3.12, -8.1, -45.3, 17.5, 2.2, 5.3, 9.145) assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0 assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0 assert expr.ops() == 6 assert expr_exp.ops() == 11 assert expr_red.ops() == 6 ffc-2019.1.0.post0/test/unit/symbolics/test_elas_weighted.py000077500000000000000000000055101346026536000237360ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testElasWeighted(): expr = Product([ Symbol('W4', IP), Sum([ Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_00', GEO), Symbol('w1', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_01', GEO), Symbol('w0', GEO) ]), Product([ Symbol('w2', GEO), Sum([ Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_00', GEO), Symbol('w1', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_01', GEO), Symbol('w0', GEO) ]) ]) ]) ]) ]) expr_exp = expr.expand() expr_red = expr_exp.reduce_ops() det, W4, w0, w1, w2, Jinv_00, Jinv_01, Jinv_11, Jinv_10, FE0_C1_D01_ip_j, FE0_C1_D01_ip_k = [0.123 + i for i in range(11)] assert round(eval(str(expr)) - eval(str(expr_exp))) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red))) == 0.0 assert expr.ops() == 17 assert expr_exp.ops() == 21 assert expr_red.ops() == 10 # Generate code ip_consts = {} geo_consts = {} trans_set = set() start = time.time() opt_code = optimise_code(expr, ip_consts, geo_consts, trans_set) G = [eval(str(list(geo_consts.items())[0][0]))] I = [eval(str(list(ip_consts.items())[0][0]))] # noqa: E741 assert round(eval(str(expr)) - eval(str(opt_code))) == 0.0 ffc-2019.1.0.post0/test/unit/symbolics/test_elas_weighted2.py000077500000000000000000000066661346026536000240350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. 
# # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testElasWeighted2(): expr = Product([ Symbol('W4', IP), Sum([ Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_00', GEO), Symbol('w1', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('Jinv_01', GEO), Sum([ Product([ Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('w0', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('w1', GEO) ]) ]) ]), Product([ Symbol('w2', GEO), Sum([ Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('Jinv_00', GEO), Symbol('w1', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_j', BASIS), Symbol('Jinv_01', GEO), Sum([ Product([ Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('w0', GEO) ]), Product([ Symbol('FE0_C1_D01_ip_k', BASIS), Symbol('w1', GEO) ]) ]) ]) ]) ]) ]) ]) expr_exp = expr.expand() expr_red = expr_exp.reduce_ops() det, W4, w0, w1, w2, Jinv_00, Jinv_01, Jinv_11, Jinv_10, FE0_C1_D01_ip_j, FE0_C1_D01_ip_k = [0.123 + i for i in range(11)] assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0.0 assert expr.ops() == 21 assert expr_exp.ops() == 32 assert expr_red.ops() == 12 # Generate code ip_consts = {} geo_consts = {} trans_set = set() start = time.time() opt_code = optimise_code(expr, ip_consts, geo_consts, trans_set) G = [eval(str(list(geo_consts.items())[0][0]))] I = [eval(str(list(ip_consts.items())[0][0]))] # noqa: E741 assert round(eval(str(expr)) - eval(str(opt_code)), 10) == 0.0 ffc-2019.1.0.post0/test/unit/symbolics/test_elasticity_2d.py000077500000000000000000000146451346026536000237020ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
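# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): the test below
# compares the string based pipeline (expand_operations/reduce_operations)
# against the object based symbolics on a full 2D elasticity term.  This
# helper shows the string pipeline alone on one small factor borrowed from the
# Poisson test in this directory; the helper name and the numerical values are
# made up for illustration.
def _example_string_pipeline():
    from ffc.quadrature.reduce_operations import (operation_count,
                                                  expand_operations,
                                                  reduce_operations)
    from ffc.quadrature.cpp import format, set_float_formatting
    from ffc.parameters import FFC_PARAMETERS
    set_float_formatting(FFC_PARAMETERS['precision'])
    term = "(Jinv_00*FE0_D10_ip_j + Jinv_10*FE0_D01_ip_j)*W4_ip*det"
    expanded = expand_operations(term, format)
    reduced = reduce_operations(term, format)
    # All three strings must evaluate to the same number once the names are
    # bound; the reduced form should need no more operations than the expanded
    # one.
    Jinv_00, Jinv_10, FE0_D10_ip_j, FE0_D01_ip_j, W4_ip, det = (
        1.1, -4.3, 5.7, 1.12, 11.0, 52.3)
    assert round(eval(term) - eval(expanded), 10) == 0
    assert round(eval(term) - eval(reduced), 10) == 0
    return operation_count(expanded, format), operation_count(reduced, format)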
import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testElasticity2D(): elasticity = """(((Jinv_00*FE0_C0_D10_ip_j + Jinv_10*FE0_C0_D01_ip_j)*2*(Jinv_00*FE0_C0_D10_ip_k + Jinv_10*FE0_C0_D01_ip_k)*2 + ((Jinv_00*FE0_C1_D10_ip_j + Jinv_10*FE0_C1_D01_ip_j) + (Jinv_01*FE0_C0_D10_ip_j + Jinv_11*FE0_C0_D01_ip_j))*((Jinv_00*FE0_C1_D10_ip_k + Jinv_10*FE0_C1_D01_ip_k) + (Jinv_01*FE0_C0_D10_ip_k + Jinv_11*FE0_C0_D01_ip_k))) + ((Jinv_01*FE0_C1_D10_ip_j + Jinv_11*FE0_C1_D01_ip_j)*2*(Jinv_01*FE0_C1_D10_ip_k + Jinv_11*FE0_C1_D01_ip_k)*2 + ((Jinv_01*FE0_C0_D10_ip_j + Jinv_11*FE0_C0_D01_ip_j) + (Jinv_00*FE0_C1_D10_ip_j + Jinv_10*FE0_C1_D01_ip_j))*((Jinv_01*FE0_C0_D10_ip_k + Jinv_11*FE0_C0_D01_ip_k) + (Jinv_00*FE0_C1_D10_ip_k + Jinv_10*FE0_C1_D01_ip_k))))*0.25*W4_ip*det""" expr = Product([ Sum([ Sum([ Product([ Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C0_D10_ip_j", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C0_D01_ip_j", BASIS)]) ]), FloatValue(2), Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C0_D10_ip_k", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C0_D01_ip_k", BASIS)]) ]), FloatValue(2) ]), Product([ Sum([ Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C1_D10_ip_j", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C1_D01_ip_j", BASIS)]) ]), Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C0_D10_ip_j", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C0_D01_ip_j", BASIS)]) ]) ]), Sum([ Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C1_D10_ip_k", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C1_D01_ip_k", BASIS)]) ]), Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C0_D10_ip_k", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C0_D01_ip_k", BASIS)]) ]) ]) ]) ]), Sum([ Product([ Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C1_D10_ip_j", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C1_D01_ip_j", BASIS)]) ]), FloatValue(2), Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C1_D10_ip_k", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C1_D01_ip_k", BASIS)]) ]), FloatValue(2) ]), Product([ Sum([ Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C0_D10_ip_j", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C0_D01_ip_j", BASIS)]) ]), Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C1_D10_ip_j", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C1_D01_ip_j", BASIS)]) ]) ]), Sum([ Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_C0_D10_ip_k", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_C0_D01_ip_k", BASIS)]) ]), Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_C1_D10_ip_k", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_C1_D01_ip_k", BASIS)]) ]) ]) ]) ]) ]), FloatValue(0.25), Symbol("W4_ip", IP), Symbol("det", GEO) ]) expr_exp = expr.expand() elasticity_exp = expand_operations(elasticity, format) expr_red = expr_exp.reduce_ops() elasticity_red = reduce_operations(elasticity, format) elasticity_exp_ops = operation_count(elasticity_exp, format) elasticity_red_ops = operation_count(elasticity_red, format) Jinv_00, Jinv_01, Jinv_10, Jinv_11, W4_ip, det = (1.1, 1.5, -4.3, 1.7, 11, 52.3) FE0_C0_D01_ip_j, FE0_C0_D10_ip_j, FE0_C0_D01_ip_k, FE0_C0_D10_ip_k = (1.12, 5.7, -9.3, 7.4) FE0_C1_D01_ip_j, 
FE0_C1_D10_ip_j, FE0_C1_D01_ip_k, FE0_C1_D10_ip_k = (3.12, -8.1, -45.3, 17.5) assert round(eval(str(expr)) - eval(str(expr_exp)), 8) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red)), 8) == 0.0 assert round(eval(str(expr)) - eval(str(elasticity)), 8) == 0.0 assert round(eval(str(expr)) - eval(str(elasticity_exp)), 8) == 0.0 assert round(eval(str(expr)) - eval(str(elasticity_red)), 8) == 0.0 assert expr.ops() == 52 assert elasticity_exp_ops == 159 assert expr_exp.ops() == 159 assert elasticity_red_ops == 71 assert expr_red.ops() == 71 ffc-2019.1.0.post0/test/unit/symbolics/test_elasticity_term.py000077500000000000000000000042701346026536000243350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def test_Elasticity_Term(): # expr: 0.25*W1*det*(FE0_C2_D001[0][j]*FE0_C2_D001[0][k]*Jinv_00*Jinv_21 + FE0_C2_D001[0][j]*FE0_C2_D001[0][k]*Jinv_00*Jinv_21) expr = Product([ FloatValue(0.25), Symbol('W1', GEO), Symbol('det', GEO), Sum([Product([Symbol('FE0_C2_D001_0_j', BASIS), Symbol('FE0_C2_D001_0_k', BASIS), Symbol('Jinv_00', GEO), Symbol('Jinv_21', GEO)]), Product([Symbol('FE0_C2_D001_0_j', BASIS), Symbol('FE0_C2_D001_0_k', BASIS), Symbol('Jinv_00', GEO), Symbol('Jinv_21', GEO)]) ]) ]) expr_exp = expr.expand() expr_red = expr_exp.reduce_ops() det, W1, Jinv_00, Jinv_21, FE0_C2_D001_0_j, FE0_C2_D001_0_k = [0.123 + i for i in range(6)] assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0.0 assert expr.ops() == 10 assert expr_exp.ops() == 6 assert expr_red.ops() == 6 # Generate code ip_consts = {} geo_consts = {} trans_set = set() start = time.time() opt_code = optimise_code(expr, ip_consts, geo_consts, trans_set) G = [eval(str(list(geo_consts.items())[0][0]))] assert round(eval(str(expr)) - eval(str(opt_code))) == 0.0 ffc-2019.1.0.post0/test/unit/symbolics/test_expand_operations.py000077500000000000000000000151621346026536000246600ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. 
If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testExpandOperations(): f0 = FloatValue(-1) f1 = FloatValue(2) f2 = FloatValue(1) sx = Symbol("x", GEO) sy = Symbol("y", GEO) sz = Symbol("z", GEO) s0 = Product([FloatValue(-1), Symbol("x", GEO)]) s1 = Symbol("y", GEO) s2 = Product([FloatValue(5), Symbol("z", IP)]) s3 = Product([FloatValue(-4), Symbol("z", GEO)]) # Random variable values x = 2.2 y = -0.2 z = 1.1 # Aux. expressions P0 = Product([s2, s1]) P1 = Product([P0, s0]) P2 = Product([P1, s1, P0]) P3 = Product([P1, P2]) S0 = Sum([s2, s1]) S1 = Sum([S0, s0]) S2 = Sum([S1, s1, S0]) S3 = Sum([S1, S2]) F0 = Fraction(s2, s1) F1 = Fraction(F0, s0) F2 = Fraction(F1, F0) F3 = Fraction(F1, F2) # Special fractions F4 = Fraction(P0, F0) F5 = Fraction(Fraction(s0, P0), P0) F6 = Fraction(Fraction(Fraction(s1, s0), Fraction(s1, s2)), Fraction(Fraction(s2, s0), Fraction(s1, s0))) F7 = Fraction(s1, Product([s1, Symbol("x", GEO)])) F8 = Fraction(Sum([sx, Fraction(sy, sx)]), FloatValue(2)) F4x = F4.expand() F5x = F5.expand() F6x = F6.expand() F7x = F7.expand() F8x = F8.expand() assert round(eval(str(F4)) - eval(str(F4x)), 10) == 0.0 assert round(eval(str(F5)) - eval(str(F5x)), 10) == 0.0 assert round(eval(str(F6)) - eval(str(F6x)), 10) == 0.0 assert round(eval(str(F7)) - eval(str(F7x)), 10) == 0.0 assert round(eval(str(F8)) - eval(str(F8x)), 10) == 0.0 assert F4.ops() == 5 assert F4x.ops() == 1 assert F5.ops() == 6 assert F5x.ops() == 5 assert F6.ops() == 9 assert F6x.ops() == 1 assert F7.ops() == 2 assert F7x.ops() == 1 assert F8.ops() == 3 assert F8x.ops() == 4 # Expressions that should be expanded e0 = Product([P3, F2]) e1 = Product([S3, P2]) e2 = Product([F3, S1]) e3 = Sum([P3, F2]) e4 = Sum([S3, P2]) e5 = Sum([F3, S1]) e6 = Fraction(P3, F2) e7 = Fraction(S3, P2) e8 = Fraction(F3, S1) e9 = Fraction(S0, s0) e0x = e0.expand() e1x = e1.expand() e2x = e2.expand() e3x = e3.expand() e4x = e4.expand() e5x = e5.expand() e6x = e6.expand() e7x = e7.expand() e8x = e8.expand() e9x = e9.expand() assert round(eval(str(e0)) - eval(str(e0x)), 10) == 0.0 assert round(eval(str(e1)) - eval(str(e1x)), 10) == 0.0 assert round(eval(str(e2)) - eval(str(e2x)), 10) == 0.0 assert round(eval(str(e3)) - eval(str(e3x)), 10) == 0.0 assert round(eval(str(e4)) - eval(str(e4x)), 10) == 0.0 assert round(eval(str(e5)) - eval(str(e5x)), 10) == 0.0 assert round(eval(str(e6)) - eval(str(e6x)), 10) == 0.0 assert round(eval(str(e7)) - eval(str(e7x)), 10) == 0.0 assert round(eval(str(e8)) - eval(str(e8x)), 10) == 0.0 assert round(eval(str(e9)) - eval(str(e9x)), 10) == 0.0 assert e0.ops() == 16 assert e0x.ops() == 8 assert e1.ops() == 18 assert e1x.ops() == 23 assert e2.ops() == 14 assert e2x.ops() == 9 assert e3.ops() == 16 assert e3x.ops() == 11 assert e4.ops() == 18 assert e4x.ops() == 12 assert e5.ops() == 14 assert e5x.ops() == 6 assert e6.ops() == 16 assert e6x.ops() == 10 assert e7.ops() == 18 assert e7x.ops() == 17 assert e8.ops() == 14 assert e8x.ops() == 8 assert e9.ops() == 3 assert e9x.ops() == 4 # More expressions (from old expand tests) PF = Product([F0, F1]) E0 = Product([s1, f0, S1]) E1 = Sum([P0, E0]) E2 = Fraction(Sum([Product([f1])]), f2) E3 = Sum([F0, F0]) E4 = Product([Sum([Product([sx, Sum([sy, Product([Sum([sy, Product([sy, sz]), sy])]), sy])]), Product([sx, Sum([Product([sy, sz]), sy])])])]) P4 = Product([s1, Sum([s0, 
s1])]) P5 = Product([s0, E0]) P6 = Product([s1]) S4 = Sum([s1]) # Create 'real' term that caused me trouble P00 = Product([Symbol("Jinv_00", GEO)] * 2) P01 = Product([Symbol("Jinv_01", GEO)] * 2) P20 = Product([Symbol("Jinv_00", GEO), Product([f1, Symbol("Jinv_20", GEO)])]) P21 = Product([Symbol("Jinv_01", GEO), Product([f1, Symbol("Jinv_21", GEO)])]) PS0 = Product([Symbol("Jinv_22", GEO), Sum([P00, P01])]) PS1 = Product([Product([f0, Symbol("Jinv_02", GEO)]), Sum([P20, P21])]) SP = Sum([PS0, PS1]) PFx = PF.expand() E0x = E0.expand() E1x = E1.expand() E2x = E2.expand() E3x = E3.expand() E4x = E4.expand() P4x = P4.expand() P5x = P5.expand() P6x = P6.expand() S4x = S4.expand() SPx = SP.expand() Jinv_00, Jinv_01, Jinv_10, Jinv_02, Jinv_20, Jinv_22, Jinv_21, W1, det = [1, 2, 3, 4, 5, 6, 7, 8, 9] assert round(eval(str(SP)) - eval(str(SPx)), 10) == 0.0 assert round(eval(str(E0)) - eval(str(E0x)), 10) == 0.0 assert round(eval(str(E1)) - eval(str(E1x)), 10) == 0.0 assert round(eval(str(E2)) - eval(str(E2x)), 10) == 0.0 assert round(eval(str(E3)) - eval(str(E3x)), 10) == 0.0 assert round(eval(str(E4)) - eval(str(E4x)), 10) == 0.0 assert round(eval(str(SP)) - eval(str(SPx)), 10) == 0.0 assert round(eval(str(P4)) - eval(str(P4x)), 10) == 0.0 assert round(eval(str(P5)) - eval(str(P5x)), 10) == 0.0 assert P6x == s1 assert S4x == s1 assert PF.ops() == 6 assert PFx.ops() == 5 assert E0.ops() == 4 assert E0x.ops() == 6 assert E1.ops() == 7 assert E1x.ops() == 3 assert E2.ops() == 1 assert E2x.ops() == 0 assert E3.ops() == 5 assert E3x.ops() == 5 assert E4.ops() == 10 assert E4x.ops() == 6 assert SP.ops() == 11 assert SPx.ops() == 13 assert P4.ops() == 2 assert P4x.ops() == 3 assert P5.ops() == 5 assert P5x.ops() == 9 ffc-2019.1.0.post0/test/unit/symbolics/test_float.py000077500000000000000000000041351346026536000222410ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS["precision"]) def testFloat(): "Test simple FloatValue instance." 
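    # The checks below cover construction, precision dependent repr/str
    # formatting, operation counts, (in)equality and hashing of FloatValue
    # instances; the commented-out assertions document known rounding quirks
    # of the quadrature representation rather than intended behaviour.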
f0 = FloatValue(1.5) f1 = FloatValue(-5) f2 = FloatValue(-1e-14) f3 = FloatValue(-1e-11) f4 = FloatValue(1.5) assert repr(f0) == "FloatValue(%s)" % format["float"](1.5) assert repr(f1) == "FloatValue(%s)" % format["float"](-5) # This depends on the chosen precision and certain rounding behaviour in code generation: #assert repr(f2) == "FloatValue(%s)" % format["float"](0.0) assert repr(f3) == "FloatValue(%s)" % format["float"](-1e-11) #assert f2.val == 0 # This test documents incorrect float behaviour of quadrature repsentation assert not f3.val == 0 assert f0.ops() == 0 assert f1.ops() == 0 assert f2.ops() == 0 assert f3.ops() == 0 assert f0 == f4 assert f1 != f3 assert not f0 < f1 #assert f2 > f3 # ^^^ NB! The > operator of FloatValue is not a comparison of numbers, # it does something else entirely, and is affected by precision in indirect ways. # Test hash ll = [f0] d = {f0: 0} assert f0 in ll assert f0 in d assert f4 in ll assert f4 in d assert f1 not in ll assert f1 not in d ffc-2019.1.0.post0/test/unit/symbolics/test_float_operators.py000077500000000000000000000055371346026536000243460ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
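# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): a minimal example
# of the overloaded binary operators checked below.  The helper name and the
# chosen values are made up for illustration; the API is the one already used
# by this test.
def _example_floatvalue_operators():
    from ffc.quadrature.symbolics import FloatValue, Symbol, Product, Sum, GEO
    from ffc.quadrature.cpp import set_float_formatting
    from ffc.parameters import FFC_PARAMETERS
    set_float_formatting(FFC_PARAMETERS['precision'])
    f2 = FloatValue(2.0)
    fm3 = FloatValue(-3.0)
    x = Symbol("x", GEO)
    y = Symbol("y", GEO)
    # Adding a FloatValue to a Product gives a Sum, printed roughly as "(2*x - 3)".
    s = fm3 + Product([f2, x])
    # Dividing a FloatValue by a Sum gives a Fraction, roughly "-3/(x + y)".
    q = fm3 / Sum([x, y])
    return str(s), str(q)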
import pytest import time # FFC modules from ffc.quadrature.reduce_operations import expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import error, push_level, pop_level, CRITICAL def test_Float_Operators(): "Test binary operators" f0 = FloatValue(0.0) f2 = FloatValue(2.0) f3 = FloatValue(3.0) fm1 = FloatValue(-1.0) fm3 = FloatValue(-3.0) x = Symbol("x", GEO) y = Symbol("y", GEO) z = Symbol("z", GEO) p0 = Product([f2, x]) p1 = Product([x, y]) p2 = Product([f2, z]) p3 = Product([y, x, z]) p4 = Product([fm1, f2, x]) S0 = Sum([p0, fm3]) S1 = Sum([x, y]) S2 = Sum([S1, fm3]) S3 = Sum([p4, fm3]) S4 = Sum([fm3, Product([fm1, Sum([x, y])])]) F0 = Fraction(f2, y) F1 = Fraction(FloatValue(-1.5), x) F2 = Fraction(fm3, S1) SF0 = Sum([f3, F1]) SF1 = Sum([f3, Product([fm1, F1])]) # Test FloatValue '+' assert str(f2 + fm3) == str(fm1) assert str(f2 + fm3 + fm3 + f2 + f2) == str(f0) assert str(f0 + p0) == str(p0) assert str(fm3 + p0) == str(S0) assert str(fm3 + S1) == str(S2) assert str(f3 + F1) == str(SF0) # Test FloatValue '-' assert str(f2 - fm3) == str(FloatValue(5)) assert str(f0 - p0) == str(p4) assert str(fm3 - p0) == str(S3) assert str(fm3 - S1) == str(S4) assert str(f3 - F1) == str(SF1) # Test FloatValue '*', only need one because all other cases are # handled by 'other' assert str(f2 * f2) == '%s' % format["float"](4) # Test FloatValue '/' assert str(fm3 / f2) == str(FloatValue(-1.5)) assert str(f2 / y) == str(F0) assert str(fm3 / p0) == str(F1) assert str(fm3 / S1) == str(F2) # Silence output push_level(CRITICAL) with pytest.raises(Exception): truediv(f2, F0) with pytest.raises(Exception): truediv(f2, f0) with pytest.raises(Exception): truediv(f2, Product([f0, y])) pop_level() ffc-2019.1.0.post0/test/unit/symbolics/test_fraction.py000077500000000000000000000045301346026536000227400ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import push_level, pop_level, CRITICAL def testFraction(): "Test simple fraction instance." 
f0 = FloatValue(-2.0) f1 = FloatValue(3.0) f2 = FloatValue(0) s0 = Symbol("x", BASIS) s1 = Symbol("y", GEO) F0 = Fraction(f1, f0) F1 = Fraction(f2, f0) F2 = Fraction(s0, s1) F3 = Fraction(s0, f1) F4 = Fraction(f0, s1) F5 = Fraction(f2, s1) F6 = Fraction(s0, s1) # Silence output push_level(CRITICAL) with pytest.raises(Exception): Fraction(f0, f2) with pytest.raises(Exception): Fraction(s0, f2) pop_level() assert repr(F0) == "Fraction(FloatValue(%s), FloatValue(%s))"\ % (format["float"](-1.5), format["float"](1)) assert repr(F2) == "Fraction(Symbol('x', BASIS), Symbol('y', GEO))" assert str(F0) == "%s" % format["float"](-1.5) assert str(F1) == "%s" % format["float"](0) assert str(F2) == "x/y" assert str(F3) == "x/%s" % format["float"](3) assert str(F4) == "-%s/y" % format["float"](2) assert str(F5) == "%s" % format["float"](0) assert F2 == F2 assert F2 != F3 assert F5 != F4 assert F2 == F6 assert F0.ops() == 0 assert F1.ops() == 0 assert F2.ops() == 1 assert F3.ops() == 1 assert F4.ops() == 1 assert F5.ops() == 0 # Test hash ll = [F2] d = {F2: 0} assert F2 in ll assert F2 in d assert F6 in ll assert F6 in d ffc-2019.1.0.post0/test/unit/symbolics/test_fraction_operators.py000077500000000000000000000060231346026536000250350ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
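# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): a minimal example
# of the Fraction operators checked below.  The helper name is made up for
# illustration; the API is the one already used by this test.
def _example_fraction_operators():
    from ffc.quadrature.symbolics import Symbol, Fraction, GEO
    from ffc.quadrature.cpp import set_float_formatting
    from ffc.parameters import FFC_PARAMETERS
    set_float_formatting(FFC_PARAMETERS['precision'])
    x = Symbol("x", GEO)
    y = Symbol("y", GEO)
    F = Fraction(x, y)
    # Equal fractions are collected on addition: x/y + x/y prints roughly as "2*x/y".
    s = F + Fraction(x, y)
    # Multiplying by a symbol keeps the fraction ("x*x/y"), while dividing the
    # fraction by its numerator reduces it to roughly "1/y".
    p = F * x
    q = F / x
    return str(s), str(p), str(q)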
import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import error, push_level, pop_level, CRITICAL def testFractionOperators(): "Test binary operators" f_0 = format["float"](0) f_1 = format["float"](1) f_2 = format["float"](2) f_5 = format["float"](5) f2 = FloatValue(2.0) fm3 = FloatValue(-3.0) x = Symbol("x", GEO) y = Symbol("y", GEO) p0 = Product([f2, x]) p1 = Product([x, y]) S0 = Sum([x, y]) F0 = Fraction(f2, y) F1 = Fraction(x, y) F2 = Fraction(x, S0) F3 = Fraction(x, y) F4 = Fraction(p0, y) F5 = Fraction(Product([fm3, x]), y) # Test Fraction '+' assert str(F0 + f2) == '(%s + %s/y)' % (f_2, f_2) assert str(F1 + x) == '(x + x/y)' assert str(F1 + p0) == '(%s*x + x/y)' % f_2 assert str(F1 + S0) == '(x + y + x/y)' assert str(F1 + F3) == '%s*x/y' % f_2 assert str(F0 + F1) == '(%s + x)/y' % f_2 assert str(F2 + F4) == '(%s*x/y + x/(x + y))' % f_2 # Test Fraction '-' assert str(F0 - f2) == '(%s/y-%s)' % (f_2, f_2) assert str(F1 - x) == '(x/y - x)' assert str(F1 - p0) == '(x/y-%s*x)' % f_2 assert str(F1 - S0) == '(x/y - (x + y))' assert str(F1 - F3) == '%s' % f_0 assert str(F4 - F1) == 'x/y' assert str(F4 - F5) == '%s*x/y' % f_5 assert str(F0 - F1) == '(%s - x)/y' % f_2 assert str(F2 - F4) == '(x/(x + y) - %s*x/y)' % f_2 # Test Fraction '*' assert str(F1 * f2) == '%s*x/y' % f_2 assert str(F1 * x) == 'x*x/y' assert str(F1 * p1) == 'x*x' assert str(F1 * S0) == '(x + x*x/y)' assert repr(F1 * S0) == repr(Sum([x, Fraction(Product([x, x]), y)])) assert str(F1 * F0) == '%s*x/(y*y)' % f_2 # Test Fraction '/' assert str(F0 / f2) == '%s/y' % f_1 assert str(F1 / x) == '%s/y' % f_1 assert str(F4 / p1) == '%s/(y*y)' % f_2 assert str(F4 / x) == '%s/y' % f_2 assert str(F2 / y) == 'x/(x*y + y*y)' assert str(F0 / S0) == '%s/(x*y + y*y)' % f_2 with pytest.raises(Exception): truediv(F0 / F0) ffc-2019.1.0.post0/test/unit/symbolics/test_mixed_symbols.py000077500000000000000000000203011346026536000240030ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
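# ----------------------------------------------------------------------------
# Illustrative sketch (not part of the original test suite): the test below
# checks many combinations of Product, Sum and Fraction at once, so this
# helper isolates one of them.  The helper name is made up for illustration;
# the expected string and operation count are the same ones asserted for the
# equivalent expression (mps0) in the test.
def _example_mixed_product_of_sum():
    from ffc.quadrature.symbolics import Symbol, Product, Sum, BASIS, GEO
    from ffc.quadrature.cpp import set_float_formatting
    from ffc.parameters import FFC_PARAMETERS
    set_float_formatting(FFC_PARAMETERS['precision'])
    x = Symbol("x", BASIS)
    y = Symbol("y", GEO)
    nested = Product([Sum([x, y]), x])
    # The printed form keeps the grouping, and ops() counts the floating point
    # operations needed to evaluate it: one addition and one multiplication.
    assert str(nested) == 'x*(x + y)'
    assert nested.ops() == 2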
import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testMixedSymbols(): f_0 = format["float"](0) f_2 = format["float"](2) f_3 = format["float"](3) f_4 = format["float"](4) f_6 = format["float"](6) f0 = FloatValue(-2.0) f1 = FloatValue(3.0) f2 = FloatValue(0) s0 = Symbol("x", BASIS) s1 = Symbol("y", GEO) s2 = Symbol("z", GEO) p0 = Product([s0, s1]) p1 = Product([f1, s0, s1]) p2 = Product([s0, f2, s2]) p3 = Product([s0, f0, s1, f1, s2]) S0 = Sum([s0, s1]) S1 = Sum([s0, s0]) S2 = Sum([f0, s0]) S3 = Sum([s0, f0, s0]) F0 = Fraction(f1, f0) F1 = Fraction(s0, s1) F2 = Fraction(s0, f1) F3 = Fraction(f0, s1) x = 1.2 y = 2.36 z = 6.75 # Mixed products mpp0 = Product([p0, s0]) mpp1 = Product([p1, p0]) mpp2 = Product([p2, p3]) mpp3 = Product([p1, mpp1]) mps0 = Product([S0, s0]) mps1 = Product([S1, S0]) mps2 = Product([S2, S3]) mps3 = Product([S1, mps1]) mpf0 = Product([F1, s0]) mpf1 = Product([F1, F2]) mpf2 = Product([F2, F3]) mpf3 = Product([F1, mpf1]) assert round(eval(str(mpp0)) - eval(str(p0)) * eval(str(s0)), 10) == 0.0 assert round(eval(str(mpp1)) - eval(str(p1)) * eval(str(p0)), 10) == 0.0 assert round(eval(str(mpp2)) - eval(str(p2)) * eval(str(p3)), 10) == 0.0 assert round(eval(str(mpp3)) - eval(str(p1)) * eval(str(mpp1)), 10) == 0.0 assert round(eval(str(mps0)) - eval(str(S0)) * eval(str(s0)), 10) == 0.0 assert round(eval(str(mps1)) - eval(str(S1)) * eval(str(S0)), 10) == 0.0 assert round(eval(str(mps2)) - eval(str(S2)) * eval(str(S3)), 10) == 0.0 assert round(eval(str(mps3)) - eval(str(S1)) * eval(str(mps1)), 10) == 0.0 assert round(eval(str(mpf0)) - eval(str(F1)) * eval(str(s0)), 10) == 0.0 assert round(eval(str(mpf1)) - eval(str(F1)) * eval(str(F2)), 10) == 0.0 assert round(eval(str(mpf2)) - eval(str(F2)) * eval(str(F3)), 10) == 0.0 assert round(eval(str(mpf3)) - eval(str(F1)) * eval(str(mpf1)), 10) == 0.0 assert mpp0.ops() == 2 assert mpp1.ops() == 4 assert mpp2.ops() == 0 assert mpp3.ops() == 6 assert mps0.ops() == 2 assert mps1.ops() == 3 assert mps2.ops() == 4 assert mps3.ops() == 5 assert mpf0.ops() == 2 assert mpf1.ops() == 3 assert mpf2.ops() == 3 assert mpf3.ops() == 5 assert str(mpp0) == 'x*x*y' assert str(mpp1) == '%s*x*x*y*y' % f_3 assert str(mpp2) == '%s' % f_0 assert str(mpp3) == '%s*x*x*x*y*y*y' % format["float"](9) assert str(mps0) == 'x*(x + y)' assert str(mps1) == '(x + x)*(x + y)' assert str(mps2) == '(x + x-%s)*(x-%s)' % (f_2, f_2) assert str(mps3) == '(x + x)*(x + x)*(x + y)' assert str(mpf0) == 'x*x/y' assert str(mpf1) == 'x/%s*x/y' % f_3 assert str(mpf2) == '-%s/y*x/%s' % (f_2, f_3) assert str(mpf3) == 'x/%s*x/y*x/y' % f_3 # Mixed sums msp0 = Sum([p0, s0]) msp1 = Sum([p1, p0]) msp2 = Sum([p2, p3]) msp3 = Sum([p1, msp1]) msp4 = Sum([f2, f2]) mss0 = Sum([S0, s0]) mss1 = Sum([S1, S0]) mss2 = Sum([S2, S3]) mss3 = Sum([S1, mps1]) msf0 = Sum([F1, s0]) msf1 = Sum([F1, F2]) msf2 = Sum([F2, F3]) msf3 = Sum([F1, msf1]) assert round(eval(str(msp0)) - (eval(str(p0)) + eval(str(s0))), 10) == 0.0 assert round(eval(str(msp1)) - (eval(str(p1)) + eval(str(p0))), 10) == 0.0 assert round(eval(str(msp2)) - (eval(str(p2)) + eval(str(p3))), 10) == 0.0 assert round(eval(str(msp3)) - (eval(str(p1)) + eval(str(msp1))), 10) == 0.0 assert str(msp4) == '%s' % f_0 assert round(eval(str(mss0)) - (eval(str(S0)) + eval(str(s0))), 10) == 0.0 assert round(eval(str(mss1)) - (eval(str(S1)) + eval(str(S0))), 10) == 0.0 
assert round(eval(str(mss2)) - (eval(str(S2)) + eval(str(S3))), 10) == 0.0 assert round(eval(str(mss3)) - (eval(str(S1)) + eval(str(mps1))), 10) == 0.0 assert round(eval(str(msf0)) - (eval(str(F1)) + eval(str(s0))), 10) == 0.0 assert round(eval(str(msf1)) - (eval(str(F1)) + eval(str(F2))), 10) == 0.0 assert round(eval(str(msf2)) - (eval(str(F2)) + eval(str(F3))), 10) == 0.0 assert round(eval(str(msf3)) - (eval(str(F1)) + eval(str(msf1))), 10) == 0.0 assert msp0.ops() == 2 assert msp1.ops() == 4 assert msp2.ops() == 3 assert msp3.ops() == 7 assert mss0.ops() == 2 assert mss1.ops() == 3 assert mss2.ops() == 3 assert mss3.ops() == 5 assert msf0.ops() == 2 assert msf1.ops() == 3 assert msf2.ops() == 3 assert msf3.ops() == 5 assert str(msp0) == '(x + x*y)' assert str(msp1) == '(%s*x*y + x*y)' % f_3 assert str(msp2) == '-%s*x*y*z' % f_6 assert str(msp3) == '(%s*x*y + %s*x*y + x*y)' % (f_3, f_3) assert str(mss0) == '(x + x + y)' assert str(mss1) == '(x + x + x + y)' assert str(mss2) == '(x + x + x-%s)' % f_4 assert str(mss3) == '(x + x + (x + x)*(x + y))' assert str(msf0) == '(x + x/y)' assert str(msf1) == '(x/%s + x/y)' % f_3 assert str(msf2) == '(x/%s-%s/y)' % (f_3, f_2) assert str(msf3) == '(x/%s + x/y + x/y)' % f_3 # Mixed fractions mfp0 = Fraction(p0, s0) mfp1 = Fraction(p1, p0) mfp2 = Fraction(p2, p3) mfp3 = Fraction(p1, mfp1) mfs0 = Fraction(S0, s0) mfs1 = Fraction(S1, S0) mfs2 = Fraction(S2, S3) mfs3 = Fraction(S1, mfs1) mff0 = Fraction(F1, s0) mff1 = Fraction(F1, F2) mff2 = Fraction(F2, F3) mff3 = Fraction(F1, mff1) assert round(eval(str(mfp0)) - eval(str(p0)) / eval(str(s0)), 10) == 0.0 assert round(eval(str(mfp1)) - eval(str(p1)) / eval(str(p0)), 10) == 0.0 assert round(eval(str(mfp2)) - eval(str(p2)) / eval(str(p3)), 10) == 0.0 assert round(eval(str(mfp3)) - eval(str(p1)) / eval(str(mfp1)), 10) == 0.0 assert round(eval(str(mfs0)) - eval(str(S0)) / eval(str(s0)), 10) == 0.0 assert round(eval(str(mfs1)) - eval(str(S1)) / eval(str(S0)), 10) == 0.0 assert round(eval(str(mfs2)) - eval(str(S2)) / eval(str(S3)), 10) == 0.0 assert round(eval(str(mfs3)) - eval(str(S1)) / eval(str(mfs1)), 10) == 0.0 assert round(eval(str(mff0)) - eval(str(F1)) / eval(str(s0)), 10) == 0.0 assert round(eval(str(mff1)) - eval(str(F1)) / eval(str(F2)), 10) == 0.0 assert round(eval(str(mff2)) - eval(str(F2)) / eval(str(F3)), 10) == 0.0 assert round(eval(str(mff3)) - eval(str(F1)) / eval(str(mff1)), 10) == 0.0 assert mfp0.ops() == 2 assert mfp1.ops() == 4 assert mfp2.ops() == 0 assert mfp3.ops() == 7 assert mfs0.ops() == 2 assert mfs1.ops() == 3 assert mfs2.ops() == 4 assert mfs3.ops() == 5 assert mff0.ops() == 2 assert mff1.ops() == 3 assert mff2.ops() == 3 assert mff3.ops() == 5 assert str(mfp0) == 'x*y/x' assert str(mfp1) == '%s*x*y/(x*y)' % f_3 assert str(mfp2) == '%s' % f_0 assert str(mfp3) == '%s*x*y/(%s*x*y/(x*y))' % (f_3, f_3) assert str(mfs0) == '(x + y)/x' assert str(mfs1) == '(x + x)/(x + y)' assert str(mfs2) == '(x-%s)/(x + x-%s)' % (f_2, f_2) assert str(mfs3) == '(x + x)/((x + x)/(x + y))' assert str(mff0) == '(x/y)/x' assert str(mff1) == '(x/y)/(x/%s)' % f_3 assert str(mff2) == '(x/%s)/(-%s/y)' % (f_3, f_2) assert str(mff3) == '(x/y)/((x/y)/(x/%s))' % f_3 # Use p1 as a base expression for Symbol s3 = Symbol(format["cos"](str(p1)), CONST, p1, 1) assert str(s3) == 'std::cos(%s*x*y)' % f_3 assert s3.ops() == 3 ffc-2019.1.0.post0/test/unit/symbolics/test_not_finished.py000077500000000000000000000057771346026536000236220ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. 
Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testNotFinished(): "Stuff that would be nice to implement." f_1 = format["float"](1) f_2 = format["float"](2) f_4 = format["float"](4) f_8 = format["float"](8) f0 = FloatValue(4) f1 = FloatValue(2) f2 = FloatValue(8) s0 = Symbol("x", GEO) s1 = Symbol("y", GEO) s2 = Symbol("z", GEO) a = Symbol("a", GEO) b = Symbol("b", GEO) c = Symbol("c", GEO) # Aux. expressions p0 = Product([f1, s0]) p1 = Product([f2, s1]) p2 = Product([s0, s1]) F0 = Fraction(f0, s0) S0 = Sum([p0, p1]) S1 = Sum([s0, p2]) S2 = Sum([FloatValue(1), s1]) S3 = Sum([F0, F0]) # Thing to be implemented e0 = f0 / S0 e1 = s0 / S1 e2 = S2 / S1 e3 = _group_fractions(S3) e4 = Sum([Fraction(f1 * s0, a * b * c), Fraction(s0, a * b)]).expand().reduce_ops() # Tests that pass the current implementation assert str(e0) == '%s/(%s*x + %s*y)' % (f_4, f_2, f_8) assert str(e1) == 'x/(x + x*y)' assert str(e2) == '(%s + y)/(x + x*y)' % f_1 assert str(e3) == '%s/x' % f_8 assert str(e4) == 'x*(%s/(a*b) + %s/(a*b*c))' % (f_1, f_2) # Tests that should pass in future implementations (change NotEqual to Equal) assert str(e0) != '%s/(x + %s*y)' % (f_2, f_4) assert str(e1) != '%s/(%s + y)' % (f_1, f_1) assert str(e2) != '%s/x' % f_1 assert str(e4) != 'x*(%s/c + %s)/(a*b)' % (f_2, f_1) # TODO: Would it be a good idea to reduce expressions # wrt. var_type without first expanding? E0 = Product([Sum([Product([Symbol('B0', BASIS), Product([Symbol('B1', BASIS), Sum([s0]), Sum([s0])])]), Product([Symbol('B0', BASIS), Symbol('B1', BASIS)])])]) Er0 = E0.reduce_vartype(BASIS) Ex0 = E0.expand().reduce_vartype(BASIS) assert Ex0[0][1] != Er0[0][1].expand() # Both of these reductions should work at the same time # 1) 2/(x/(a+b) + y/(a+b)) --> 2(a+b)/(x+y) # 2) 2/(x + y/(a+b)) --> no reduction, or if divisions are more expensive # 3) 2/(x + y/(a+b)) --> 2(a+b)/((a+b)x + y) ffc-2019.1.0.post0/test/unit/symbolics/test_poisson.py000077500000000000000000000063341346026536000226310ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. 
# # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testPoisson(): poisson = """((Jinv_00*FE0_D10_ip_j + Jinv_10*FE0_D01_ip_j)*(Jinv_00*FE0_D10_ip_k + Jinv_10*FE0_D01_ip_k) + (Jinv_01*FE0_D10_ip_j + Jinv_11*FE0_D01_ip_j)*(Jinv_01*FE0_D10_ip_k + Jinv_11*FE0_D01_ip_k))*W4_ip*det""" expr = Product([ Sum([ Product([ Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_D10_ip_j", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_D01_ip_j", BASIS)]) ]), Sum([ Product([Symbol("Jinv_00", GEO), Symbol("FE0_D10_ip_k", BASIS)]), Product([Symbol("Jinv_10", GEO), Symbol("FE0_D01_ip_k", BASIS)]) ]) ]), Product([ Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_D10_ip_j", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_D01_ip_j", BASIS)]) ]), Sum([ Product([Symbol("Jinv_01", GEO), Symbol("FE0_D10_ip_k", BASIS)]), Product([Symbol("Jinv_11", GEO), Symbol("FE0_D01_ip_k", BASIS)]) ]) ]) ]), Symbol("W4_ip", IP), Symbol("det", GEO) ]) expr_exp = expr.expand() poisson_exp = expand_operations(poisson, format) expr_red = expr_exp.reduce_ops() poisson_red = reduce_operations(poisson, format) poisson_exp_ops = operation_count(poisson_exp, format) poisson_red_ops = operation_count(poisson_red, format) Jinv_00, Jinv_01, Jinv_10, Jinv_11, W4_ip, det = (1.1, 1.5, -4.3, 1.7, 11, 52.3) FE0_D01_ip_j, FE0_D10_ip_j, FE0_D01_ip_k, FE0_D10_ip_k = (1.12, 5.7, -9.3, 7.4) assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(poisson)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(poisson_exp)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(poisson_red)), 10) == 0.0 assert expr.ops() == 17 assert poisson_exp_ops == 47 assert expr_exp.ops() == 47 assert poisson_red_ops == 23 assert expr_red.ops() == 23 ffc-2019.1.0.post0/test/unit/symbolics/test_product.py000077500000000000000000000066041346026536000226170ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testProduct(): "Test simple product instance." 
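    # The checks below cover construction of Product objects from floats and
    # symbols, the simplifications applied on construction (zero factors, sign
    # handling, collected constants), str/repr formatting, operation counts,
    # equality and hashing.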
f_0 = format["float"](0) f_1 = format["float"](1) f0 = FloatValue(-2.0) f1 = FloatValue(3.0) f2 = FloatValue(0) f3 = FloatValue(-1) f4 = FloatValue(1) f5 = FloatValue(-0.5) f6 = FloatValue(2.0) s0 = Symbol("x", BASIS) s1 = Symbol("y", GEO) s2 = Symbol("z", GEO) p0 = Product([]) p1 = Product([s0]) p2 = Product([s0, s1]) p3 = Product([f1, s0, s1]) p4 = Product([s0, f2, s2]) p5 = Product([s0, f0, s1, f1, s2]) p6 = Product([s0, f3, s1]) p7 = Product([s0, f4, s1]).expand().reduce_ops() p8 = Product([s0, f0, s2, f5]) p9 = Product([s0, s1]) p10 = Product([p0, p1]) p11 = Product([f5, f0]) p12 = Product([f6, f5]) p13 = Product([f6, f5]).expand() p14 = Product([f1, f2]) p_tmp = Product([f1]) p_tmp.expand() p15 = Product([p_tmp, s0]) assert repr(p0) == "Product([FloatValue(%s)])" % f_0 assert repr(p1) == "Product([Symbol('x', BASIS)])" assert repr(p3) == "Product([FloatValue(%s), Symbol('x', BASIS), Symbol('y', GEO)])"\ % format["float"](3) assert repr(p6) == "Product([FloatValue(-%s), Symbol('x', BASIS), Symbol('y', GEO)])" % f_1 assert repr(p7) == "Product([Symbol('x', BASIS), Symbol('y', GEO)])" assert repr(p8) == "Product([Symbol('x', BASIS), Symbol('z', GEO)])" assert str(p2) == 'x*y' assert str(p4) == '%s' % f_0 assert str(p5) == '-%s*x*y*z' % format["float"](6) assert str(p6) == ' - x*y' assert str(p7) == 'x*y' assert str(p8) == 'x*z' assert str(p9) == 'x*y' assert p0.val == 0 assert str(p10) == '%s' % f_0 assert str(p11) == '%s' % f_1 assert str(p12) == '-%s' % f_1 assert str(p13) == '-%s' % f_1 assert repr(p14) == "Product([FloatValue(%s)])" % f_0 assert repr(p14.expand()) == "FloatValue(%s)" % f_0 assert p1 == p1 assert p1 != p7 assert p4 != p3 assert p2 == p9 assert p2 != p3 assert p0.ops() == 0 assert p1.ops() == 0 assert p2.ops() == 1 assert p3.ops() == 2 assert p4.ops() == 0 assert p5.ops() == 3 assert p6.ops() == 1 assert p7.ops() == 1 assert p8.ops() == 1 assert p9.ops() == 1 assert p10.ops() == 0 assert p14.ops() == 0 # Test hash ll = [p3] d = {p3: 0} p10 = Product([f1, s0, s1]) assert p3 in ll assert p3 in d assert p10 in ll assert p10 in d assert p2 not in ll assert p2 not in d ffc-2019.1.0.post0/test/unit/symbolics/test_product_operators.py000077500000000000000000000066641346026536000247230ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
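# Binary-operator tests for Product objects: '+', '-', '*' and '/' against
# floats, symbols, products, sums and fractions. Division by zero and by a
# Fraction are expected to raise (see the pytest.raises checks at the end).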
import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import error, push_level, pop_level, CRITICAL def testProductOperators(): "Test binary operators" f_0 = format["float"](0) f_2 = format["float"](2) f_4 = format["float"](4) f0 = FloatValue(0.0) f1 = FloatValue(1.0) f2 = FloatValue(2.0) fm1 = FloatValue(-1.0) fm3 = FloatValue(-3.0) x = Symbol("x", GEO) y = Symbol("y", GEO) z = Symbol("z", GEO) p0 = Product([f2, x]) p1 = Product([x, y]) p2 = Product([f2, z]) p3 = Product([x, y, z]) S0 = Sum([x, y]) S1 = Sum([x, z]) F0 = Fraction(f2, x) F1 = Fraction(x, y) F2 = Fraction(x, S0) F3 = Fraction(x, y) F4 = Fraction(p0, y) # Test Product '+' assert str(p0 + f2) == '(%s + %s*x)' % (f_2, f_2) assert str(p0 + x) == '%s*x' % format["float"](3) assert str(p0 + y) == '(y + %s*x)' % f_2 assert str(p0 + p0) == '%s*x' % f_4 assert str(p0 + p1) == '(%s*x + x*y)' % f_2 assert p0 + Product([fm1, x]) == x assert Product([fm1, x]) + x == f0 assert str(x + Product([fm1, x])) == '%s' % f_0 assert str(p0 + S0) == '(x + y + %s*x)' % f_2 assert str(p0 + F3) == '(%s*x + x/y)' % f_2 # Test Product '-' assert str(p0 - f2) == '(%s*x-%s)' % (f_2, f_2) assert str(p0 - x) == 'x' assert str(p0 - y) == '(%s*x - y)' % f_2 assert str(p0 - p0) == '%s' % f_0 assert str(p0 - p1) == '(%s*x - x*y)' % f_2 assert str(p0 - S0) == '(%s*x - (x + y))' % f_2 assert str(p0 - F3) == '(%s*x - x/y)' % f_2 # Test Product '*', only need to test float, symbol and product. # Sum and fraction are handled by 'other' assert str(p0 * f0) == '%s' % f_0 assert str(p0 * fm3) == '-%s*x' % format["float"](6) assert str(p0 * y) == '%s*x*y' % f_2 assert str(p0 * p1) == '%s*x*x*y' % f_2 # Test Product '/' assert str(Product([f0, x]) / x) == '%s' % f_0 assert str(p0 / S0) == '%s*x/(x + y)' % f_2 assert p1 / y == x assert p1 / p2 == Fraction(Product([p1, FloatValue(0.5)]), z) assert p1 / z == Fraction(p1, z) assert p0 / Product([f2, p1]) == Fraction(f1, y) assert p1 / p0 == Product([FloatValue(0.5), y]) assert p1 / p1 == f1 assert p1 / p3 == Fraction(f1, z) assert str(p1 / p3) == '%s/z' % format["float"](1) with pytest.raises(Exception): truediv(p0, f0) with pytest.raises(Exception): truediv(p0, F0) ffc-2019.1.0.post0/test/unit/symbolics/test_real_examples.py000077500000000000000000000032451346026536000237560ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
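# Regression check built from an expression taken from generated quadrature
# code: reduce_vartype(BASIS) applied directly and after expand() must give
# the same BASIS factorisation.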
import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testRealExamples(): p = Product([Symbol('FE0_C1_D01[ip][k]', BASIS), Sum([ Symbol('Jinv_10', GEO), Symbol('w[4][0]', GEO) ]), Sum([ Symbol('Jinv_10', GEO), Symbol('w[4][0]', GEO) ]) ]) br = p.reduce_vartype(BASIS) be = p.expand().reduce_vartype(BASIS) if len(be) == 1: if be[0][0] == br[0]: if be[0][1] != br[1].expand(): print("\nbe: ", repr(be[0][1])) print("\nbr: ", repr(br[1].expand())) print("\nbe: ", be[0][1]) print("\nbr: ", br[1].expand()) raise RuntimeError("Should not be here") ffc-2019.1.0.post0/test/unit/symbolics/test_reduce_gip.py000077500000000000000000000226421346026536000232450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testReduceGIP(): expr = Sum([ Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G0", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F13", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G1", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F10", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G3", GEO) ]), Product([ Symbol("F10", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F10", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F20", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G5", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G6", GEO) ]), Product([ Symbol("F10", IP), Symbol("F10", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G1", GEO) ]), Product([ Symbol("F10", IP),
Symbol("F10", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G7", GEO) ]), Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F13", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G7", GEO) ]), Product([ Symbol("F10", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G8", GEO) ]), Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G5", GEO) ]), Product([ Symbol("F10", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G9", GEO) ]), Product([ Symbol("F10", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G6", GEO) ]), Product([ Symbol("F10", IP), Symbol("F11", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G8", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F10", IP), Symbol("F11", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G9", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G3", GEO) ]), Product([ Symbol("F10", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G9", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G0", GEO) ]), Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G7", GEO) ]), Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F18", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G0", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G6", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), 
Symbol("F3", IP), Symbol("W9", IP), Symbol("G8", GEO) ]), Product([ Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G2", GEO) ]), Product([ Symbol("F10", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F13", IP), Symbol("F20", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G4", GEO) ]), Product([ Symbol("F11", IP), Symbol("F11", IP), Symbol("F12", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("W9", IP), Symbol("G5", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("F8", IP), Symbol("F9", IP), Symbol("W9", IP), Symbol("G3", GEO) ]), Product([ Symbol("F17", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G1", GEO) ]), Product([ Symbol("F11", IP), Symbol("F12", IP), Symbol("F17", IP), Symbol("F19", IP), Symbol("F20", IP), Symbol("F3", IP), Symbol("W9", IP), Symbol("G2", GEO) ]) ]) expr_exp = expr.expand() expr_red = expr_exp.reduce_ops() W9 = 9 F1, F2, F3, F4, F5, F6, F7, F8, F9, F10, F11, F12, F13, F14, F15, F16, F17, F18, F19, F20 = [0.123 * i for i in range(1, 21)] G0, G1, G2, G3, G4, G5, G6, G7, G8, G9 = [2.64 + 1.0 / i for i in range(20, 30)] assert round(eval(str(expr)) - eval(str(expr_exp)), 10) == 0.0 assert round(eval(str(expr)) - eval(str(expr_red)), 10) == 0.0 assert expr.ops() == 314 assert expr_exp.ops() == 314 assert expr_red.ops() == 120 ffc-2019.1.0.post0/test/unit/symbolics/test_reduce_operations.py000077500000000000000000000166601346026536000246540ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testReduceOperations(): f_1 = format["float"](1) f_2 = format["float"](2) # Aux. 
variables f2 = FloatValue(2) f0_5 = FloatValue(0.5) f1 = FloatValue(1.0) fm1 = FloatValue(-1.0) x = Symbol("x", GEO) y = Symbol("y", GEO) z = Symbol("z", GEO) a = Symbol("a", GEO) b = Symbol("b", GEO) c = Symbol("c", GEO) d = Symbol("d", GEO) # Simple expand and reduce simple float and symbol objects fx2 = f2.expand() xx = x.expand() fr2 = fx2.reduce_ops() xr = xx.reduce_ops() assert f2 == fr2 assert x == xr # Test product p0 = f2 * x p1 = y * x p2 = x * f2 / y p3 = x * Sum([x, y]) px0 = p0.expand() px1 = p1.expand() pr0 = px0.reduce_ops() pr1 = px1.reduce_ops() assert p0 == pr0 assert p1 == pr1 # Test fraction F0 = Fraction(p0, y) F1 = Fraction(x, p0) F2 = Fraction(p0, p1) F3 = Fraction(Sum([x * x, x * y]), y) F4 = Fraction(Sum([f2 * x, x * y]), a) Fx0 = F0.expand() Fx1 = F1.expand() Fx2 = F2.expand() Fx3 = F3.expand() Fx4 = F4.expand() Fr0 = Fx0.reduce_ops() Fr1 = Fx1.reduce_ops() Fr2 = Fx2.reduce_ops() Fr3 = Fx3.reduce_ops() Fr4 = Fx4.reduce_ops() assert Fr0 == F0 assert Fr1 == f0_5 assert Fr2 == Fraction(f2, y) assert str(Fr3) == "x*(%s + x/y)" % f_1 assert str(Fr4) == "x*(%s + y)/a" % f_2 # Test sum # TODO: Here we might have to add additional tests S0 = Sum([x, y]) S1 = Sum([p0, p1]) S2 = Sum([x, p1]) S3 = Sum([p0, f2 * y]) S4 = Sum([f2 * p1, z * p1]) S5 = Sum([x, x * x, x * x * x]) S6 = Sum([a * x * x, b * x * x * x, c * x * x, d * x * x * x]) S7 = Sum([p0, p1, x * x, f2 * z, y * z]) S8 = Sum([a * y, b * y, x * x * x * y, x * x * x * z]) S9 = Sum([a * y, b * y, c * y, x * x * x * y, f2 * x * x, x * x * x * z]) S10 = Sum([f2 * x * x * y, x * x * y * z]) S11 = Sum([f2 * x * x * y * y, x * x * y * y * z]) S12 = Sum([f2 * x * x * y * y, x * x * y * y * z, a * z, b * z, c * z]) S13 = Sum([Fraction(f1, x), Fraction(f1, y)]) S14 = Sum([Fraction(fm1, x), Fraction(fm1, y)]) S15 = Sum([Fraction(f2, x), Fraction(f2, x)]) S16 = Sum([Fraction(f2 * x, y * z), Fraction(f0_5, y * z)]) S17 = Sum([(f2 * x * y) / a, (x * y * z) / b]) S18 = Sum([(x * y) / a, (x * z) / a, f2 / a, (f2 * x * y) / a]) S19 = Sum([(f2 * x) / a, (x * y) / a, z * x]) S20 = Product([Sum([x, y]), Fraction(a, b), Fraction(Product([c, d]), z)]) S21 = Sum([a * x, b * x, c * x, x * y, x * z, f2 * y, a * y, b * y, f2 * z, a * z, b * z]) S22 = Sum([FloatValue(0.5) * x / y, FloatValue(-0.5) * x / y]) S23 = Sum([x * y * z, x * y * y * y * z * z * z, y * y * y * z * z * z * z, z * z * z * z * z]) Sx0 = S0.expand() Sx1 = S1.expand() Sx2 = S2.expand() Sx3 = S3.expand() Sx4 = S4.expand() Sx5 = S5.expand() Sx6 = S6.expand() Sx7 = S7.expand() Sx8 = S8.expand() Sx9 = S9.expand() Sx10 = S10.expand() Sx11 = S11.expand() Sx12 = S12.expand() Sx13 = S13.expand() Sx14 = S14.expand() Sx15 = S15.expand() Sx16 = S16.expand() Sx17 = S17.expand() Sx18 = S18.expand() Sx19 = S19.expand() Sx20 = S20.expand() Sx21 = S21.expand() Sx22 = S22.expand() Sx23 = S23.expand() Sr0 = Sx0.reduce_ops() Sr1 = Sx1.reduce_ops() Sr2 = Sx2.reduce_ops() Sr3 = Sx3.reduce_ops() Sr4 = Sx4.reduce_ops() Sr5 = Sx5.reduce_ops() Sr6 = Sx6.reduce_ops() Sr7 = Sx7.reduce_ops() Sr8 = Sx8.reduce_ops() Sr9 = Sx9.reduce_ops() Sr10 = Sx10.reduce_ops() Sr11 = Sx11.reduce_ops() Sr12 = Sx12.reduce_ops() Sr13 = Sx13.reduce_ops() Sr14 = Sx14.reduce_ops() Sr15 = Sx15.reduce_ops() Sr16 = Sx16.reduce_ops() Sr17 = Sx17.reduce_ops() Sr18 = Sx18.reduce_ops() Sr19 = Sx19.reduce_ops() Sr20 = Sx20.reduce_ops() Sr21 = Sx21.reduce_ops() Sr22 = Sx22.reduce_ops() Sr23 = Sx23.reduce_ops() assert Sr0 == S0 assert str(Sr1) == "x*(%s + y)" % f_2 # TODO: Should this be (x + x*y)? 
assert str(Sr2) == "x*(%s + y)" % f_1 # assert str(Sr2), "(x + x*y)") assert str(Sr3) == "%s*(x + y)" % f_2 assert str(Sr4) == "x*y*(%s + z)" % f_2 assert str(Sr5) == "x*(%s + x*(%s + x))" % (f_1, f_1) assert str(Sr6) == "x*x*(a + c + x*(b + d))" assert str(Sr7) == "(x*(%s + x + y) + z*(%s + y))" % (f_2, f_2) assert str(Sr8) == "(x*x*x*(y + z) + y*(a + b))" assert str(Sr9) == "(x*x*(%s + x*(y + z)) + y*(a + b + c))" % f_2 assert str(Sr10) == "x*x*y*(%s + z)" % f_2 assert str(Sr11) == "x*x*y*y*(%s + z)" % f_2 assert str(Sr12) == "(x*x*y*y*(%s + z) + z*(a + b + c))" % f_2 assert str(Sr13) == "(%s/x + %s/y)" % (f_1, f_1) assert str(Sr14) == "(-%s/x-%s/y)" % (f_1, f_1) assert str(Sr15) == "%s/x" % format["float"](4) assert str(Sr16) == "(%s + %s*x)/(y*z)" % (format["float"](0.5), f_2) assert str(Sr17) == "x*y*(%s/a + z/b)" % f_2 assert str(Sr18) == "(%s + x*(z + %s*y))/a" % (f_2, format["float"](3)) assert str(Sr19) == "x*(z + (%s + y)/a)" % f_2 assert str(Sr20) == "a*c*d*(x + y)/(b*z)" assert str(Sr21) == "(x*(a + b + c + y + z) + y*(%s + a + b) + z*(%s + a + b))" % (f_2, f_2) assert str(Sr22) == "%s" % format["float"](0) assert str(Sr23) == "(x*y*z + z*z*z*(y*y*y*(x + z) + z*z))" assert S0.ops() == 1 assert Sr0.ops() == 1 assert S1.ops() == 3 assert Sr1.ops() == 2 assert S2.ops() == 2 assert Sr2.ops() == 2 assert S3.ops() == 3 assert Sr3.ops() == 2 assert S4.ops() == 5 assert Sr4.ops() == 3 assert S5.ops() == 5 assert Sr5.ops() == 4 assert S6.ops() == 13 assert Sr6.ops() == 6 assert S7.ops() == 9 assert Sr7.ops() == 6 assert S8.ops() == 11 assert Sr8.ops() == 7 assert S9.ops() == 16 assert Sr9.ops() == 9 assert S10.ops() == 7 assert Sr10.ops() == 4 assert S11.ops() == 9 assert Sr11.ops() == 5 assert S12.ops() == 15 assert Sr12.ops() == 9 assert S13.ops() == 3 assert Sr13.ops() == 3 assert S14.ops() == 3 assert Sr14.ops() == 3 assert S15.ops() == 3 assert Sr15.ops() == 1 assert S16.ops() == 6 assert Sr16.ops() == 4 assert S17.ops() == 7 assert Sr17.ops() == 5 assert S18.ops() == 11 assert Sr18.ops() == 5 assert S19.ops() == 7 assert Sr19.ops() == 4 assert S20.ops() == 6 assert Sr20.ops() == 6 assert S21.ops() == 21 assert Sr21.ops() == 13 assert S23.ops() == 21 assert Sr23.ops() == 12 ffc-2019.1.0.post0/test/unit/symbolics/test_reduce_vartype.py000077500000000000000000000103311346026536000241500ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
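# Tests for reduce_vartype(): each expression is split into (match, remainder)
# pairs, where 'match' collects the factors of the requested variable type
# (BASIS, IP, GEO or CONST) and 'remainder' holds everything else.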
import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testReduceVarType(): f1 = FloatValue(1) f2 = FloatValue(2) f3 = FloatValue(3) f5 = FloatValue(5) fm4 = FloatValue(-4) B0 = Symbol("B0", BASIS) B1 = Symbol("B1", BASIS) Bm4 = Product([fm4, B1]) B5 = Product([f5, B0]) I0 = Symbol("I0", IP) I1 = Symbol("I1", IP) I2 = Symbol("I2", IP) I5 = Product([f5, I0]) G0 = Symbol("G0", GEO) G1 = Symbol("G1", GEO) G2 = Symbol("G2", GEO) G3 = Product([f3, G0]) C0 = Symbol("C0", CONST) C2 = Product([f2, C0]) p0 = Product([B0, I5]) p1 = Product([B0, B1]) S0 = Sum([B0, I5]) S1 = Sum([p0, p1]) S2 = Sum([B0, B1]) S3 = Sum([B0, p0]) S4 = Sum([f5, p0]) S5 = Sum([I0, G0]) F0 = Fraction(B0, I5).expand() F1 = Fraction(p1, I5).expand() F2 = Fraction(G3, S2).expand() F3 = Fraction(G3, S3).expand() F4 = Fraction(I1, Sum([I1, I0])) F5 = Fraction(S5, I1) F6 = Fraction(I0, Sum([ Fraction(Sum([I0, I1]), Sum([G0, G1])), Fraction(Sum([I1, I2]), Sum([G1, G2])), ])) r0 = B0.reduce_vartype(BASIS) r1 = B0.reduce_vartype(CONST) rp0 = p0.reduce_vartype(BASIS) rp1 = p0.reduce_vartype(IP) rp2 = p1.reduce_vartype(BASIS) rp3 = p1.reduce_vartype(GEO) rs0 = S0.reduce_vartype(BASIS) rs1 = S0.reduce_vartype(IP) rs2 = S1.reduce_vartype(BASIS) rs3 = S4.reduce_vartype(BASIS) rs4 = S4.reduce_vartype(CONST) rf0 = F0.reduce_vartype(BASIS) rf1 = F1.reduce_vartype(BASIS) rf2 = F0.reduce_vartype(IP) rf3 = F2.reduce_vartype(BASIS) rf4 = F3.reduce_vartype(BASIS) rf5 = F4.reduce_vartype(IP) rf6 = F5.reduce_vartype(IP) rf7 = F6.reduce_vartype(IP) assert [(B0, f1)] == r0 assert [((), B0)] == r1 assert [(B0, I5)] == rp0 assert [(I0, B5)] == rp1 assert [(p1, f1)] == rp2 assert [((), p1)] == rp3 assert ((), I5) == rs0[0] assert (B0, f1) == rs0[1] assert (I0, f5) == rs1[1] assert ((), B0) == rs1[0] assert (Product([B0, B1]), f1) == rs2[1] assert (B0, I5) == rs2[0] assert ((), f5) == rs3[0] assert (B0, I5) == rs3[1] assert (f5, Sum([f1, Product([B0, I0])])) == rs4[0] assert [(B0, Fraction(FloatValue(0.2), I0))] == rf0 assert [(Product([B0, B1]), Fraction(FloatValue(0.2), I0))] == rf1 assert [(Fraction(f1, I0), Product([FloatValue(0.2), B0]))] == rf2 assert [(Fraction(f1, S2), G3)] == rf3 assert [(Fraction(f1, B0), Fraction(G3, Sum([I5, f1])))] == rf4 assert F4 == rf5[0][0] assert FloatValue(1) == rf5[0][1] assert Fraction(I0, I1) == rf6[1][0] assert f1 == rf6[1][1] assert Fraction(f1, I1) == rf6[0][0] assert G0 == rf6[0][1] assert F6 == rf7[0][0] assert f1 == rf7[0][1] expr = Sum([Symbol('W1', GEO), Fraction(Symbol('det', GEO), Sum([Symbol('F0', IP), Symbol('K_11', GEO)]))]) red = expr.expand().reduce_vartype(IP) vals = [] for ip in red: ip_dec, geo = ip if ip_dec and geo: vals.append(Product([ip_dec, geo])) elif geo: vals.append(geo) elif ip_dec: vals.append(ip_dec) comb = Sum(vals).expand() K_11 = 1.4 F0 = 1.5 W1 = 1.9 det = 2.1 assert round(eval(str(expr)) - eval(str(comb)), 10) == 0.0 ffc-2019.1.0.post0/test/unit/symbolics/test_sum.py000077500000000000000000000047501346026536000217430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. 
# # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testSum(): "Test simple sum instance." f_0 = format["float"](0) f_1 = format["float"](1) f_2 = format["float"](2) f_3 = format["float"](3) f0 = FloatValue(-2.0) f1 = FloatValue(3.0) f2 = FloatValue(0) s0 = Symbol("x", BASIS) s1 = Symbol("y", GEO) s2 = Symbol("z", GEO) S0 = Sum([]) S1 = Sum([s0]) S2 = Sum([s0, s1]) S3 = Sum([s0, s0]) S4 = Sum([f0, s0]) S5 = Sum([s0, f0, s0]) S6 = Sum([s0, f0, s0, f1]) S7 = Sum([s0, f0, s1, f2]) S8 = Sum([s0, f1, s0]) S9 = Sum([f0, f0, f0, f1, f1, s1]) S10 = Sum([s1, s0]) assert repr(S0) == "Sum([FloatValue(%s)])" % f_0 assert S0.t == CONST assert repr(S1) == "Sum([Symbol('x', BASIS)])" assert repr(S4) == "Sum([FloatValue(-%s), Symbol('x', BASIS)])" % f_2 assert repr(S9) == "Sum([Symbol('y', GEO)])" assert str(S2) == "(x + y)" assert str(S3) == "(x + x)" assert str(S5) == "(x + x-%s)" % f_2 assert str(S6) == "(%s + x + x)" % f_1 assert str(S7) == "(x + y-%s)" % f_2 assert str(S8) == "(%s + x + x)" % f_3 assert str(S9) == "y" assert S2 == S2 assert S2 != S3 assert S5 != S6 assert S2 == S10 assert S0.ops() == 0 assert S1.ops() == 0 assert S2.ops() == 1 assert S3.ops() == 1 assert S4.ops() == 1 assert S5.ops() == 2 assert S6.ops() == 2 assert S7.ops() == 2 assert S8.ops() == 2 assert S9.ops() == 0 # Test hash ll = [S2] d = {S2: 0} assert S2 in ll assert S2 in d assert S10 in ll assert S10 in d ffc-2019.1.0.post0/test/unit/symbolics/test_sum_operators.py000077500000000000000000000056631346026536000240450ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
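# Binary-operator tests for Sum objects: '+', '-', '*' and '/' against floats,
# symbols, products, sums and fractions. Division by zero and by a Fraction
# are expected to raise (see the pytest.raises checks at the end).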
import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import error, push_level, pop_level, CRITICAL def testSumOperators(): "Test binary operators" f_0_5 = format["float"](0.5) f_1 = format["float"](1) f_2 = format["float"](2) f_3 = format["float"](3) f_6 = format["float"](6) f2 = FloatValue(2.0) fm3 = FloatValue(-3.0) x = Symbol("x", GEO) y = Symbol("y", GEO) z = Symbol("z", GEO) p0 = Product([f2, x]) p1 = Product([x, y]) S0 = Sum([x, y]) S1 = Sum([x, z]) F0 = Fraction(p0, y) # Test Sum '+' assert str(S0 + f2) == '(%s + x + y)' % f_2 assert str(S0 + x) == '(x + x + y)' assert str(S0 + p0) == '(x + y + %s*x)' % f_2 assert str(S0 + S0) == '(x + x + y + y)' assert str(S0 + F0) == '(x + y + %s*x/y)' % f_2 # Test Sum '-' assert str(S0 - f2) == '(x + y-%s)' % f_2 assert str(S0 - fm3) == '(x + y + %s)' % f_3 assert str(S0 - x) == '(x + y - x)' assert str(S0 - p0) == '(x + y-%s*x)' % f_2 assert str(S0 - Product([fm3, p0])) == '(x + y + %s*x)' % f_6 assert str(S0 - S0) == '(x + y - (x + y))' assert str(S0 - F0) == '(x + y - %s*x/y)' % f_2 # Test Sum '*' assert str(S0 * f2) == '(%s*x + %s*y)' % (f_2, f_2) assert str(S0 * x) == '(x*x + x*y)' assert str(S0 * p0) == '(%s*x*x + %s*x*y)' % (f_2, f_2) assert str(S0 * S0) == '(%s*x*y + x*x + y*y)' % f_2 assert str(S0 * F0) == '(%s*x + %s*x*x/y)' % (f_2, f_2) # Test Sum '/' assert str(S0 / f2) == '(%s*x + %s*y)' % (f_0_5, f_0_5) assert str(S0 / x) == '(%s + y/x)' % f_1 assert str(S0 / p0) == '(%s + %s*y/x)' % (f_0_5, f_0_5) assert str(S0 / p1) == '(%s/x + %s/y)' % (f_1, f_1) assert str(S0 / S0) == '(x + y)/(x + y)' assert str(S0 / S1) == '(x + y)/(x + z)' with pytest.raises(Exception): truediv(S0, FloatValue(0)) with pytest.raises(Exception): truediv(S0, F0) ffc-2019.1.0.post0/test/unit/symbolics/test_symbol.py000077500000000000000000000033451346026536000224430ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.symbolics import * from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) def testSymbol(): "Test simple symbol instance." 
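    # The assertions below exercise Symbol equality (name and variable type
    # must both match), ordering, ops() counts and hashing.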
s0 = Symbol("x", BASIS) s1 = Symbol("y", IP) s2 = Symbol("z", GEO) s3 = Symbol("z", GEO) s4 = Symbol("z", IP) assert repr(s0) == "Symbol('x', BASIS)" assert repr(s1) == "Symbol('y', IP)" assert repr(s2) == "Symbol('z', GEO)" assert repr(s4) == "Symbol('z', IP)" assert s2 == s3 assert (s2 == s1) is False assert (s2 == s4) is False assert (s2 != s3) is False assert s2 != s1 assert s0 < s1 assert s4 > s1 assert s0.ops() == 0 assert s1.ops() == 0 assert s2.ops() == 0 assert s3.ops() == 0 assert s4.ops() == 0 # Test hash ll = [s0] d = {s0: 0} s5 = Symbol('x', BASIS) assert s0 in ll assert s0 in d assert s5 in ll assert s5 in d ffc-2019.1.0.post0/test/unit/symbolics/test_symbol_operators.py000077500000000000000000000060571346026536000245440ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2010 Kristian B. Oelgaard # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . import pytest import time # FFC modules from ffc.quadrature.reduce_operations import operation_count, expand_operations, reduce_operations from ffc.quadrature.symbolics import * from ffc.quadrature.sumobj import _group_fractions from ffc.quadrature.cpp import format, set_float_formatting from ffc.parameters import FFC_PARAMETERS set_float_formatting(FFC_PARAMETERS['precision']) from ffc.log import error, push_level, pop_level, CRITICAL def testSymbolOperators(): "Test binary operators" f_0 = format["float"](0) f_1 = format["float"](1) f_2 = format["float"](2) f_3 = format["float"](3) f_0_5 = format["float"](0.5) f0 = FloatValue(0.0) f2 = FloatValue(2.0) fm1 = FloatValue(-1.0) fm3 = FloatValue(-3.0) x = Symbol("x", GEO) y = Symbol("y", GEO) z = Symbol("z", GEO) p0 = Product([f2, x]) p1 = Product([x, y]) p2 = Product([f2, z]) p3 = Product([y, x, z]) S0 = Sum([x, y]) S1 = Sum([x, z]) F0 = Fraction(f2, y) F1 = Fraction(x, y) F2 = Fraction(x, S0) F3 = Fraction(x, y) F4 = Fraction(p0, y) F5 = Fraction(fm3, y) # Test Symbol '+' assert str(x + f2) == '(%s + x)' % f_2 assert str(x + x) == '%s*x' % f_2 assert str(x + y) == '(x + y)' assert str(x + p0) == '%s*x' % f_3 assert str(x + p1) == '(x + x*y)' assert str(x + S0) == '(x + x + y)' assert str(x + F0) == '(x + %s/y)' % f_2 # Test Symbol '-' assert str(x - f2) == '(x-%s)' % f_2 assert str(x - x) == '%s' % f_0 assert str(x - y) == '(x - y)' assert str(x - p0) == ' - x' assert str(x - p1) == '(x - x*y)' assert str(x - S0) == '(x - (x + y))' assert str(x - F5) == '(x - -%s/y)' % f_3 # Test Symbol '*', only need to test float, symbol and product. 
Sum and # fraction are handled by 'other' assert str(x * f2) == '%s*x' % f_2 assert str(x * y) == 'x*y' assert str(x * p1) == 'x*x*y' # Test Symbol '/' assert str(x / f2) == '%s*x' % f_0_5 assert str(x / x) == '%s' % f_1 assert str(x / y) == 'x/y' assert str(x / S0) == 'x/(x + y)' assert str(x / p0) == '%s' % f_0_5 assert str(y / p1) == '%s/x' % f_1 assert str(z / p0) == '%s*z/x' % f_0_5 assert str(z / p1) == 'z/(x*y)' with pytest.raises(Exception): truediv(x, F0) with pytest.raises(Exception): truediv(y, FloatValue(0)) ffc-2019.1.0.post0/test/unit/test.py000066400000000000000000000001241346026536000170370ustar00rootroot00000000000000# -*- coding: utf-8 -*- import pytest if __name__ == "__main__": pytest.main() ffc-2019.1.0.post0/test/unit/ufc/000077500000000000000000000000001346026536000162665ustar00rootroot00000000000000ffc-2019.1.0.post0/test/unit/ufc/finite_element/000077500000000000000000000000001346026536000212555ustar00rootroot00000000000000ffc-2019.1.0.post0/test/unit/ufc/finite_element/test_evaluate.py000066400000000000000000000257611346026536000245070ustar00rootroot00000000000000# -*- coding: utf-8 -*- # Copyright (C) 2017 Garth N. Wells # # This file is part of FFC. # # FFC is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # FFC is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with FFC. If not, see . 
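# These tests JIT-compile a collection of UFL elements with ffc.jit, wrap the
# generated UFC finite_element objects via ffc_factory, and compare
# evaluate_reference_basis, evaluate_basis(_all) and
# evaluate_reference_basis_derivatives against direct FIAT tabulation.
# Sub-elements of mixed elements are checked recursively.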
import numpy as np import pytest from ufl import FiniteElement, VectorElement, MixedElement from ufl import interval, triangle, tetrahedron import ffc import ffc_factory @pytest.fixture(scope="module") def point_data(): """Points are which evaluate functions are tested""" p = {1: [(0.114,), (0.349,), (0.986,)], 2: [(0.114, 0.854), (0.349, 0.247), (0.986, 0.045)], 3: [(0.114, 0.854, 0.126), (0.349, 0.247, 0.457), (0.986, 0.045, 0.127)]} return p @pytest.fixture(scope="module") def build_ufl_element_list(): """Build collection of UFL elements""" elements = [] # Lagrange elements for cell in (interval, triangle, tetrahedron): for p in range(1, 2): elements.append(FiniteElement("Lagrange", cell, p)) elements.append(VectorElement("Lagrange", cell, p)) elements.append(FiniteElement("Discontinuous Lagrange", cell, p-1)) # Vector elements for cell in (triangle, tetrahedron): for p in range(1, 2): elements.append(FiniteElement("RT", cell, p)) elements.append(FiniteElement("BDM", cell, p)) elements.append(FiniteElement("N1curl", cell, p)) elements.append(FiniteElement("N2curl", cell, p)) elements.append(FiniteElement("Discontinuous Raviart-Thomas", cell, p)) # Mixed elements for cell in (interval, triangle, tetrahedron): for p in range(1, 3): e0 = FiniteElement("Lagrange", cell, p+1) e1 = FiniteElement("Lagrange", cell, p) e2 = VectorElement("Lagrange", cell, p+1) elements.append(MixedElement([e0, e0])) elements.append(MixedElement([e0, e1])) elements.append(MixedElement([e1, e0])) elements.append(MixedElement([e2, e1])) elements.append(MixedElement([MixedElement([e1, e1]), e0])) for cell in (triangle, tetrahedron): for p in range(1, 2): e0 = FiniteElement("Lagrange", cell, p+1) e1 = FiniteElement("Lagrange", cell, p) e2 = VectorElement("Lagrange", cell, p+1) e3 = FiniteElement("BDM", cell, p) e4 = FiniteElement("RT", cell, p+1) e5 = FiniteElement("N1curl", cell, p) elements.append(MixedElement([e1, e2])) elements.append(MixedElement([e3, MixedElement([e4, e5])])) # Misc elements for cell in (triangle,): for p in range(1, 2): elements.append(FiniteElement("HHJ", cell, p)) for cell in (triangle, tetrahedron): elements.append(FiniteElement("CR", cell, 1)) for p in range(1, 2): elements.append(FiniteElement("Regge", cell, p)) return elements def element_id(e): """Return element string signature as ID""" return str(e) @pytest.fixture(params=build_ufl_element_list(), ids=element_id, scope="module") def element_pair(request): """Given a UFL element, returns UFL element and a JIT-compiled and wrapped UFC element. """ ufl_element = request.param ufc_element, ufc_dofmap = ffc.jit(ufl_element, parameters=None) ufc_element = ffc_factory.make_ufc_finite_element(ufc_element) return ufl_element, ufc_element def test_evaluate_reference_basis_vs_fiat(element_pair, point_data): """Tests ufc::finite_element::evaluate_reference_basis against data from FIAT. 
""" ufl_element, ufc_element = element_pair # Get geometric and topological dimensions tdim = ufc_element.topological_dimension() points = point_data[tdim] # Create FIAT element via FFC fiat_element = ffc.fiatinterface.create_element(ufl_element) # Tabulate basis at requsted points via UFC values_ufc = ufc_element.evaluate_reference_basis(points) # Tabulate basis at requsted points via FIAT values_fiat = fiat_element.tabulate(0, points) # Extract and reshape FIAT output key = (0,)*tdim values_fiat = np.array(values_fiat[key]) # Shape checks fiat_value_size = 1 for i in range(ufc_element.reference_value_rank()): fiat_value_size *= values_fiat.shape[i + 1] assert values_fiat.shape[i + 1] == ufc_element.reference_value_dimension(i) # Reshape FIAT output to compare with UFC output values_fiat = values_fiat.reshape((values_fiat.shape[0], ufc_element.reference_value_size(), values_fiat.shape[-1])) # Transpose values_fiat = values_fiat.transpose((2, 0, 1)) # Check values assert np.allclose(values_ufc, values_fiat) # FIXME: This could be fragile because it depend on the order of # the sub-element in UFL being the same at the UFC order. # Test sub-elements recursively n = ufc_element.num_sub_elements() ufl_subelements = ufl_element.sub_elements() assert n == len(ufl_subelements) for i, ufl_e in enumerate(ufl_subelements): ufc_sub_element = ufc_element.create_sub_element(i) test_evaluate_reference_basis_vs_fiat((ufl_e, ufc_sub_element), point_data) def test_evaluate_basis_vs_fiat(element_pair, point_data): """Tests ufc::finite_element::evaluate_basis and ufc::finite_element::evaluate_basis_all against data from FIAT. """ ufl_element, ufc_element = element_pair # Get geometric and topological dimensions gdim = ufc_element.geometric_dimension() points = point_data[gdim] # Get geometric and topological dimensions tdim = ufc_element.topological_dimension() # Create FIAT element via FFC fiat_element = ffc.fiatinterface.create_element(ufl_element) # Tabulate basis at requsted points via FIAT values_fiat = fiat_element.tabulate(0, points) # Extract and reshape FIAT output key = (0,)*tdim values_fiat = np.array(values_fiat[key]) # Shape checks fiat_value_size = 1 for i in range(ufc_element.reference_value_rank()): fiat_value_size *= values_fiat.shape[i + 1] assert values_fiat.shape[i + 1] == ufc_element.reference_value_dimension(i) # Reshape FIAT output to compare with UFC output values_fiat = values_fiat.reshape((values_fiat.shape[0], ufc_element.reference_value_size(), values_fiat.shape[-1])) # Transpose values_fiat = values_fiat.transpose((2, 0, 1)) # Get reference cell cell = ufl_element.cell() ref_coords = ffc.fiatinterface.reference_cell(cell.cellname()).get_vertices() # Iterate over each point and test for i, p in enumerate(points): values_ufc = ufc_element.evaluate_basis_all(p, ref_coords, 1) assert np.allclose(values_ufc, values_fiat[i]) for d in range(ufc_element.space_dimension()): values_ufc = ufc_element.evaluate_basis(d, p, ref_coords, 1) assert np.allclose(values_ufc, values_fiat[i][d]) # Test sub-elements recursively n = ufc_element.num_sub_elements() ufl_subelements = ufl_element.sub_elements() assert n == len(ufl_subelements) for i, ufl_e in enumerate(ufl_subelements): ufc_sub_element = ufc_element.create_sub_element(i) test_evaluate_basis_vs_fiat((ufl_e, ufc_sub_element), point_data) @pytest.mark.parametrize("order", range(4)) def test_evaluate_reference_basis_deriv_vs_fiat(order, element_pair, point_data): """Tests ufc::finite_element::evaluate_reference_basis_derivatives against data 
from FIAT. """ ufl_element, ufc_element = element_pair # Get geometric and topological dimensions tdim = ufc_element.topological_dimension() gdim = ufc_element.geometric_dimension() points = point_data[tdim] # Create FIAT element via FFC fiat_element = ffc.fiatinterface.create_element(ufl_element) # Tabulate basis at requested points via UFC values_ufc = ufc_element.evaluate_reference_basis_derivatives(order, points) # Tabulate basis at requested points via FIAT values_fiat = fiat_element.tabulate(order, points) # Compute 'FFC' style derivative indicators deriv_order = order geo_dim = tdim num_derivatives = geo_dim**deriv_order combinations = [[0] * deriv_order for n in range(num_derivatives)] for i in range(1, num_derivatives): for k in range(i): j = deriv_order - 1 while j + 1 > 0: j -= 1 if combinations[i][j] + 1 > geo_dim - 1: combinations[i][j] = 0 else: combinations[i][j] += 1 break def to_fiat_tuple(comb, gdim): """Convert a list of combinations of derivatives to a fiat tuple of derivatives. FIAT expects a list with the number of derivatives in each spatial direction. E.g., in 2D: u_{xyy} --> [0, 1, 1] in FFC --> (1, 2) in FIAT. """ new_comb = [0] * gdim if comb == []: return tuple(new_comb) for i in range(gdim): new_comb[i] = comb.count(i) return tuple(new_comb) derivs = [to_fiat_tuple(c, tdim) for c in combinations] assert len(derivs) == tdim**order # Iterate over derivatives for i, d in enumerate(derivs): values_fiat_slice = np.array(values_fiat[d]) # Shape checks fiat_value_size = 1 for j in range(ufc_element.reference_value_rank()): fiat_value_size *= values_fiat_slice.shape[j + 1] assert values_fiat_slice.shape[j + 1] == ufc_element.reference_value_dimension(j) # Reshape FIAT output to compare with UFC output values_fiat_slice = values_fiat_slice.reshape((values_fiat_slice.shape[0], ufc_element.reference_value_size(), values_fiat_slice.shape[-1])) # Transpose values_fiat_slice = values_fiat_slice.transpose((2, 0, 1)) # Compare assert np.allclose(values_ufc[:,:,i,:], values_fiat_slice) # Test sub-elements recursively n = ufc_element.num_sub_elements() ufl_subelements = ufl_element.sub_elements() assert n == len(ufl_subelements) for i, ufl_e in enumerate(ufl_subelements): ufc_sub_element = ufc_element.create_sub_element(i) test_evaluate_reference_basis_deriv_vs_fiat(order, (ufl_e, ufc_sub_element), point_data)
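# For reference, with tdim = 2 and order = 2 the combination loop in
# test_evaluate_reference_basis_deriv_vs_fiat enumerates
# combinations = [[0, 0], [1, 0], [0, 1], [1, 1]], which to_fiat_tuple maps to
# the FIAT derivative tuples (2, 0), (1, 1), (1, 1), (0, 2); the test then
# compares values_ufc[:, :, i, :] against the FIAT values for the i-th tuple.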