dolfin-2017.2.0.post0/0000755000231000000010000000000013216543017013376 5ustar chrisdaemondolfin-2017.2.0.post0/.mailmap0000644000231000000010000001575313216542752015037 0ustar chrisdaemonAnders Logg Anders Logg Anders Logg Anders Logg Anders Logg logg Anders Logg fenics Anders Logg hg Anders Logg root Anders Logg Anders Logg Anders Logg Anders Logg Anders Logg Anders Logg anders Anders Logg logg Anders Logg logg Anders Logg logg Anders Logg logg Andy R. Terrel Andy R. Terrel Andy R. Terrel aterrel Andy R. Terrel Benjamin Kehlet Benjamin Kehlet Benjamin Kehlet Benjamin Kehlet Benjamin Kehlet Chris Richardson Chris Richardson chris Chris Richardson Chris Richardson Chris Richardson root Corrado Maurini Corrado Maurini Dag Lindbo Dag Lindbo dag Dag Lindbo dag David Ham David Ham david.ham@imperial.ac.uk <> Evan Lezar Evan Lezar Evan Lezar elezar Evan Lezar elezar Fredrik Valdmanis Fredrik Valdmanis Fredrik Valdmanis Garth N. Wells Garth N. Wells root Garth N. Wells garth Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells Garth N. Wells gnw20@cam.ac.uk <> gideonsimpson Gustav Magnus Vikström Gustav Magnus Vikström Gustav Magnus Vikström Gustav Magnus Vikström Harish Narayanan Harish Narayanan Harish Narayanan Johan Hoffman Johan Hoffman hoffman Johan Hoffman Johan Hoffman Ilmar Wilbers Ilmar Wilbers Ilmar Wilbers Jack S. Hale Jack S. Hale Johan Hake Johan Hake Johan Hake Johan Jansson Johan Jansson johan Johan Jansson johan Johan Jansson johanjan Johan Jansson johanjan Johan Jansson Johan Jansson Johannes Ring Johannes Ring Kent-Andre Mardal Kent-Andre Mardal Kristian B. Ølgaard Kristian B. Ølgaard Kristian B. Ølgaard Magnus Vikstrøm Marco Morandini Marco Morandini Marie E. Rognes Marie E. Rognes Marie E. Rognes Marie E. Rognes Marie E. Rognes (meg@simula.no) Marie E. Rognes meg@simula.no <> Martin Sandve Alnæs Martin Sandve Alnæs Martin Sandve Alnæs Michele Zaffalon Mikael Mortensen Mikael Mortensen Nate Sime Nate Sime Nuno Lopes Nuno Lopes N.Lopes Patrick Farrell Patrick Farrell Patrick Farrell Quang Ha Kristoffer Selim Kristoffer Selim Simon Funke Simon Funke Simon Funke Solveig Bruvoll Solveig Masvie Steven Vandekerckhove Steven Vandekerckhove stockli stockli Tianyi Li Steffen Müthing Steffen Müthing Steffen Müthing steffen.muething@ipvs.uni-stuttgart.de <> Miklós Homolya Åsmund Ødegård Åsmund Ødegård Ola Skavhaug Andre Massing Andrew McRae dolfin-2017.2.0.post0/utils/0000755000231000000010000000000013216543010014527 5ustar chrisdaemondolfin-2017.2.0.post0/utils/pylit/0000755000231000000010000000000013216543010015670 5ustar chrisdaemondolfin-2017.2.0.post0/utils/pylit/LICENSE0000644000231000000010000004340513216543010016703 0ustar chrisdaemonThis license applies only to the pylit.py script in this directory. GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. (Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. dolfin-2017.2.0.post0/utils/pylit/README.rst0000644000231000000010000000305313216543010017360 0ustar chrisdaemonPyLit ***** `Literate Programming`_ with reStructuredText_ .. epigraph:: The idea is that you do not document programs (after the fact), but write documents that *contain* the programs. -- John Max Skaller What is it PyLit (Python Literate) provides a plain but efficient tool for literate programming: a `bidirectional text/code converter`. Install Put `pylit.py`_ in Python's `Module Search Path`_. .. _pylit.py: http://github.com/gmilde/PyLit/raw/master/pylit.py Documentation coming back soon (was available at http://pylit.berlios.de) .. note: The previous host of the PyLit project, berlios.de closed at the end of 2011. Copyright © 2005, 2009 Günter Milde. License PyLit is `free software`_, released under the `GNU General Public License`_ (GPL) version 2 or later. PyLit is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the `GNU General Public License`_ for more details. .. References .. _Charming Python interview: http://www.ibm.com/developerworks/library/l-pyth7.html .. _bidirectional text/code converter: features.html#dual-source .. _literate programming: literate-programming.html .. _reStructuredText: http://docutils.sourceforge.net/rst.html .. _module search path: http://docs.python.org/tutorial/modules.html#the-module-search-path .. _`free software`: http://www.gnu.org/philosophy/free-sw.html .. _`GNU General Public License`: http://www.gnu.org/copyleft/gpl.html dolfin-2017.2.0.post0/utils/pylit/pylit.py0000755000231000000010000016745513216543010017430 0ustar chrisdaemon#!/usr/bin/env python # -*- coding: iso-8859-1 -*- # pylit.py # ******** # Literate programming with reStructuredText # ++++++++++++++++++++++++++++++++++++++++++ # # :Date: $Date$ # :Revision: $Revision$ # :URL: $URL$ # :Copyright: © 2005, 2007 Günter Milde. # Released without warranty under the terms of the # GNU General Public License (v. 2 or later) # # :: from __future__ import print_function """pylit: bidirectional text <-> code converter Convert between a *text document* with embedded code and *source code* with embedded documentation. """ # .. contents:: # # Frontmatter # =========== # # Changelog # --------- # # .. class:: borderless # # ====== ========== =========================================================== # 0.1 2005-06-29 Initial version. # 0.1.1 2005-06-30 First literate version. # 0.1.2 2005-07-01 Object orientated script using generators. # 0.1.3 2005-07-10 Two state machine (later added 'header' state). # 0.2b 2006-12-04 Start of work on version 0.2 (code restructuring). # 0.2 2007-01-23 Published at http://pylit.berlios.de. # 0.2.1 2007-01-25 Outsourced non-core documentation to the PyLit pages. # 0.2.2 2007-01-26 New behaviour of `diff` function. # 0.2.3 2007-01-29 New `header` methods after suggestion by Riccardo Murri. # 0.2.4 2007-01-31 Raise Error if code indent is too small. # 0.2.5 2007-02-05 New command line option --comment-string. # 0.2.6 2007-02-09 Add section with open questions, # Code2Text: let only blank lines (no comment str) # separate text and code, # fix `Code2Text.header`. # 0.2.7 2007-02-19 Simplify `Code2Text.header`, # new `iter_strip` method replacing a lot of ``if``-s. # 0.2.8 2007-02-22 Set `mtime` of outfile to the one of infile. # 0.3 2007-02-27 New `Code2Text` converter after an idea by Riccardo Murri, # explicit `option_defaults` dict for easier customisation. # 0.3.1 2007-03-02 Expand hard-tabs to prevent errors in indentation, # `Text2Code` now also works on blocks, # removed dependency on SimpleStates module. # 0.3.2 2007-03-06 Bug fix: do not set `language` in `option_defaults` # renamed `code_languages` to `languages`. # 0.3.3 2007-03-16 New language css, # option_defaults -> defaults = optparse.Values(), # simpler PylitOptions: don't store parsed values, # don't parse at initialisation, # OptionValues: return `None` for non-existing attributes, # removed -infile and -outfile, use positional arguments. # 0.3.4 2007-03-19 Documentation update, # separate `execute` function. # 2007-03-21 Code cleanup in `Text2Code.__iter__`. # 0.3.5 2007-03-23 Removed "css" from known languages after learning that # there is no C++ style "// " comment string in CSS2. # 0.3.6 2007-04-24 Documentation update. # 0.4 2007-05-18 Implement Converter.__iter__ as stack of iterator # generators. Iterating over a converter instance now # yields lines instead of blocks. # Provide "hooks" for pre- and postprocessing filters. # Rename states to reduce confusion with formats: # "text" -> "documentation", "code" -> "code_block". # 0.4.1 2007-05-22 Converter.__iter__: cleanup and reorganisation, # rename parent class Converter -> TextCodeConverter. # 0.4.2 2007-05-23 Merged Text2Code.converter and Code2Text.converter into # TextCodeConverter.converter. # 0.4.3 2007-05-30 Replaced use of defaults.code_extensions with # values.languages.keys(). # Removed spurious `print` statement in code_block_handler. # Added basic support for 'c' and 'css' languages # with `dumb_c_preprocessor`_ and `dumb_c_postprocessor`_. # 0.5 2007-06-06 Moved `collect_blocks`_ out of `TextCodeConverter`_, # bug fix: collect all trailing blank lines into a block. # Expand tabs with `expandtabs_filter`_. # 0.6 2007-06-20 Configurable code-block marker (default ``::``) # 0.6.1 2007-06-28 Bug fix: reset self.code_block_marker_missing. # 0.7 2007-12-12 prepending an empty string to sys.path in run_doctest() # to allow imports from the current working dir. # 0.7.1 2008-01-07 If outfile does not exist, do a round-trip conversion # and report differences (as with outfile=='-'). # 0.7.2 2008-01-28 Do not add missing code-block separators with # `doctest_run` on the code source. Keeps lines consistent. # 0.7.3 2008-04-07 Use value of code_block_marker for insertion of missing # transition marker in Code2Text.code_block_handler # Add "shell" to defaults.languages # 0.7.4 2008-06-23 Add "latex" to defaults.languages # 0.7.5 2009-05-14 Bugfix: ignore blank lines in test for end of code block # 0.7.6 2009-12-15 language-dependent code-block markers (after a # `feature request and patch by jrioux`_), # use DefaultDict for language-dependent defaults, # new defaults setting `add_missing_marker`_. # 0.7.7 2010-06-23 New command line option --codeindent. # 0.7.8 2011-03-30 bugfix: do not overwrite custom `add_missing_marker` value, # allow directive options following the 'code' directive. # 0.7.9 2011-04-05 Decode doctest string if 'magic comment' gives encoding. # ====== ========== =========================================================== # # :: _version = "0.7.9" __docformat__ = 'restructuredtext' # Introduction # ------------ # # PyLit is a bidirectional converter between two formats of a computer # program source: # # * a (reStructured) text document with program code embedded in # *code blocks*, and # * a compilable (or executable) code source with *documentation* # embedded in comment blocks # # # Requirements # ------------ # # :: import os, sys, io import re, optparse # DefaultDict # ~~~~~~~~~~~ # As `collections.defaultdict` is only introduced in Python 2.5, we # define a simplified version of the dictionary with default from # http://code.activestate.com/recipes/389639/ # :: class DefaultDict(dict): """Minimalistic Dictionary with default value.""" def __init__(self, default=None, *args, **kwargs): self.update(dict(*args, **kwargs)) self.default = default def __getitem__(self, key): return self.get(key, self.default) # Defaults # ======== # # The `defaults` object provides a central repository for default # values and their customisation. :: defaults = optparse.Values() # It is used for # # * the initialisation of data arguments in TextCodeConverter_ and # PylitOptions_ # # * completion of command line options in `PylitOptions.complete_values`_. # # This allows the easy creation of back-ends that customise the # defaults and then call `main`_ e.g.: # # >>> import pylit # >>> pylit.defaults.comment_string = "## " # >>> pylit.defaults.codeindent = 4 # >>> pylit.main() # # The following default values are defined in pylit.py: # # languages # --------- # # Mapping of code file extensions to code language:: defaults.languages = DefaultDict("python", # fallback language {".c": "c", ".cc": "c++", ".cpp": "c++", ".css": "css", ".py": "python", ".sh": "shell", ".sl": "slang", ".sty": "latex", ".tex": "latex", ".ufl": "python" }) # Will be overridden by the ``--language`` command line option. # # The first argument is the fallback language, used if there is no # matching extension (e.g. if pylit is used as filter) and no # ``--language`` is specified. It can be changed programmatically by # assignment to the ``.default`` attribute, e.g. # # >>> defaults.languages.default='c++' # # # .. _text_extension: # # text_extensions # --------------- # # List of known extensions of (reStructured) text files. The first # extension in this list is used by the `_get_outfile_name`_ method to # generate a text output filename:: defaults.text_extensions = [".txt", ".rst"] # comment_strings # --------------- # # Comment strings for known languages. Used in Code2Text_ to recognise # text blocks and in Text2Code_ to format text blocks as comments. # Defaults to ``'# '``. # # **Comment strings include trailing whitespace.** :: defaults.comment_strings = DefaultDict('# ', {"css": '// ', "c": '// ', "c++": '// ', "latex": '% ', "python": '# ', "shell": '# ', "slang": '% ' }) # header_string # ------------- # # Marker string for a header code block in the text source. No trailing # whitespace needed as indented code follows. # Must be a valid rst directive that accepts code on the same line, e.g. # ``'..admonition::'``. # # Default is a comment marker:: defaults.header_string = '..' # .. _code_block_marker: # # code_block_markers # ------------------ # # Markup at the end of a documentation block. # Default is Docutils' marker for a `literal block`_:: defaults.code_block_markers = DefaultDict('::') defaults.code_block_markers["c++"] = u".. code-block:: cpp" #defaults.code_block_markers['python'] = '.. code-block:: python' # The `code_block_marker` string is `inserted into a regular expression`_. # Language-specific markers can be defined programmatically, e.g. in a # wrapper script. # # In a document where code examples are only one of several uses of # literal blocks, it is more appropriate to single out the source code # ,e.g. with the double colon at a separate line ("expanded form") # # ``defaults.code_block_marker.default = ':: *'`` # # or a dedicated ``.. code-block::`` directive [#]_ # # ``defaults.code_block_marker['c++'] = '.. code-block:: *c++'`` # # The latter form also allows code in different languages kept together # in one literate source file. # # .. [#] The ``.. code-block::`` directive is not (yet) supported by # standard Docutils. It is provided by several add-ons, including # the `code-block directive`_ project in the Docutils Sandbox and # Sphinx_. # # # strip # ----- # # Export to the output format stripping documentation or code blocks:: defaults.strip = False # strip_marker # ------------ # # Strip literal marker from the end of documentation blocks when # converting to code format. Makes the code more concise but looses the # synchronisation of line numbers in text and code formats. Can also be used # (together with the auto-completion of the code-text conversion) to change # the `code_block_marker`:: defaults.strip_marker = False # add_missing_marker # ------------------ # # When converting from code format to text format, add a `code_block_marker` # at the end of documentation blocks if it is missing:: defaults.add_missing_marker = True # Keep this at ``True``, if you want to re-convert to code format later! # # # .. _defaults.preprocessors: # # preprocessors # ------------- # # Preprocess the data with language-specific filters_ # Set below in Filters_:: defaults.preprocessors = {} # .. _defaults.postprocessors: # # postprocessors # -------------- # # Postprocess the data with language-specific filters_:: defaults.postprocessors = {} # .. _defaults.codeindent: # # codeindent # ---------- # # Number of spaces to indent code blocks in `Code2Text.code_block_handler`_:: defaults.codeindent = 2 # In `Text2Code.code_block_handler`_, the codeindent is determined by the # first recognised code line (header or first indented literal block # of the text source). # # overwrite # --------- # # What to do if the outfile already exists? (ignored if `outfile` == '-'):: defaults.overwrite = 'yes' # Recognised values: # # :'yes': overwrite eventually existing `outfile`, # :'update': fail if the `outfile` is newer than `infile`, # :'no': fail if `outfile` exists. # # # Extensions # ========== # # Try to import optional extensions:: try: import pylit_elisp except ImportError: pass # Converter Classes # ================= # # The converter classes implement a simple state machine to separate and # transform documentation and code blocks. For this task, only a very limited # parsing is needed. PyLit's parser assumes: # # * `indented literal blocks`_ in a text source are code blocks. # # * comment blocks in a code source where every line starts with a matching # comment string are documentation blocks. # # TextCodeConverter # ----------------- # :: class TextCodeConverter(object): """Parent class for the converters `Text2Code` and `Code2Text`. """ # The parent class defines data attributes and functions used in both # `Text2Code`_ converting a text source to executable code source, and # `Code2Text`_ converting commented code to a text source. # # Data attributes # ~~~~~~~~~~~~~~~ # # Class default values are fetched from the `defaults`_ object and can be # overridden by matching keyword arguments during class instantiation. This # also works with keyword arguments to `get_converter`_ and `main`_, as these # functions pass on unused keyword args to the instantiation of a converter # class. :: language = defaults.languages.default comment_strings = defaults.comment_strings comment_string = "" # set in __init__ (if empty) codeindent = defaults.codeindent header_string = defaults.header_string code_block_markers = defaults.code_block_markers code_block_marker = "" # set in __init__ (if empty) strip = defaults.strip strip_marker = defaults.strip_marker add_missing_marker = defaults.add_missing_marker directive_option_regexp = re.compile(r' +:(\w|[-._+:])+:( |$)') state = "" # type of current block, see `TextCodeConverter.convert`_ # Interface methods # ~~~~~~~~~~~~~~~~~ # # .. _TextCodeConverter.__init__: # # __init__ # """""""" # # Initialising sets the `data` attribute, an iterable object yielding lines of # the source to convert. [#]_ # # .. [#] The most common choice of data is a `file` object with the text # or code source. # # To convert a string into a suitable object, use its splitlines method # like ``"2 lines\nof source".splitlines(True)``. # # # Additional keyword arguments are stored as instance variables, # overwriting the class defaults:: def __init__(self, data, **keyw): """data -- iterable data object (list, file, generator, string, ...) **keyw -- remaining keyword arguments are stored as data-attributes """ self.data = data self.__dict__.update(keyw) # If empty, `code_block_marker` and `comment_string` are set according # to the `language`:: if not self.code_block_marker: self.code_block_marker = self.code_block_markers[self.language] if not self.comment_string: self.comment_string = self.comment_strings[self.language] self.stripped_comment_string = self.comment_string.rstrip() # Pre- and postprocessing filters are set (with # `TextCodeConverter.get_filter`_):: self.preprocessor = self.get_filter("preprocessors", self.language) self.postprocessor = self.get_filter("postprocessors", self.language) # .. _inserted into a regular expression: # # Finally, a regular_expression for the `code_block_marker` is compiled # to find valid cases of `code_block_marker` in a given line and return # the groups: ``\1 prefix, \2 code_block_marker, \3 remainder`` :: marker = self.code_block_marker if marker == '::': # the default marker may occur at the end of a text line self.marker_regexp = re.compile('^( *(?!\.\.).*)(::)([ \n]*)$') else: # marker must be on a separate line self.marker_regexp = re.compile('^( *)(%s)(.*\n?)$' % marker) # .. _TextCodeConverter.__iter__: # # __iter__ # """""""" # # Return an iterator for the instance. Iteration yields lines of converted # data. # # The iterator is a chain of iterators acting on `self.data` that does # # * preprocessing # * text<->code format conversion # * postprocessing # # Pre- and postprocessing are only performed, if filters for the current # language are registered in `defaults.preprocessors`_ and|or # `defaults.postprocessors`_. The filters must accept an iterable as first # argument and yield the processed input data line-wise. # :: def __iter__(self): """Iterate over input data source and yield converted lines """ return self.postprocessor(self.convert(self.preprocessor(self.data))) # .. _TextCodeConverter.__call__: # # __call__ # """""""" # The special `__call__` method allows the use of class instances as callable # objects. It returns the converted data as list of lines:: def __call__(self): """Iterate over state-machine and return results as list of lines""" return [line for line in self] # .. _TextCodeConverter.__str__: # # __str__ # """"""" # Return converted data as string:: def __str__(self): return "".join(self()) # Helpers and convenience methods # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # # .. _TextCodeConverter.convert: # # convert # """"""" # # The `convert` method generates an iterator that does the actual code <--> # text format conversion. The converted data is yielded line-wise and the # instance's `status` argument indicates whether the current line is "header", # "documentation", or "code_block":: def convert(self, lines): """Iterate over lines of a program document and convert between "text" and "code" format """ # Initialise internal data arguments. (Done here, so that every new iteration # re-initialises them.) # # `state` # the "type" of the currently processed block of lines. One of # # :"": initial state: check for header, # :"header": leading code block: strip `header_string`, # :"documentation": documentation part: comment out, # :"code_block": literal blocks containing source code: unindent. # # :: self.state = "" # `_codeindent` # * Do not confuse the internal attribute `_codeindent` with the configurable # `codeindent` (without the leading underscore). # * `_codeindent` is set in `Text2Code.code_block_handler`_ to the indent of # first non-blank "code_block" line and stripped from all "code_block" lines # in the text-to-code conversion, # * `codeindent` is set in `__init__` to `defaults.codeindent`_ and added to # "code_block" lines in the code-to-text conversion. # # :: self._codeindent = 0 # `_textindent` # * set by `Text2Code.documentation_handler`_ to the minimal indent of a # documentation block, # * used in `Text2Code.set_state`_ to find the end of a code block. # # :: self._textindent = 0 # `_add_code_block_marker` # If the last paragraph of a documentation block does not end with a # code_block_marker_, it should be added (otherwise, the back-conversion # fails.). # # `_add_code_block_marker` is set by `Code2Text.documentation_handler`_ # and evaluated by `Code2Text.code_block_handler`_, because the # documentation_handler does not know whether the next block will be # documentation (with no need for a code_block_marker) or a code block. # # :: self._add_code_block_marker = False # Determine the state of the block and convert with the matching "handler":: for block in collect_blocks(expandtabs_filter(lines)): self.set_state(block) for line in getattr(self, self.state+"_handler")(block): yield line # .. _TextCodeConverter.get_filter: # # get_filter # """""""""" # :: def get_filter(self, filter_set, language): """Return language specific filter""" if self.__class__ == Text2Code: key = "text2"+language elif self.__class__ == Code2Text: key = language+"2text" else: key = "" try: return getattr(defaults, filter_set)[key] except (AttributeError, KeyError): # print("there is no %r filter in %r"%(key, filter_set)) pass return identity_filter # get_indent # """""""""" # Return the number of leading spaces in `line`:: def get_indent(self, line): """Return the indentation of `string`. """ return len(line) - len(line.lstrip()) # Text2Code # --------- # # The `Text2Code` converter separates *code-blocks* [#]_ from *documentation*. # Code blocks are unindented, documentation is commented (or filtered, if the # ``strip`` option is True). # # .. [#] Only `indented literal blocks`_ are considered code-blocks. `quoted # literal blocks`_, `parsed-literal blocks`_, and `doctest blocks`_ are # treated as part of the documentation. This allows the inclusion of # examples: # # >>> 23 + 3 # 26 # # Mark that there is no double colon before the doctest block in the # text source. # # The class inherits the interface and helper functions from # TextCodeConverter_ and adds functions specific to the text-to-code format # conversion:: class Text2Code(TextCodeConverter): """Convert a (reStructured) text source to code source """ # .. _Text2Code.set_state: # # set_state # ~~~~~~~~~ # :: def set_state(self, block): """Determine state of `block`. Set `self.state` """ # `set_state` is used inside an iteration. Hence, if we are out of data, a # StopItertion exception should be raised:: if not block: raise StopIteration # The new state depends on the active state (from the last block) and # features of the current block. It is either "header", "documentation", or # "code_block". # # If the current state is "" (first block), check for # the `header_string` indicating a leading code block:: if self.state == "": # print("set state for %r"%block) if block[0].startswith(self.header_string): self.state = "header" else: self.state = "documentation" # If the current state is "documentation", the next block is also # documentation. The end of a documentation part is detected in the # `Text2Code.documentation_handler`_:: # elif self.state == "documentation": # self.state = "documentation" # A "code_block" ends with the first less indented, non-blank line. # `_textindent` is set by the documentation handler to the indent of the # preceding documentation block:: elif self.state in ["code_block", "header"]: indents = [self.get_indent(line) for line in block if line.rstrip()] # print("set_state:", indents, self._textindent) if indents and min(indents) <= self._textindent: self.state = 'documentation' else: self.state = 'code_block' # TODO: (or not to do?) insert blank line before the first line with too-small # codeindent using self.ensure_trailing_blank_line(lines, line) (would need # split and push-back of the documentation part)? # # .. _Text2Code.header_handler: # # header_handler # ~~~~~~~~~~~~~~ # # Sometimes code needs to remain on the first line(s) of the document to be # valid. The most common example is the "shebang" line that tells a POSIX # shell how to process an executable file:: #!/usr/bin/env python # In Python, the special comment to indicate the encoding, e.g. # ``# -*- coding: iso-8859-1 -*-``, must occur before any other comment # or code too. # # If we want to keep the line numbers in sync for text and code source, the # reStructured Text markup for these header lines must start at the same line # as the first header line. Therefore, header lines could not be marked as # literal block (this would require the ``::`` and an empty line above the # code_block). # # OTOH, a comment may start at the same line as the comment marker and it # includes subsequent indented lines. Comments are visible in the reStructured # Text source but hidden in the pretty-printed output. # # With a header converted to comment in the text source, everything before # the first documentation block (i.e. before the first paragraph using the # matching comment string) will be hidden away (in HTML or PDF output). # # This seems a good compromise, the advantages # # * line numbers are kept # * the "normal" code_block conversion rules (indent/unindent by `codeindent` apply # * greater flexibility: you can hide a repeating header in a project # consisting of many source files. # # set off the disadvantages # # - it may come as surprise if a part of the file is not "printed", # - one more syntax element to learn for rst newbies to start with pylit, # (however, starting from the code source, this will be auto-generated) # # In the case that there is no matching comment at all, the complete code # source will become a comment -- however, in this case it is not very likely # the source is a literate document anyway. # # If needed for the documentation, it is possible to quote the header in (or # after) the first documentation block, e.g. as `parsed literal`. # :: def header_handler(self, lines): """Format leading code block""" # strip header string from first line lines[0] = lines[0].replace(self.header_string, "", 1) # yield remaining lines formatted as code-block for line in self.code_block_handler(lines): yield line # .. _Text2Code.documentation_handler: # # documentation_handler # ~~~~~~~~~~~~~~~~~~~~~ # # The 'documentation' handler processes everything that is not recognised as # "code_block". Documentation is quoted with `self.comment_string` # (or filtered with `--strip=True`). # # If end-of-documentation marker is detected, # # * set state to 'code_block' # * set `self._textindent` (needed by `Text2Code.set_state`_ to find the # next "documentation" block) # # :: def documentation_handler(self, lines): """Convert documentation blocks from text to code format """ for line in lines: # test lines following the code-block marker for false positives if (self.state == "code_block" and line.rstrip() and not self.directive_option_regexp.search(line)): self.state = "documentation" # test for end of documentation block if self.marker_regexp.search(line): self.state = "code_block" self._textindent = self.get_indent(line) # yield lines if self.strip: continue # do not comment blank lines preceding a code block if self.state == "code_block" and not line.rstrip(): yield line else: yield self.comment_string + line # .. _Text2Code.code_block_handler: # # code_block_handler # ~~~~~~~~~~~~~~~~~~ # # The "code_block" handler is called with an indented literal block. It # removes leading whitespace up to the indentation of the first code line in # the file (this deviation from Docutils behaviour allows indented blocks of # Python code). :: def code_block_handler(self, block): """Convert indented literal blocks to source code format """ # If still unset, determine the indentation of code blocks from first non-blank # code line:: if self._codeindent == 0: self._codeindent = self.get_indent(block[0]) # Yield unindented lines after check whether we can safely unindent. If the # line is less indented then `_codeindent`, something got wrong. :: for line in block: if line.lstrip() and self.get_indent(line) < self._codeindent: raise ValueError("code block contains line less indented " "than %d spaces \n%r" % (self._codeindent, block)) yield line.replace(" "*self._codeindent, "", 1) # Code2Text # --------- # # The `Code2Text` converter does the opposite of `Text2Code`_ -- it processes # a source in "code format" (i.e. in a programming language), extracts # documentation from comment blocks, and puts program code in literal blocks. # # The class inherits the interface and helper functions from # TextCodeConverter_ and adds functions specific to the text-to-code format # conversion:: class Code2Text(TextCodeConverter): """Convert code source to text source """ # set_state # ~~~~~~~~~ # # Check if block is "header", "documentation", or "code_block": # # A paragraph is "documentation", if every non-blank line starts with a # matching comment string (including whitespace except for commented blank # lines) :: def set_state(self, block): """Determine state of `block`.""" for line in block: # skip documentation lines (commented, blank or blank comment) if (line.startswith(self.comment_string) or not line.rstrip() or line.rstrip() == self.comment_string.rstrip() ): continue # non-commented line found: if self.state == "": self.state = "header" else: self.state = "code_block" break else: # no code line found # keep state if the block is just a blank line # if len(block) == 1 and self._is_blank_codeline(line): # return self.state = "documentation" # header_handler # ~~~~~~~~~~~~~~ # # Handle a leading code block. (See `Text2Code.header_handler`_ for a # discussion of the "header" state.) :: def header_handler(self, lines): """Format leading code block""" if self.strip == True: return # get iterator over the lines that formats them as code-block lines = iter(self.code_block_handler(lines)) # prepend header string to first line yield self.header_string + lines.next() # yield remaining lines for line in lines: yield line # .. _Code2Text.documentation_handler: # # documentation_handler # ~~~~~~~~~~~~~~~~~~~~~ # # The *documentation state* handler converts a comment to a documentation # block by stripping the leading `comment string` from every line:: def documentation_handler(self, block): """Uncomment documentation blocks in source code """ # Strip comment strings:: lines = [self.uncomment_line(line) for line in block] # If the code block is stripped, the literal marker would lead to an # error when the text is converted with Docutils. Strip it as well. :: if self.strip or self.strip_marker: self.strip_code_block_marker(lines) # Otherwise, check for the `code_block_marker`_ at the end of the # documentation block (skipping directive options that might follow it):: elif self.add_missing_marker: for line in lines[::-1]: if self.marker_regexp.search(line): self._add_code_block_marker = False break if (line.rstrip() and not self.directive_option_regexp.search(line)): self._add_code_block_marker = True break else: self._add_code_block_marker = True # Yield lines:: for line in lines: yield line # uncomment_line # ~~~~~~~~~~~~~~ # # Return documentation line after stripping comment string. Consider the # case that a blank line has a comment string without trailing whitespace:: def uncomment_line(self, line): """Return uncommented documentation line""" line = line.replace(self.comment_string, "", 1) if line.rstrip() == self.stripped_comment_string: line = line.replace(self.stripped_comment_string, "", 1) return line # .. _Code2Text.code_block_handler: # # code_block_handler # ~~~~~~~~~~~~~~~~~~ # # The `code_block` handler returns the code block as indented literal # block (or filters it, if ``self.strip == True``). The amount of the code # indentation is controlled by `self.codeindent` (default 2). :: def code_block_handler(self, lines): """Covert code blocks to text format (indent or strip) """ if self.strip == True: return # eventually insert transition marker if self._add_code_block_marker: self.state = "documentation" yield self.code_block_marker + "\n" yield "\n" self._add_code_block_marker = False self.state = "code_block" for line in lines: yield " "*self.codeindent + line # strip_code_block_marker # ~~~~~~~~~~~~~~~~~~~~~~~ # # Replace the literal marker with the equivalent of Docutils replace rules # # * strip ``::``-line (and preceding blank line) if on a line on its own # * strip ``::`` if it is preceded by whitespace. # * convert ``::`` to a single colon if preceded by text # # `lines` is a list of documentation lines (with a trailing blank line). # It is modified in-place:: def strip_code_block_marker(self, lines): try: line = lines[-2] except IndexError: return # just one line (no trailing blank line) # match with regexp: `match` is None or has groups # \1 leading text, \2 code_block_marker, \3 remainder match = self.marker_regexp.search(line) if not match: # no code_block_marker present return if not match.group(1): # `code_block_marker` on an extra line del(lines[-2]) # delete preceding line if it is blank if len(lines) >= 2 and not lines[-2].lstrip(): del(lines[-2]) elif match.group(1).rstrip() < match.group(1): # '::' follows whitespace lines[-2] = match.group(1).rstrip() + match.group(3) else: # '::' follows text lines[-2] = match.group(1).rstrip() + ':' + match.group(3) # Filters # ======= # # Filters allow pre- and post-processing of the data to bring it in a format # suitable for the "normal" text<->code conversion. An example is conversion # of `C` ``/*`` ``*/`` comments into C++ ``//`` comments (and back). # Another example is the conversion of `C` ``/*`` ``*/`` comments into C++ # ``//`` comments (and back). # # Filters are generator functions that return an iterator acting on a # `data` iterable and yielding processed `data` lines. # # identity_filter # --------------- # # The most basic filter is the identity filter, that returns its argument as # iterator:: def identity_filter(data): """Return data iterator without any processing""" return iter(data) # expandtabs_filter # ----------------- # # Expand hard-tabs in every line of `data` (cf. `str.expandtabs`). # # This filter is applied to the input data by `TextCodeConverter.convert`_ as # hard tabs can lead to errors when the indentation is changed. :: def expandtabs_filter(data): """Yield data tokens with hard-tabs expanded""" for line in data: yield line.expandtabs() # collect_blocks # -------------- # # A filter to aggregate "paragraphs" (blocks separated by blank # lines). Yields lists of lines:: def collect_blocks(lines): """collect lines in a list yield list for each paragraph, i.e. block of lines separated by a blank line (whitespace only). Trailing blank lines are collected as well. """ blank_line_reached = False block = [] for line in lines: if blank_line_reached and line.rstrip(): yield block blank_line_reached = False block = [line] continue if not line.rstrip(): blank_line_reached = True block.append(line) yield block # dumb_c_preprocessor # ------------------- # # This is a basic filter to convert `C` to `C++` comments. Works line-wise and # only converts lines that # # * start with "/\* " and end with " \*/" (followed by whitespace only) # # A more sophisticated version would also # # * convert multi-line comments # # + Keep indentation or strip 3 leading spaces? # # * account for nested comments # # * only convert comments that are separated from code by a blank line # # :: def dumb_c_preprocessor(data): """change `C` ``/* `` `` */`` comments into C++ ``// `` comments""" comment_string = defaults.comment_strings["c++"] boc_string = "/* " eoc_string = " */" for line in data: if (line.startswith(boc_string) and line.rstrip().endswith(eoc_string) ): line = line.replace(boc_string, comment_string, 1) line = "".join(line.rsplit(eoc_string, 1)) yield line # Unfortunately, the `replace` method of strings does not support negative # numbers for the `count` argument: # # >>> "foo */ baz */ bar".replace(" */", "", -1) == "foo */ baz bar" # False # # However, there is the `rsplit` method, that can be used together with `join`: # # >>> "".join("foo */ baz */ bar".rsplit(" */", 1)) == "foo */ baz bar" # True # # dumb_c_postprocessor # -------------------- # # Undo the preparations by the dumb_c_preprocessor and re-insert valid comment # delimiters :: def dumb_c_postprocessor(data): """change C++ ``// `` comments into `C` ``/* `` `` */`` comments""" comment_string = defaults.comment_strings["c++"] boc_string = "/* " eoc_string = " */" for line in data: if line.rstrip() == comment_string.rstrip(): line = line.replace(comment_string, "", 1) elif line.startswith(comment_string): line = line.replace(comment_string, boc_string, 1) line = line.rstrip() + eoc_string + "\n" yield line # register filters # ---------------- # # :: defaults.preprocessors['c2text'] = dumb_c_preprocessor defaults.preprocessors['css2text'] = dumb_c_preprocessor defaults.postprocessors['text2c'] = dumb_c_postprocessor defaults.postprocessors['text2css'] = dumb_c_postprocessor # Command line use # ================ # # Using this script from the command line will convert a file according to its # extension. This default can be overridden by a couple of options. # # Dual source handling # -------------------- # # How to determine which source is up-to-date? # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # # - set modification date of `outfile` to the one of `infile` # # Points out that the source files are 'synchronised'. # # * Are there problems to expect from "backdating" a file? Which? # # Looking at http://www.unix.com/showthread.php?t=20526, it seems # perfectly legal to set `mtime` (while leaving `ctime`) as `mtime` is a # description of the "actuality" of the data in the file. # # * Should this become a default or an option? # # - alternatively move input file to a backup copy (with option: `--replace`) # # - check modification date before overwriting # (with option: `--overwrite=update`) # # - check modification date before editing (implemented as `Jed editor`_ # function `pylit_check()` in `pylit.sl`_) # # .. _Jed editor: http://www.jedsoft.org/jed/ # .. _pylit.sl: http://jedmodes.sourceforge.net/mode/pylit/ # # Recognised Filename Extensions # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ # # Instead of defining a new extension for "pylit" literate programs, # by default ``.txt`` will be appended for the text source and stripped by # the conversion to the code source. I.e. for a Python program foo: # # * the code source is called ``foo.py`` # * the text source is called ``foo.py.txt`` # * the html rendering is called ``foo.py.html`` # # # OptionValues # ------------ # # The following class adds `as_dict`_, `complete`_ and `__getattr__`_ # methods to `optparse.Values`:: class OptionValues(optparse.Values): # .. _OptionValues.as_dict: # # as_dict # ~~~~~~~ # # For use as keyword arguments, it is handy to have the options in a # dictionary. `as_dict` returns a copy of the instances object dictionary:: def as_dict(self): """Return options as dictionary object""" return self.__dict__.copy() # .. _OptionValues.complete: # # complete # ~~~~~~~~ # # :: def complete(self, **keyw): """ Complete the option values with keyword arguments. Do not overwrite existing values. Only use arguments that do not have a corresponding attribute in `self`, """ for key in keyw: if not self.__dict__.__contains__(key): setattr(self, key, keyw[key]) # .. _OptionValues.__getattr__: # # __getattr__ # ~~~~~~~~~~~ # # To replace calls using ``options.ensure_value("OPTION", None)`` with the # more concise ``options.OPTION``, we define `__getattr__` [#]_ :: def __getattr__(self, name): """Return default value for non existing options""" return None # .. [#] The special method `__getattr__` is only called when an attribute # look-up has not found the attribute in the usual places (i.e. it is # not an instance attribute nor is it found in the class tree for # self). # # # PylitOptions # ------------ # # The `PylitOptions` class comprises an option parser and methods for parsing # and completion of command line options:: class PylitOptions(object): """Storage and handling of command line options for pylit""" # Instantiation # ~~~~~~~~~~~~~ # # :: def __init__(self): """Set up an `OptionParser` instance for pylit command line options """ p = optparse.OptionParser(usage=main.__doc__, version=_version) # Conversion settings p.add_option("-c", "--code2txt", dest="txt2code", action="store_false", help="convert code source to text source") p.add_option("-t", "--txt2code", action="store_true", help="convert text source to code source") p.add_option("--language", choices = list(defaults.languages.values()), help="use LANGUAGE native comment style") p.add_option("--comment-string", dest="comment_string", help="documentation block marker in code source " "(including trailing whitespace, " "default: language dependent)") p.add_option("-m", "--code-block-marker", dest="code_block_marker", help="syntax token starting a code block. (default '::')") p.add_option("--codeindent", type="int", help="Number of spaces to indent code blocks with " "text2code (default %d)" % defaults.codeindent) # Output file handling p.add_option("--overwrite", action="store", choices = ["yes", "update", "no"], help="overwrite output file (default 'update')") p.add_option("--replace", action="store_true", help="move infile to a backup copy (appending '~')") p.add_option("-s", "--strip", action="store_true", help='"export" by stripping documentation or code') # Special actions p.add_option("-d", "--diff", action="store_true", help="test for differences to existing file") p.add_option("--doctest", action="store_true", help="run doctest.testfile() on the text version") p.add_option("-e", "--execute", action="store_true", help="execute code (Python only)") self.parser = p # .. _PylitOptions.parse_args: # # parse_args # ~~~~~~~~~~ # # The `parse_args` method calls the `optparse.OptionParser` on command # line or provided args and returns the result as `PylitOptions.Values` # instance. Defaults can be provided as keyword arguments:: def parse_args(self, args=sys.argv[1:], **keyw): """parse command line arguments using `optparse.OptionParser` parse_args(args, **keyw) -> OptionValues instance args -- list of command line arguments. keyw -- keyword arguments or dictionary of option defaults """ # parse arguments (values, args) = self.parser.parse_args(args, OptionValues(keyw)) # Convert FILE and OUTFILE positional args to option values # (other positional arguments are ignored) try: values.infile = args[0] values.outfile = args[1] except IndexError: pass return values # .. _PylitOptions.complete_values: # # complete_values # ~~~~~~~~~~~~~~~ # # Complete an OptionValues instance `values`. Use module-level defaults and # context information to set missing option values to sensible defaults (if # possible) :: def complete_values(self, values): """complete option values with module and context sensible defaults x.complete_values(values) -> values values -- OptionValues instance """ # Complete with module-level defaults_:: values.complete(**defaults.__dict__) # Ensure infile is a string:: values.ensure_value("infile", "") # Guess conversion direction from `infile` filename:: if values.txt2code is None: in_extension = os.path.splitext(values.infile)[1] if in_extension in values.text_extensions: values.txt2code = True elif in_extension in values.languages.keys(): values.txt2code = False # Auto-determine the output file name:: values.ensure_value("outfile", self._get_outfile_name(values)) # Second try: Guess conversion direction from outfile filename:: if values.txt2code is None: out_extension = os.path.splitext(values.outfile)[1] values.txt2code = not (out_extension in values.text_extensions) # Set the language of the code:: if values.txt2code is True: code_extension = os.path.splitext(values.outfile)[1] elif values.txt2code is False: code_extension = os.path.splitext(values.infile)[1] values.ensure_value("language", values.languages[code_extension]) return values # _get_outfile_name # ~~~~~~~~~~~~~~~~~ # # Construct a matching filename for the output file. The output filename is # constructed from `infile` by the following rules: # # * '-' (stdin) results in '-' (stdout) # * strip the `text_extension`_ (txt2code) or # * add the `text_extension`_ (code2txt) # * fallback: if no guess can be made, add ".out" # # .. TODO: use values.outfile_extension if it exists? # # :: def _get_outfile_name(self, values): """Return a matching output filename for `infile` """ # if input is stdin, default output is stdout if values.infile == '-': return '-' # Derive from `infile` name: strip or add text extension (base, ext) = os.path.splitext(values.infile) if ext in values.text_extensions: return base # strip if ext in values.languages.keys() or values.txt2code == False: return values.infile + values.text_extensions[0] # add # give up return values.infile + ".out" # .. _PylitOptions.__call__: # # __call__ # ~~~~~~~~ # # The special `__call__` method allows to use PylitOptions instances as # *callables*: Calling an instance parses the argument list to extract option # values and completes them based on "context-sensitive defaults". Keyword # arguments are passed to `PylitOptions.parse_args`_ as default values. :: def __call__(self, args=sys.argv[1:], **keyw): """parse and complete command line args return option values """ values = self.parse_args(args, **keyw) return self.complete_values(values) # Helper functions # ---------------- # # open_streams # ~~~~~~~~~~~~ # # Return file objects for in- and output. If the input path is missing, # write usage and abort. (An alternative would be to use stdin as default. # However, this leaves the uninitiated user with a non-responding application # if (s)he just tries the script without any arguments) :: def open_streams(infile = '-', outfile = '-', overwrite='update', **keyw): """Open and return the input and output stream open_streams(infile, outfile) -> (in_stream, out_stream) in_stream -- open(infile) or sys.stdin out_stream -- open(outfile) or sys.stdout overwrite -- 'yes': overwrite eventually existing `outfile`, 'update': fail if the `outfile` is newer than `infile`, 'no': fail if `outfile` exists. Irrelevant if `outfile` == '-'. """ if not infile: strerror = "Missing input file name ('-' for stdin; -h for help)" raise IOError((2, strerror, infile)) if infile == '-': in_stream = sys.stdin else: in_stream = io.open(infile, 'r', encoding='utf-8') if outfile == '-': out_stream = sys.stdout elif overwrite == 'no' and os.path.exists(outfile): raise IOError((1, "Output file exists!", outfile)) elif overwrite == 'update' and is_newer(outfile, infile): raise IOError((1, "Output file is newer than input file!", outfile)) else: out_stream = io.open(outfile, 'w', encoding='utf-8') return (in_stream, out_stream) # is_newer # ~~~~~~~~ # # :: def is_newer(path1, path2): """Check if `path1` is newer than `path2` (using mtime) Compare modification time of files at path1 and path2. Non-existing files are considered oldest: Return False if path1 does not exist and True if path2 does not exist. Return None for equal modification time. (This evaluates to False in a Boolean context but allows a test for equality.) """ try: mtime1 = os.path.getmtime(path1) except OSError: mtime1 = -1 try: mtime2 = os.path.getmtime(path2) except OSError: mtime2 = -1 # print("mtime1", mtime1, path1, "\n", "mtime2", mtime2, path2) if mtime1 == mtime2: return None return mtime1 > mtime2 # get_converter # ~~~~~~~~~~~~~ # # Get an instance of the converter state machine:: def get_converter(data, txt2code=True, **keyw): if txt2code: return Text2Code(data, **keyw) else: return Code2Text(data, **keyw) # Use cases # --------- # # run_doctest # ~~~~~~~~~~~ # :: def run_doctest(infile="-", txt2code=True, globs={}, verbose=False, optionflags=0, **keyw): """run doctest on the text source """ # Allow imports from the current working dir by prepending an empty string to # sys.path (see doc of sys.path()):: sys.path.insert(0, '') # Import classes from the doctest module:: from doctest import DocTestParser, DocTestRunner # Read in source. Make sure it is in text format, as tests in comments are not # found by doctest:: (data, out_stream) = open_streams(infile, "-") if txt2code is False: keyw.update({'add_missing_marker': False}) converter = Code2Text(data, **keyw) docstring = str(converter) else: docstring = data.read() # decode doc string if there is a "magic comment" in the first or second line # (http://docs.python.org/reference/lexical_analysis.html#encoding-declarations) # :: firstlines = ' '.join(docstring.splitlines()[:2]) match = re.search('coding[=:]\s*([-\w.]+)', firstlines) if match: docencoding = match.group(1) docstring = docstring.decode(docencoding) # Use the doctest Advanced API to run all doctests in the source text:: test = DocTestParser().get_doctest(docstring, globs, name="", filename=infile, lineno=0) runner = DocTestRunner(verbose, optionflags) runner.run(test) runner.summarize # give feedback also if no failures occurred if not runner.failures: print("%d failures in %d tests"%(runner.failures, runner.tries)) return runner.failures, runner.tries # diff # ~~~~ # # :: def diff(infile='-', outfile='-', txt2code=True, **keyw): """Report differences between converted infile and existing outfile If outfile does not exist or is '-', do a round-trip conversion and report differences. """ import difflib instream = open(infile) # for diffing, we need a copy of the data as list:: data = instream.readlines() # convert converter = get_converter(data, txt2code, **keyw) new = converter() if outfile != '-' and os.path.exists(outfile): outstream = open(outfile) old = outstream.readlines() oldname = outfile newname = ""%infile else: old = data oldname = infile # back-convert the output data converter = get_converter(new, not txt2code) new = converter() newname = ""%infile # find and print the differences is_different = False # print(type(old), old) # print(type(new), new) delta = difflib.unified_diff(old, new, # delta = difflib.unified_diff(["heute\n", "schon\n"], ["heute\n", "noch\n"], fromfile=oldname, tofile=newname) for line in delta: is_different = True print(line, end="") if not is_different: print(oldname) print(newname) print("no differences found") return is_different # execute # ~~~~~~~ # # Works only for python code. # # Does not work with `eval`, as code is not just one expression. :: def execute(infile="-", txt2code=True, **keyw): """Execute the input file. Convert first, if it is a text source. """ data = open(infile) if txt2code: data = str(Text2Code(data, **keyw)) # print("executing " + options.infile) exec(data) # main # ---- # # If this script is called from the command line, the `main` function will # convert the input (file or stdin) between text and code formats. # # Option default values for the conversion can be given as keyword arguments # to `main`_. The option defaults will be updated by command line options and # extended with "intelligent guesses" by `PylitOptions`_ and passed on to # helper functions and the converter instantiation. # # This allows easy customisation for programmatic use -- just call `main` # with the appropriate keyword options, e.g. ``pylit.main(comment_string="## ")`` # # :: def main(args=sys.argv[1:], **defaults): """%prog [options] INFILE [OUTFILE] Convert between (reStructured) text source with embedded code, and code source with embedded documentation (comment blocks) The special filename '-' stands for standard in and output. """ # Parse and complete the options:: options = PylitOptions()(args, **defaults) # print("infile", repr(options.infile)) # Special actions with early return:: if options.doctest: return run_doctest(**options.as_dict()) if options.diff: return diff(**options.as_dict()) if options.execute: return execute(**options.as_dict()) # Open in- and output streams:: try: (data, out_stream) = open_streams(**options.as_dict()) except IOError as ex: print("IOError: %s %s" % (ex.filename, ex.strerror)) sys.exit(ex.errno) # Get a converter instance:: converter = get_converter(data, **options.as_dict()) # Convert and write to out_stream:: if sys.version_info[0] == 2: out_stream.write(unicode(converter)) else: out_stream.write(str(converter)) if out_stream is not sys.stdout: print("extract written to", out_stream.name) out_stream.close() # If input and output are from files, set the modification time (`mtime`) of # the output file to the one of the input file to indicate that the contained # information is equal. [#]_ :: try: os.utime(options.outfile, (os.path.getatime(options.outfile), os.path.getmtime(options.infile)) ) except OSError: pass ## print("mtime", os.path.getmtime(options.infile), options.infile) ## print("mtime", os.path.getmtime(options.outfile), options.outfile) # .. [#] Make sure the corresponding file object (here `out_stream`) is # closed, as otherwise the change will be overwritten when `close` is # called afterwards (either explicitly or at program exit). # # # Rename the infile to a backup copy if ``--replace`` is set:: if options.replace: os.rename(options.infile, options.infile + "~") # Run main, if called from the command line:: if __name__ == '__main__': main() # Open questions # ============== # # Open questions and ideas for further development # # Clean code # ---------- # # * can we gain from using "shutils" over "os.path" and "os"? # * use pylint or pyChecker to enforce a consistent style? # # Options # ------- # # * Use templates for the "intelligent guesses" (with Python syntax for string # replacement with dicts: ``"hello %(what)s" % {'what': 'world'}``) # # * Is it sensible to offer the `header_string` option also as command line # option? # # treatment of blank lines # ------------------------ # # Alternatives: Keep blank lines blank # # - "never" (current setting) -> "visually merges" all documentation # if there is no interjacent code # # - "always" -> disrupts documentation blocks, # # - "if empty" (no whitespace). Comment if there is whitespace. # # This would allow non-obstructing markup but unfortunately this is (in # most editors) also non-visible markup. # # + "if double" (if there is more than one consecutive blank line) # # With this handling, the "visual gap" remains in both, text and code # source. # # # Parsing Problems # ---------------- # # * Ignore "matching comments" in literal strings? # # Too complicated: Would need a specific detection algorithm for every # language that supports multi-line literal strings (C++, PHP, Python) # # * Warn if a comment in code will become documentation after round-trip? # # # docstrings in code blocks # ------------------------- # # * How to handle docstrings in code blocks? (it would be nice to convert them # to rst-text if ``__docformat__ == restructuredtext``) # # TODO: Ask at Docutils users|developers # # Plug-ins # -------- # # Specify a path for user additions and plug-ins. This would require to # convert Pylit from a pure module to a package... # # 6.4.3 Packages in Multiple Directories # # Packages support one more special attribute, __path__. This is initialized # to be a list containing the name of the directory holding the package's # __init__.py before the code in that file is executed. This # variable can be modified; doing so affects future searches for modules and # subpackages contained in the package. # # While this feature is not often needed, it can be used to extend the set # of modules found in a package. # # # .. References # # .. _Docutils: http://docutils.sourceforge.net/ # .. _Sphinx: http://sphinx.pocoo.org # .. _Pygments: http://pygments.org/ # .. _code-block directive: # http://docutils.sourceforge.net/sandbox/code-block-directive/ # .. _literal block: # .. _literal blocks: # http://docutils.sf.net/docs/ref/rst/restructuredtext.html#literal-blocks # .. _indented literal block: # .. _indented literal blocks: # http://docutils.sf.net/docs/ref/rst/restructuredtext.html#indented-literal-blocks # .. _quoted literal block: # .. _quoted literal blocks: # http://docutils.sf.net/docs/ref/rst/restructuredtext.html#quoted-literal-blocks # .. _parsed-literal blocks: # http://docutils.sf.net/docs/ref/rst/directives.html#parsed-literal-block # .. _doctest block: # .. _doctest blocks: # http://docutils.sf.net/docs/ref/rst/restructuredtext.html#doctest-blocks # # .. _feature request and patch by jrioux: # http://developer.berlios.de/feature/?func=detailfeature&feature_id=4890&group_id=7974 dolfin-2017.2.0.post0/utils/scripts/0000755000231000000010000000000013216543010016216 5ustar chrisdaemondolfin-2017.2.0.post0/utils/scripts/code-formatting0000755000231000000010000001507013216543010021231 0ustar chrisdaemon#!/usr/bin/env python # # Copyright (C) 2009 Anders Logg # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Check and fix coding style in all files (currently very limited) # # Modified by Chris Richardson, 2013 # # Last changed: 2013-05-10 # import os, sys, re, getopt # Camelcaps exceptions camelcaps_exceptions = ["cGq", "dGq", # ODE solver "addListener", "addTest", "getRegistry", "makeTest", "wasSuccessful", # cppunit "tagName", "tagNames", "documentElement", "childNodes", # doscstrings "nodeType", "firstChild", "parentNode", # docstrings "doubleArray", "intArray", # swig "fatalError", "xmlChar", "xmlNode", "xmlStrcasecmp", # libxml2 "startDocument", "endDocument", "startElement", "endElement"] # libxml2 def camelcaps2underscore(word): "Convert fooBar to foo_bar." new_word = "" for c in word: if c.isupper() and len(new_word) > 0: new_word += "_" + c.lower() else: new_word += c return new_word def check_camelcaps(code): "Check for fooBar (should be foo_bar)" problems=[] for i,line in enumerate(code.split("\n")): if(re.search(r"//",line)): continue words = re.findall(r"\b([a-z]+[A-Z][a-z]+)\b", line) for word in words: if word in camelcaps_exceptions: continue if re.match("vtk",word): continue problems.append("[camelCaps ] Line %d: %s"%(i+1,line)) return code, problems def check_trailing_whitespace(code): "Remove trailing spaces at end of lines." problems = [] lines = [] for i,line in enumerate(code.split("\n")): new_line = line.rstrip() lines.append(new_line) if new_line == line: continue problems.append("[trailing whitespace] Line %d: %s"%(i+1,line.replace(" ","_").replace("\r","^M"))) new_code = "\n".join(lines) if len(new_code) > 0 and not new_code[-1] == "\n": new_code += "\n" return new_code, problems def check_tabs(code): "Check for TAB character" problems = [] lines =[] for i,line in enumerate(code.split("\n")): if(re.search("\t",line)): problems.append("[tab ] Line %d: %s"%(i+1,line.replace("\t",r"\t "))) lines.append(re.sub("\t"," ",line)) else: lines.append(line) new_code = "\n".join(lines) if len(new_code) > 0 and not new_code[-1] == "\n": new_code += "\n" return new_code, problems def check_symbol_spacing(code): "Add spaces around +,-,= etc." problems = [] for i,line in enumerate(code.split("\n")): if(re.search(r"//",line)): continue words = re.findall(r"\b[=|+|-|==]\b", line) if(len(words)>0): problems.append("[symbol spacing ] Line %d: %s"%(i,line)) words = re.findall(r"\b,\b", line) if(len(words)>0): problems.append("[comma spacing ] Line %d: %s"%(i,line)) return code, problems def part_of_dolfin(code): "Look for DOLFIN in file" if re.search("DOLFIN",code): return True else: return False def detect_bad_formatting(filename, opts): """Detect where code formatting appears to be wrong. opts Flags: -c for checking camelCaps -w for checking whitespace -s for checking symbol spacing, i.e. =, +, - etc. -wf to fix whitespace errors """ # Open file f = open(filename, "r") code = f.read() f.close() if(not part_of_dolfin(code)): problems = ['[part of dolfin ] Does not contain "DOLFIN" - skipping'] else: # Checks problems = [] if("-c" in opts): code, p = check_camelcaps(code) problems.extend(p) if("-w" in opts): code, p1 = check_trailing_whitespace(code) problems.extend(p1) code, p2 = check_tabs(code) problems.extend(p2) # Rewrite corrected file (only corrects whitespace errors) if("-f" in opts and (len(p1)!=0 or len(p2)!=0)): problems.append('Rewriting file %s'%filename) f = open(filename, "w") f.write(code) f.close() if("-s" in opts): code, p = check_symbol_spacing(code) problems.extend(p) if(len(problems) != 0): print "================================================================================" print filename print "================================================================================" for p in problems: print p def usage(name): print print name," [-c|-w|-s|-f] path" print "\n Check formatting of code in the folders and subfolders of path\n" print " -c check camelCaps" print " -w check white space and tabs" print " -s check symbol spacing" print " -wf fix whitespace errors" print " default: check all" if __name__ == "__main__": try: opts, args = getopt.getopt(sys.argv[1:], "cwsf") except getopt.GetoptError as err: print str(err) # will print something like "option -a not recognized" usage(sys.argv[0]) sys.exit(0) if(len(args)==0): usage(sys.argv[0]) sys.exit(0) filepath = args[0] options=[] for opt, arg in opts: options.append(opt) if len(options)==0: options=['-c','-w','-s'] # Iterate over all source files for root, dirs, files in os.walk(filepath): for file in files: if not file.endswith((".cpp", ".h", ".py", ".i")): continue filename = os.path.join(root, file) detect_bad_formatting(filename, options) dolfin-2017.2.0.post0/utils/scripts/broken0000755000231000000010000000017413216543010017426 0ustar chrisdaemon#!/bin/sh # Find all BROKENs in the code find . -name '*.h' | xargs grep BROKEN find . -name '*.cpp' | xargs grep BROKEN dolfin-2017.2.0.post0/utils/scripts/formatcode0000755000231000000010000001011213216543010020262 0ustar chrisdaemon#!/usr/bin/env python # # Copyright (C) 2009 Anders Logg # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Check and fix coding style in all files (currently very limited) import os, sys, re # Camelcaps exceptions camelcaps_exceptions = ["cGq", "dGq", # ODE solver "addListener", "addTest", "getRegistry", "makeTest", "wasSuccessful", # cppunit "tagName", "tagNames", "documentElement", "childNodes", # doscstrings "nodeType", "firstChild", "parentNode", # docstrings "doubleArray", "intArray", # swig "fatalError", "xmlChar", "xmlNode", "xmlStrcasecmp", # libxml2 "startDocument", "endDocument", "startElement", "endElement"] # libxml2 def camelcaps2underscore(word): "Convert fooBar to foo_bar." new_word = "" for c in word: if c.isupper() and len(new_word) > 0: new_word += "_" + c.lower() else: new_word += c return new_word def check_camelcaps(code, replacements): "Replace fooBar by foo_bar." words = re.findall(r"\b([a-z]+[A-Z][a-z]+)\b", code) for word in words: if word in camelcaps_exceptions: continue new_word = camelcaps2underscore(word) code = code.replace(word, new_word) r = (word, new_word) if r in replacements: replacements[r] += 1 else: replacements[r] = 1 print " Replacing: %s --> %s" % r return code def check_trailing_whitespace(code, replacements): "Remove trailing spaces at end of lines." lines = [] for line in code.split("\n"): new_line = line.rstrip() lines.append(new_line) if new_line == line: continue r = ("trailing whitespace", "") if r in replacements: replacements[r] += 1 else: replacements[r] = 1 print " Removing trailing whitespace" new_code = "\n".join(lines) if len(new_code) > 0 and not new_code[-1] == "\n": new_code += "\n" return new_code def replace(filename, replacements): "Make replacements for given filename." print "Checking %s..." % filename # Open file f = open(filename, "r") code = f.read() f.close() # Checks new_code = check_camelcaps(code, replacements) new_code = check_trailing_whitespace(code, replacements) # Write file f = open(filename, "w") f.write(new_code) f.close() # Iterate over all source files replacements = {} for root, dirs, files in os.walk("dolfin"): for file in files: if not file.endswith((".cpp", ".h", ".py", ".i")): continue filename = os.path.join(root, file) replace(filename, replacements) for root, dirs, files in os.walk("demo"): for file in files: if not file.endswith((".cpp", ".h", ".py")): continue filename = os.path.join(root, file) replace(filename, replacements) # Report replacements if len(replacements) == 0: print "No replacements were made, unchanged." else: print "" print "The following replacements were made:" lines = [] for r in replacements: lines.append(" %s --> %s (%d occurences)" % (r[0], r[1], replacements[r])) lines.sort() print "\n".join(lines) dolfin-2017.2.0.post0/utils/scripts/dolfinreplace0000755000231000000010000001041413216543010020753 0ustar chrisdaemon#!/usr/bin/python # # Replace string in dolfin files # # dolfinreplace s0 s1 [-sv] # # Example: # # dolfinreplace foo bar # # Johan Hake, 2008-19-09 # # Modified by Anders Logg 2008, 2014 from os import path import glob # File post fixes to look in post_fixes = [".h", ".cpp", ".i", ".tex", ".py", ".rst"] # Primitive exclusion of some directories exclude_patterns = ["build.", "local."] # Global variables count = 0 changed_files = 0 report_lines = [] dolfin_root = path.abspath(path.join(path.dirname(path.abspath(__file__)), path.pardir, path.pardir)) # Output strings post_fix_string = ", ".join( "'%s'"%s for s in post_fixes) start_str = """Replacing '%s' with '%s' in all %s files...""" report_str = """ %d occurences replaced in %d files: %s %s""" def replace(args, dirname, filenames): """ Replace replaces[0] with replaces[1] In all fnames with post fixes in 'post_fixes' replace replaces[0] with replaces[1] """ global count, report_lines, changed_files (s0, s1), options = args _filenames = filenames[:] for f in _filenames: for exclude in exclude_patterns: if exclude in f: filenames.remove(f) for filename in glob.glob(path.join(dirname, options.pattern)): if path.splitext(filename)[1] in post_fixes: fullpath_filename = path.join(dirname,filename) f = open(fullpath_filename, "r") if options.verbose: changed_lines = [] lines = f.readlines() num_changed = 0 for i, line in enumerate(lines): num = line.count(s0) if num>0: num_changed += num changed_lines.append(" "*8+line.replace(s0,\ "\033[4m%s\033[0;0m"%s0).strip()) lines[i] = line.replace(s0, s1) count += num_changed output = "".join(lines) else: input = f.read() num_changed = input.count(s0) count += num_changed output = input.replace(s0, s1) f.close() if not options.simulate and num_changed > 0: f = open(fullpath_filename, "w") f.write(output) f.close() if num_changed > 0: changed_files += 1 report_lines.append(" "*4+fullpath_filename) if options.verbose: report_lines.extend(changed_lines) report_lines.append("") def options_parser(): import optparse usage ="""Usage: %prog [options] arg1 arg2 Replaces string 'arg1' with 'arg2' in files in the present directory tree. For options, see dolfinreplace --help.""" parser = optparse.OptionParser(usage=usage) parser.add_option("-s", "--simulate", help="simulate the replacement", action="store_true", dest="simulate", default=False) parser.add_option("-p", "--pattern", help="a glob compatible pattern " "for which files should be looked into. "\ "[default: '%default']", action="store",\ dest="pattern", default="*") parser.add_option("-v", "--verbose", help="verbosely print each line that"\ " includes the searched pattern.", action="store_true", \ dest="verbose", default=False) return parser def main(): parser = options_parser() options, args = parser.parse_args() if len(args) != 2: parser.error("incorrect number of arguments") s0, s1 = args[0].replace("\\", ""), args[1].replace("\\", "") print start_str%(s0,s1,post_fix_string) # Do it! path.walk(dolfin_root, replace, ((s0, s1), options)) if count > 0: if options.simulate: simulate_str = "\nSimulating, no replacement done..." else: simulate_str = "" if report_lines[-1]: report_lines.pop(-1) print report_str % (count, changed_files, "\n".\ join(report_lines), simulate_str) else: print "Done\n\nNo occurens of '%s' found"%args[0] if __name__ == "__main__": main() dolfin-2017.2.0.post0/utils/scripts/fixme0000755000231000000010000000017013216543010017252 0ustar chrisdaemon#!/bin/sh # Find all FIXMEs in the code find . -name '*.h' | xargs grep FIXME find . -name '*.cpp' | xargs grep FIXME dolfin-2017.2.0.post0/utils/scripts/notinuse0000755000231000000010000000123713216543010020013 0ustar chrisdaemon#!/bin/sh # notinuse # # Scan subdirectories for files that may not be in use, # that is, old files that should be removed. # # This script should be run from the top level directory. # Simple test to see where we are # FIXME: Does not work when typing ./scripts/... CHECK=`echo $0 | cut -d'/' -f1` if [ "$CHECK" != "scripts" ]; then echo "This script must be run from the top level directory." exit 1 fi for f in `find . -name '*.h' -printf '%f '`; do a=`rgrep $f * | grep include` if [ "x$a" == "x" ]; then echo File $f may not be used. fi done dolfin-2017.2.0.post0/utils/scripts/plotklocs0000644000231000000010000000464413216543010020163 0ustar chrisdaemon#!/usr/bin/env python # # Copyright (C) 2013 Anders Logg # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify it # under the terms of the GNU Lesser General Public License as # published by the Free Software Foundation, either version 3 of the # License, or (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # First added: 2013-02-26 # Last changed: 2013-02-26 # # Plot growth of code as function of time. This script should be # run rom the top level directory. from commands import getoutput from datetime import datetime import pylab as pl # Suffixes to check suffixes = [".h", ".hh", ".cpp", ".C"] # Paths to skip skip = ["demo", "bench", "test", "local", "build"] # Get time stamps timestamps = getoutput("bzr log | grep timestamp | awk '{print $3}'") t = [] for timestamp in timestamps.split("\n"): y, m, d = timestamp.split("-") t.append(datetime(int(y), int(m), int(d))) t.reverse() # Revisions to check revisions = range(len(t)) # For quick testing #revisions = revisions[-10:] #t = t[-10:] # Iterate over revisions locs = [] for i, r in enumerate(revisions): print "Updating to revision %r %s" % (r, t[i]) # Update repository getoutput("bzr update -r %d" % r) # Count the lines of code n_total = 0 for suffix in suffixes: skips = " | ".join("grep -v %s" % s for s in skip) n = getoutput("""find . -name *%s \ | %s \ | xargs wc -l \ | grep total \ | awk '{print $1}'""" % (suffix, skips)) if n.isdigit(): n_total += int(n) # Store lines of code print "%d lines of code" % n_total locs.append(n_total) # Store results to file f = open("klocs.csv", "w") f.write("\n".join("%d-%d-%d,%d" % \ (t[i].year, t[i].month, t[i].day, locs[i]) \ for i in range(len(revisions)))) f.close() # Plot results pl.plot(t, locs) pl.grid(True) pl.ylabel("Lines of code") pl.title("Code growth for DOLFIN %d - %d" % (t[0].year, t[-1].year)) pl.show() dolfin-2017.2.0.post0/utils/scripts/klocs0000755000231000000010000000116313216543010017260 0ustar chrisdaemon#!/bin/sh # klocs # # Count the number of kilos of lines of code (klocs) # in the directory src. # # This script should be run from the top level directory. IMPL=`find dolfin -name '*.cpp' | xargs wc -l | grep total | awk '{ printf "%d", $1/1000 }'` HEAD=`find dolfin -name '*.h' | grep -v elements | grep -v ffc-forms | xargs wc -l | grep total | awk '{ printf "%d", $1/1000 }'` TOTAL=`echo $IMPL $HEAD | awk '{ print $1 + $2 }'` echo $HEAD' klocs in .h files' echo $IMPL' klocs in .cpp files' echo $TOTAL' klocs total' dolfin-2017.2.0.post0/utils/scripts/pdebug0000755000231000000010000000037513216543010017417 0ustar chrisdaemon#!/bin/bash echo "Running \"$2\" on $1 processes through gdb debugger." env="" # Example of how to set the environment, if needed #env="source /home/skavhaug/.setups/alt1; source ../../dolfin.conf;" mpirun -np $1 -x DISPLAY=:0.0 xterm -e "$env gdb $2" dolfin-2017.2.0.post0/utils/emacs/0000755000231000000010000000000013216543010015617 5ustar chrisdaemondolfin-2017.2.0.post0/utils/emacs/macros0000644000231000000010000000663013216543010017033 0ustar chrisdaemon;; Macro for DOLFIN .h file (fset 'dolfin\.h [?/ ?/ ? ?C ?o ?p ?y ?r ?i ?g ?h ?t ? ?( ?C ?) ? ?2 ?0 ?0 ?6 ? ?Y ?o ?u ?r ? ?N ?a ?m ?e ?. return ?/ ?/ ? ?L ?i ?c ?e ?n ?s ?e ?d ? ?u ?n ?d ?e ?r ? ?t ?h ?e ? ?G ?N ?U ?\S- ?G ?P ?L ?\S- ?V ?e ?r ?s ?i ?o ?n ? ?2 ?. return ?/ ?/ return ?/ ?/ ? ?F ?i ?r ?s ?t ? ?a ?d ?d ?e ?d ?: ? ? ?2 ?0 ?0 ?6 ?- ?0 ?1 ?- ?0 ?1 return ?/ ?/ ? ?L ?a ?s ?t ? ?c ?h ?a ?n ?g ?e ?d ?: ? ?2 ?0 ?0 ?6 ?- ?0 ?1 ?- ?0 ?1 return return ?# ?i ?f ?n ?d ?e ?f ? ?_ ?_ ?F ?O ?O ?_ ?H return ?# ?d ?e ?f ?i ?n ?e ? ?_ ?_ ?F ?O ?O ?_ ?H return return ?# ?i ?n ?c ?l ?u ?d ?e ? ?< ?d ?o ?l ?f ?i ?n ?/ ?c ?o ?n ?s ?t ?a ?n ?t ?s ?. ?h ?> return return ?n ?a ?m ?e ?s ?p ?a ?c ?e ? ?d ?o ?l ?f ?i ?n return ?{ return return ? ? ?/ ?/ ?/ ? ?D ?o ?c ?u ?m ?e ?n ?t ?a ?t ?i ?o ?n ? ?o ?f ? ?c ?l ?a ?s ?s return return ? ? ?c ?l ?a ?s ?s ? ?F ?o ?o return ? ? ?{ return ? ? ?p ?u ?b ?l ?i ?c ?: return return ? ? ? ? ?/ ?/ ?/ ? ?C ?o ?n ?s ?t ?r ?u ?c ?t ?o ?r return ? ? ? ? ?F ?o ?o ?( ?) ?\; return return ? ? ? ? ?/ ?/ ?/ ? ?D ?e ?s ?t ?r ?u ?c ?t ?o ?r return ? ? ? ? ?~ ?F ?o ?o ?( ?) ?\; return return ? ? ?p ?r ?i ?v ?a ?t ?e ?: return return ? ? ?} ?\; return return ?} return return ?# ?e ?n ?d ?i ?f return ?\M-x ?e ?n ?d tab ?k tab]) ;; Macro for DOLFIN .cpp file (fset 'dolfin\.cpp [?/ ?/ ? ?C ?o ?p ?y ?r ?i ?g ?h ?t ? ?( ?C ?) ? ?2 ?0 ?0 ?6 ? ?Y ?o ?u ?r ? ?N ?a ?m ?e ?. return ?/ ?/ ? ?L ?i ?c ?e ?n ?s ?e ?d ? ?u ?n ?d ?e ?r ? ?t ?h ?e ? ?G ?N ?U ?\S- ?G ?P ?L ?\S- ?V ?e ?r ?s ?i ?o ?n ? ?2 ?. return ?/ ?/ return ?/ ?/ ? ?F ?i ?r ?s ?t ? ?a ?d ?d ?e ?d ?: ? ? ?2 ?0 ?0 ?6 ?- ?0 ?1 ?- ?0 ?1 return ?/ ?/ ? ?L ?a ?s ?t ? ?c ?h ?a ?n ?g ?e ?d ?: ? ?2 ?0 ?0 ?6 ?- ?0 ?1 ?- ?0 ?1 return return ?# ?i ?n ?c ?l ?u ?d ?e ? ?< ?d ?o ?l ?f ?i ?n ?/ ?F ?o ?o ?. ?h ?> return return ?u ?s ?i ?n ?g ? ?n ?a ?m ?e ?s ?p ?a ?c ?e ? ?d ?o ?l ?f ?i ?n ?\; return return ?/ ?/ ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- return ?F ?o ?o ?: ?: ?F ?o ?o ?( ?) return ?{ return ? ? ?/ ?/ ? ?D ?o ? ?n ?o ?t ?h ?i ?n ?g return ?} return ?/ ?/ ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- return ?F ?o ?o ?: ?: ?~ ?F ?o ?o ?( ?) return ?{ return ? ? ?/ ?/ ? ?D ?o ? ?n ?o ?t ?h ?i ?n ?g return ?} return ?/ ?/ ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- ?- return ?\M-x ?e ?n ?d tab ?k tab]) ;; Update time stamp for DOLFIN source code (setq time-stamp-start "Last changed:\\\\? "; start of pattern time-stamp-end "\\\\?\n" ; end of pattern time-stamp-active t ; do enable time-stamps time-stamp-line-limit 20 ; check first 20 lines time-stamp-format "%04y-%02m-%02d") ; date format ;; Update timestamp before saving (add-hook 'before-save-hook 'time-stamp) ;; Remove trailing whitespaces before saving (add-hook 'before-save-hook 'delete-trailing-whitespace) dolfin-2017.2.0.post0/utils/x3dom/0000755000231000000010000000000013216543010015561 5ustar chrisdaemondolfin-2017.2.0.post0/utils/x3dom/x3dom_support.css0000644000231000000010000000437313216543010021130 0ustar chrisdaemon/* for the help menu header */ #menu { position: absolute; margin: 1%; font-size: 12px; } #menu-items { line-height: normal; background-color: grey; opacity: 0.7; font-size: 0px; } #menu-items label { display: inline; padding-left: 10px; padding-right: 10px; border: 1px solid black; font-size: 12px; } #button-options + label { padding-left: 20px; padding-right: 20px; } #menu-items label:hover { background-color: silver; } #menu-items input[type="radio"] { display: none; } #menu input[type="radio"]:checked + label { background-color: silver; } /* for the help menu content */ #menu-content { height: 100%; width: 100%; background-color: black; opacity: 0.7; color: white; padding-top: 5px; text-align: center; } /* for the options content in the help menu */ #content-options { font-size: 0px; line-height: normal; } /*#content-options * { padding: 0; margin: 0; }*/ .options input { font-size: 12px; } /*.options label { font-size: 12px; color: white; }*/ #content-options span { display: inline-block; font-size: 12px; padding-bottom: 3px; } .options label { display: inline-block; width: 60px; padding-left: 2px; font-size: 12px; text-align: left; color: white; } /* for the summary content in the help menu */ #content-summary { padding-bottom: 5px; } /* for the color content in the help menu */ #content-color { width: 100%; padding-bottom: 10px; } #content-color * { margin: 0; padding: 0; } #color-map { margin-left: 5px; margin-right: 5px; } #color-map span { width: 1px; height: 12px; display: inline-block; } #content-color form { padding-bottom: 5px; } /* for the warping content in the help menu */ #content-warp * { margin: 0; padding: 0; } #content-warp input[type="range"] { margin: auto; padding-top: 5px; width: 50%; display: inline-block; } #content-warp label { font-size: 12px; } #warp-slider-val { position: relative; top: -7px; } /* for the opacity section of the menu */ #opacity-slider-val { position: relative; top: -7px; } /* for the viewpoints section of the menu */ #content-viewpoints { margin-bottom: 5px; font-size: 0px; } #content-viewpoints span { font-size: 12px; } .viewpoint { display: inline-block; width: 60px; margin: 5px; font-size: 12px; color: black; }dolfin-2017.2.0.post0/utils/x3dom/README.rst0000644000231000000010000000237413216543010017256 0ustar chrisdaemonX3DOM Support ------------- Author: Peter Scott The files included in 'utils/x3dom' are support for visualizing solutions using the X3DOM framework. The functions included in the javascript file 'x3dom_support.js' are meant to allow for dynamic interaction with the visualized data. There are functions for converting the x3dom representation of vertices (a 1d array of numbers) into a 3 dimensional representation, as well as for converting back to the x3dom representation. There are functions to add to and remove color from x3dom shapes as well as functions for adding a warped representation of the shape to the x3dom scene. Currently, only warping by a scalar is supported and the warp factor is determined by user input (using a slider). There are functions for adjusting the opacity, though currently these are not used since selection of shapes is necessary for adjusting opacity. Finally, there are functions to set up the ui functionality: these functions set up what happens on user interaction with the x3dom menu display. The 'x3dom_support.css' file is used to style the menu display. The different components of the menu are given certain styles to ensure a uniform look as well as to ensure that the display does not cover up too much of the x3dom scene. dolfin-2017.2.0.post0/utils/x3dom/x3dom_support.js0000644000231000000010000002570713216543010020760 0ustar chrisdaemon/* @licstart The following is the entire license notice for the JavaScript code in this page. Copyright (C) 2016 Peter Scott The JavaScript code in this page is free software: you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. @licend The above is the entire license notice for the JavaScript code in this page. */ var scalarField; var addWarp = true; /* function to convert a list of numbers (in string form) to a 3d array of numbers */ function listToArray3d(pts) { pts = pts.trim().split(" "); var array = []; var idx = 0; for (var i = 0; i < pts.length; i++) { if (i % 3 == 0) { array[idx] = []; } array[idx][i % 3] = Number(pts[i]); if (i % 3 == 2) { idx++; } } return array; } /* function to convert a 3d array of numbers to a list of numbers (in string form) */ function array3dToList(arr) { var pts = ""; for (var i = 0; i < arr.length * 3; i++) { var x = Math.floor(i / 3); var y = i % 3; pts += arr[x][y].toString(); pts += " "; } return pts; } /* function to convert a list of numbers (in string form) to a 1D array of numbers */ function listToArray1d(pts) { pts = pts.trim().split(" "); var array = []; for (var i = 0; i < pts.length; i++) { array[i] = Number(pts[i]); } return array; } /* function to convert a 1d array of numbers to a list of numbers (in string form) */ function array1dToList(arr) { var pts = ""; for (var i = 0; i < arr.length; i++) { pts += arr[i].toString(); pts += " "; } } /* function to translate colormap and intensity indices into intensity color on the figure */ function addColor() { // get information from metadata tag: var metadata = $("metadata")[0]; var colormap = listToArray3d($(metadata).attr("color_map")); var indices = listToArray1d($(metadata).attr("indices")); // add rgb values to color tag var rgb = []; for (var i = 0; i < indices.length; i++) { var idx = indices[i]; rgb[i] = colormap[idx]; } // add color node to each indexed face set var faces = $("indexedFaceSet"); for (var i = 0; i < faces.length; i++) { // update color node with rgb values var color = document.createElement("color"); $(color).attr("color", array3dToList(rgb)); var face = faces[i]; face.appendChild(color); // update shape's indexed face set -- remove old face set // (this is because x3dom does not automatically regenerate dom tree based when updated) var parent = face.parentElement; parent.removeChild(face); parent.appendChild(face); } } /* function to remove color on the figure */ function removeColor() { // need to remove the colors from each face set var faces = $("indexedFaceSet"); for (var i = 0; i < faces.length; i++) { // remove the color tag var color = $(faces[i]).children("color"); color.remove(); // re-add the faces to the dom then remove old one (with color) -- x3dom doesn't register updates var parent = faces[i].parentElement; parent.removeChild(faces[i]); parent.appendChild(faces[i]); } } /* function to calculate the scalar field values at each vertex */ function calculateScalars() { // get information from metadata tag: var metadata = $("metadata")[0]; var max_value = Number($(metadata).attr("max_value")); var min_value = Number($(metadata).attr("min_value")); var indices = listToArray1d($(metadata).attr("indices")); // find scale used to normalize color values (there are 256 colors in map by default) var scale = (min_value == max_value) ? 1.0 : 255.0 / (max_value - min_value); // calculate scalar value for each index var scalars = []; for (var i = 0; i < indices.length; i++) { scalars[i] = (indices[i] / scale) + min_value; } return scalars; } /* function to warp a shape by a scalar field */ function warpByScalar() { if (scalarField.length == 0) { return; } // add warped shapes if not added already if (addWarp) { var shapes = $("shape"); var parent = shapes[0].parentElement; for (var i = 0; i < shapes.length; i++) { // find the points for each shape var curr = $(shapes[i]).clone()[0]; var coord = $(curr).find("coordinate"); var points = listToArray3d($(coord).attr("point")); // change the points according to the scalar field for (var j = 0; j < points.length; j++) { // FIXME: generalize perpendicular direction // change the z-coordinate to be the scalar value points[j][2] = scalarField[j]; } $(coord).attr("point", array3dToList(points)); // add classname to new shape $(curr).addClass("warped"); // add new shape to the parent parent.appendChild(curr); } addWarp = false; } // adjust the scalar warping by the current scale factor var factor = Number($("#warp-slider")[0].value); var shapes = $(".warped"); for (var i = 0; i < shapes.length; i++) { var coord = $(shapes[i]).find("coordinate"); var points = listToArray3d($(coord).attr("point")); // change the scalar warping by current scale for (var j = 0; j < points.length; j++) { points[j][2] = scalarField[j] * factor; } $(coord).attr("point", array3dToList(points)); } } /* function to determine whether the x3dom shape should be colored */ function shouldColor() { var metadata = $("metadata")[0]; if (!metadata) { return false; } if (metadata.color) { return true; } return false; } /* function to call all helper setup functions needed for the menu */ function setupMenu() { // first setup the menu buttons setupButtons(); // next setup the content for each subsection setupColorContent(); setupWarpContent(); // setupOpacityContent(); setupViewpointContent(); } /* change the content in the menu based on the button selected */ function setupButtons() { $("#menu input[type='radio']").change(function() { // get the name from the calling button var name = $(this).attr("id"); // find the content's parent by using the id for the menu content var parent = $("#menu-content"); // find the content from the parent by using the name of calling button var selector = "div[for='" + name + "']"; var content = $(parent).children(selector); var children = $(parent).children(); for (var i = 0; i < children.length; i++) { if ($(children[i]).is(content)) { $(children[i]).show(); } else { $(children[i]).hide(); } } }); $("#content-options input[type='checkbox']").change(function() { // get the name from the calling button var name = $(this).attr("id").split("-")[1]; // find the parent for the menu-items var parent = $("#menu-items"); // find the corresponding label element in the menu buttons var selector = "label[for='button-" + name +"']"; var label = $(parent).children(selector); // hide or show if (this.checked) { $(label).show(); } else { $(label).hide(); } }); } /* function to setup the content for the color subsection in menu */ function setupColorContent() { // first set up the listener for the 'show color' checkbox $("#color-checkbox").change(function() { if ($(this).prop("checked")) { addColor(); } else { removeColor(); } }); // build the color map into the menu-content location var map_parent = $("#color-map")[0]; // get information from metadata tag: var colormap = listToArray3d($("metadata").attr("color_map")); // add spans to the color map parent with the every other color for (var i = 0; i < colormap.length; i += 2) { var r = Math.round(colormap[i][0] * 256); var g = Math.round(colormap[i][1] * 256); var b = Math.round(colormap[i][2] * 256); var span = document.createElement("span"); var background_color = "background-color: rgb(" + r + "," + g + "," + b + ")"; span.style = background_color; map_parent.appendChild(span); } // set minimum and maximum values var min_value = Number($("metadata").attr("min_value")); $("#min-color-value").html(min_value.toPrecision(3)); var max_value = Number($("metadata").attr("max_value")); $("#max-color-value").html(max_value.toPrecision(3)); } /* function to setup the content for the warping section in the menu */ function setupWarpContent() { // calculate the scalars initially for frame of reference -- global value scalarField = calculateScalars(); // setup the checkbox to toggle the slider $("#warp-checkbox").change(function() { document.getElementById("warp-slider").disabled = !document.getElementById("warp-slider").disabled; if (!document.getElementById("warp-slider").disabled) { warpByScalar(); } else { // hide the warped shapes $(".warped").remove(); addWarp = true; } }); // setup the slider to change the warped shape and to change the value of the label $("#warp-slider").change(function() { $("#warp-slider-val").html($(this).val()); warpByScalar(); }); } /* adjust opacity */ function adjustOpacity(val) { // get the first indexed face set in the scene var faces = $("indexedFaceSet")[0]; var parent = faces.parentElement; var material = $(parent).find("material"); var transparency = 1 - val; material.attr("transparency", transparency); } function setupOpacityContent() { $("#opacity-slider").change(function() { $("#opacity-slider-val").html($(this).val()); adjustOpacity(Number($(this).val())); }); } /* function to setup the content for the viewpoint buttons */ function setupViewpointContent() { // add corresponding click listeners for each button var buttons = $(".viewpoint"); for (var i = 0; i < buttons.length; i++) { $(buttons[i]).click(function() { // the name of the button is the corresponding id of the viewpoint var name = $(this).text(); $("#" + name).attr("set_bind", true); }); } } // add color as soon as the document is ready $(document).ready(function() { setupMenu(); // TODO check if color should be added if (shouldColor() || true) { addColor(); } });dolfin-2017.2.0.post0/COPYING0000644000231000000010000010451313216542752014442 0ustar chrisdaemon GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The GNU General Public License is a free, copyleft license for software and other kinds of works. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others. For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode: Copyright (C) This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see . The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read . dolfin-2017.2.0.post0/CMakeLists.txt0000644000231000000010000012472313216543010016140 0ustar chrisdaemon# Top level CMakeLists.txt file for DOLFIN # Require CMake 3.5 cmake_minimum_required(VERSION 3.5) #------------------------------------------------------------------------------ # Set project name and version number project(DOLFIN) set(DOLFIN_VERSION_RELEASE 1) set(DOLFIN_VERSION_MAJOR "2017") set(DOLFIN_VERSION_MINOR "2") set(DOLFIN_VERSION_MICRO "0") set(DOLFIN_VERSION "${DOLFIN_VERSION_MAJOR}.${DOLFIN_VERSION_MINOR}.${DOLFIN_VERSION_MICRO}") if (NOT DOLFIN_VERSION_RELEASE) set(DOLFIN_VERSION "${DOLFIN_VERSION}.dev0") endif() #------------------------------------------------------------------------------ # Require and use C++11 # Use C++11 set(CMAKE_CXX_STANDARD 11) # Require C++11 set(CMAKE_CXX_STANDARD_REQUIRED ON) # Do not enable compler-specific extensions set(CMAKE_CXX_EXTENSIONS OFF) #------------------------------------------------------------------------------ # Check compiler version # Check for GCC version - earlier versions have insuffcient C++11 # support, or bugs. if (CMAKE_COMPILER_IS_GNUCXX) if (CMAKE_CXX_COMPILER_VERSION VERSION_LESS 4.8.3) message(FATAL_ERROR "GCC version must be at least 4.8.3 (for sufficient C++11 support and to avoid some compiler bugs). You have version ${CMAKE_CXX_COMPILER_VERSION}") endif() endif() #------------------------------------------------------------------------------ # Get GIT changeset, if available # Check for git find_program(GIT_FOUND git) if (GIT_FOUND) # Get the commit hash of the working branch execute_process(COMMAND git rev-parse HEAD WORKING_DIRECTORY ${CMAKE_SOURCE_DIR} OUTPUT_VARIABLE GIT_COMMIT_HASH OUTPUT_STRIP_TRAILING_WHITESPACE ) else() set(GIT_COMMIT_HASH "unknown") endif() #------------------------------------------------------------------------------ # General configuration # Set CMake options, see `cmake --help-policy CMP000x` if (COMMAND cmake_policy) cmake_policy(SET CMP0003 NEW) cmake_policy(SET CMP0004 NEW) cmake_policy(SET CMP0042 NEW) endif() # Set location of our FindFoo.cmake modules set(DOLFIN_CMAKE_DIR "${DOLFIN_SOURCE_DIR}/cmake" CACHE INTERNAL "") set(CMAKE_MODULE_PATH "${DOLFIN_CMAKE_DIR}/modules") # Make sure CMake uses the correct DOLFINConfig.cmake for tests and demos set(CMAKE_PREFIX_PATH ${CMAKE_PREFIX_PATH} ${CMAKE_CURRENT_BINARY_DIR}/dolfin) #------------------------------------------------------------------------------ # Configurable options for how we want to build include(FeatureSummary) option(BUILD_SHARED_LIBS "Build DOLFIN with shared libraries." ON) option(CMAKE_USE_RELATIVE_PATHS "Use relative paths in makefiles and projects." OFF) option(DOLFIN_AUTO_DETECT_MPI "Detect MPI automatically (turn this off to use the MPI compiler wrappers directly via setting CXX, CXX, FC)." ON) option(DOLFIN_DEPRECATION_ERROR "Turn deprecation warnings into errors." OFF) option(DOLFIN_ENABLE_BENCHMARKS "Enable benchmark programs." OFF) option(DOLFIN_ENABLE_CODE_COVERAGE "Enable code coverage." OFF) option(DOLFIN_ENABLE_DOCS "Enable generation of documentation." ON) option(DOLFIN_ENABLE_TESTING "Enable testing." OFF) option(DOLFIN_IGNORE_PETSC4PY_VERSION "Ignore version of PETSc4py." OFF) option(DOLFIN_SKIP_BUILD_TESTS "Skip build tests for testing usability of dependency packages." OFF) option(DOLFIN_WITH_LIBRARY_VERSION "Build with library version information." ON) option(DOLFIN_ENABLE_GEOMETRY_DEBUGGING "Enable geometry debugging (developers only; requires libcgal-dev and libcgal-qt5-dev)." OFF) add_feature_info(BUILD_SHARED_LIBS BUILD_SHARED_LIBS "Build DOLFIN with shared libraries.") add_feature_info(CMAKE_USE_RELATIVE_PATHS CMAKE_USE_RELATIVE_PATHS "Use relative paths in makefiles and projects.") add_feature_info(DOLFIN_AUTO_DETECT_MPI DOLFIN_AUTO_DETECT_MPI "Detect MPI automatically (turn this off to use the MPI compiler wrappers directly via setting CXX, CXX, FC).") add_feature_info(DOLFIN_ENABLE_CODE_COVERAGE DOLFIN_ENABLE_CODE_COVERAGE "Enable code coverage.") add_feature_info(DOLFIN_WITH_LIBRARY_VERSION DOLFIN_WITH_LIBRARY_VERSION "Build with library version information.") add_feature_info(DOLFIN_ENABLE_TESTING DOLFIN_ENABLE_TESTING "Enable testing.") add_feature_info(DOLFIN_ENABLE_BENCHMARKS DOLFIN_ENABLE_BENCHMARKS "Enable benchmark programs.") add_feature_info(DOLFIN_ENABLE_DOCS DOLFIN_ENABLE_DOCS "Enable generation of documentation.") add_feature_info(DOLFIN_SKIP_BUILD_TESTS DOLFIN_SKIP_BUILD_TESTS "Skip build tests for testing usability of dependency packages.") add_feature_info(DOLFIN_DEPRECATION_ERROR DOLFIN_DEPRECATION_ERROR "Turn deprecation warnings into errors.") add_feature_info(DOLFIN_IGNORE_PETSC4PY_VERSION DOLFIN_IGNORE_PETSC4PY_VERSION "Ignore version of PETSc4py.") add_feature_info(DOLFIN_ENABLE_GEOMETRY_DEBUGGING DOLFIN_ENABLE_GEOMETRY_DEBUGGING "Enable geometry debugging.") # Add shared library paths so shared libs in non-system paths are found option(CMAKE_INSTALL_RPATH_USE_LINK_PATH "Add paths to linker search and installed rpath." ON) add_feature_info(CMAKE_INSTALL_RPATH_USE_LINK_PATH CMAKE_INSTALL_RPATH_USE_LINK_PATH "Add paths to linker search and installed rpath.") # Hande RPATH on OSX when not installing to a system directory, see # https://groups.google.com/d/msg/fenics-dev/KSCrob4M_1M/zsJwdN-SCAAJ # and https://cmake.org/Wiki/CMake_RPATH_handling#Always_full_RPATH. if (APPLE) # The RPATH to be used when installing, but only if it's not a # system directory SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib") LIST(FIND CMAKE_PLATFORM_IMPLICIT_LINK_DIRECTORIES "${CMAKE_INSTALL_PREFIX}/lib" isSystemDir) IF("${isSystemDir}" STREQUAL "-1") SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib") ENDIF("${isSystemDir}" STREQUAL "-1") endif() #------------------------------------------------------------------------------ # Enable or disable optional packages # List optional packages set(OPTIONAL_PACKAGES "") list(APPEND OPTIONAL_PACKAGES "MPI") list(APPEND OPTIONAL_PACKAGES "PETSc") list(APPEND OPTIONAL_PACKAGES "PETSc4py") list(APPEND OPTIONAL_PACKAGES "SLEPc") list(APPEND OPTIONAL_PACKAGES "SLEPc4py") list(APPEND OPTIONAL_PACKAGES "Trilinos") list(APPEND OPTIONAL_PACKAGES "UMFPACK") list(APPEND OPTIONAL_PACKAGES "CHOLMOD") list(APPEND OPTIONAL_PACKAGES "SCOTCH") list(APPEND OPTIONAL_PACKAGES "ParMETIS") list(APPEND OPTIONAL_PACKAGES "zlib") list(APPEND OPTIONAL_PACKAGES "Python") list(APPEND OPTIONAL_PACKAGES "Sphinx") list(APPEND OPTIONAL_PACKAGES "HDF5") # Add options foreach (OPTIONAL_PACKAGE ${OPTIONAL_PACKAGES}) string(TOUPPER "DOLFIN_ENABLE_${OPTIONAL_PACKAGE}" OPTION_NAME) option(${OPTION_NAME} "Compile with support for ${OPTIONAL_PACKAGE}." ON) add_feature_info(${OPTION_NAME} ${OPTION_NAME} "Compile with support for ${OPTIONAL_PACKAGE}.") endforeach() # Default Python version option(DOLFIN_USE_PYTHON3 "Use Python 3." ON) #------------------------------------------------------------------------------ # Package-specific options # None at the moment #------------------------------------------------------------------------------ # Compiler flags # Default build type (can be overridden by user) if (NOT CMAKE_BUILD_TYPE) set(CMAKE_BUILD_TYPE "RelWithDebInfo" CACHE STRING "Choose the type of build, options are: Debug Developer MinSizeRel Release RelWithDebInfo." FORCE) endif() # Check for some compiler flags include(CheckCXXCompilerFlag) CHECK_CXX_COMPILER_FLAG(-pipe HAVE_PIPE) if (HAVE_PIPE) set(DOLFIN_CXX_DEVELOPER_FLAGS "-pipe ${DOLFIN_CXX_DEVELOPER_FLAGS}") endif() # Add some strict compiler checks CHECK_CXX_COMPILER_FLAG("-Wall -Werror -pedantic" HAVE_PEDANTIC) if (HAVE_PEDANTIC) set(DOLFIN_CXX_DEVELOPER_FLAGS "-Wall -Werror -pedantic ${DOLFIN_CXX_DEVELOPER_FLAGS}") endif() # Debug flags CHECK_CXX_COMPILER_FLAG(-g HAVE_DEBUG) if (HAVE_DEBUG) set(DOLFIN_CXX_DEVELOPER_FLAGS "-g ${DOLFIN_CXX_DEVELOPER_FLAGS}") endif() CHECK_CXX_COMPILER_FLAG(-O2 HAVE_O2_OPTIMISATION) if (HAVE_O2_OPTIMISATION) set(DOLFIN_CXX_DEVELOPER_FLAGS "-O2 ${DOLFIN_CXX_DEVELOPER_FLAGS}") endif() # Set 'Developer' build type flags set(CMAKE_CXX_FLAGS_DEVELOPER "${DOLFIN_CXX_DEVELOPER_FLAGS}" CACHE STRING "Flags used by the compiler during development." FORCE) # Add flags for generating code coverage reports if (DOLFIN_ENABLE_CODE_COVERAGE AND CMAKE_COMPILER_IS_GNUCXX) set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage") set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fprofile-arcs -ftest-coverage") endif() # Settings for Intel compilers if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "Intel") # Use -isystem incluse flag with Intel compiler set(CMAKE_INCLUDE_SYSTEM_FLAG_CXX "-isystem ") # Stop spurious warnings from older Intel compilers if("${CMAKE_CXX_COMPILER_VERSION}" LESS "13") set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -wd654,1125") set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -wd654,1125") set(CMAKE_CXX_FLAGS_DEVELOPER "${CMAKE_CXX_FLAGS_DEVELOPER} -wd654,1125") set(CMAKE_CXX_FLAGS_RELEASE "${CMAKE_CXX_FLAGS_RELEASE} -wd654,1125") set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -wd654,1125") endif() endif() # Set system include flags to get around CMake bug on OSX with gcc See # http://public.kitware.com/Bug/print_bug_page.php?bug_id=10837 if ("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU") set(CMAKE_INCLUDE_SYSTEM_FLAG_CXX "-isystem ") endif() if (APPLE) set(CMAKE_INCLUDE_SYSTEM_FLAG_CXX "-isystem ") set(CMAKE_CXX_FLAGS_DEVELOPER "${CMAKE_CXX_FLAGS_DEVELOPER} -Wno-long-long") set(CMAKE_CXX_FLAGS_DEBUG "${CMAKE_CXX_FLAGS_DEBUG} -Wno-long-long") set(CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -Wno-long-long") endif() #------------------------------------------------------------------------------ # Check for MPI # FIXME: Should be set CMake to use the MPI compiler wrappers? if (DOLFIN_ENABLE_MPI) if (DOLFIN_AUTO_DETECT_MPI) find_package(MPI) set_package_properties(MPI PROPERTIES TYPE OPTIONAL DESCRIPTION "Message Passing Interface (MPI)" PURPOSE "Enables DOLFIN to run in parallel with MPI") else() # Assume user has set MPI compiler wrappers (via CXX, etc or # CMAKE_CXX_COMPILER, etc) set(MPI_CXX_FOUND TRUE) set(MPI_C_FOUND TRUE) endif() endif() #------------------------------------------------------------------------------ # Run tests to find required packages # Check for Boost set(BOOST_ROOT $ENV{BOOST_DIR} $ENV{BOOST_HOME}) if (BOOST_ROOT) set(Boost_NO_SYSTEM_PATHS on) endif() # Prevent FindBoost.cmake from looking for system Boost{foo}.cmake # files set(Boost_NO_BOOST_CMAKE true) set(Boost_USE_MULTITHREADED $ENV{BOOST_USE_MULTITHREADED}) find_package(Boost 1.56 QUIET REQUIRED) # Boost public/private libraries to link to. # Note: These should all be private as they do not appear in the # DOLFIN public interface , but there is a linking issues with older # Boost or CMake. Ubuntu 16.04 requires linking DOLFIN programs with # filesystem, whereas Ubuntu 16.10 and macOS (Homebrew) do not. if (Boost_VERSION VERSION_LESS 106100) set(DOLFIN_BOOST_COMPONENTS_PUBLIC filesystem timer) set(DOLFIN_BOOST_COMPONENTS_PRIVATE program_options iostreams) else() set(DOLFIN_BOOST_COMPONENTS_PUBLIC timer) set(DOLFIN_BOOST_COMPONENTS_PRIVATE filesystem program_options iostreams) endif() # Find required Boost libraries find_package(Boost COMPONENTS ${DOLFIN_BOOST_COMPONENTS_PUBLIC} ${DOLFIN_BOOST_COMPONENTS_PRIVATE} REQUIRED) set_package_properties(Boost PROPERTIES TYPE REQUIRED DESCRIPTION "Boost C++ libraries" URL "http://www.boost.org") # Check for required package Eigen3 find_package(Eigen3 3.2.90 REQUIRED) set_package_properties(Eigen3 PROPERTIES TYPE REQUIRED DESCRIPTION "Lightweight C++ template library for linear algebra" URL "http://eigen.tuxfamily.org") #------------------------------------------------------------------------------ # Run tests to find optional packages # Note: Check for Python interpreter even when Python is disabled # because it is used to get the installation path for # dolfin_utils if (DOLFIN_USE_PYTHON3) find_package(PythonInterp 3) else() find_package(PythonInterp 2.7) endif() set_package_properties(PythonInterp PROPERTIES TYPE REQUIRED DESCRIPTION "Interactive high-level object-oriented language" URL "http://www.python.org") if (DOLFIN_ENABLE_PYTHON) set_package_properties(PythonInterp PROPERTIES PURPOSE "Needed for the Python interface to DOLFIN") # Set variables to help find Python library that is compatible with # interpreter if (PYTHONINTERP_FOUND) # Get Python include path from Python interpretter execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c "import distutils.sysconfig, sys; sys.stdout.write(distutils.sysconfig.get_python_inc())" OUTPUT_VARIABLE _PYTHON_INCLUDE_PATH RESULT_VARIABLE _PYTHON_INCLUDE_RESULT) # Get Python library path from interpreter execute_process(COMMAND "${PYTHON_EXECUTABLE}" -c "import os, sys, inspect; sys.stdout.write(os.path.split(os.path.split(inspect.getfile(inspect))[0])[0])" OUTPUT_VARIABLE _PYTHON_LIB_PATH RESULT_VARIABLE _PYTHON_LIB_RESULT) # Set include path, if returned by interpreter if ("${_PYTHON_INCLUDE_RESULT}" STREQUAL "0") set(PYTHON_INCLUDE_DIR ${_PYTHON_INCLUDE_PATH}) endif() # Add a search path for Python library based on output from # interpreter set(CMAKE_LIBRARY_PATH_SAVE ${CMAKE_LIBRARY_PATH}) if ("${_PYTHON_LIB_RESULT}" STREQUAL "0") set(CMAKE_LIBRARY_PATH ${_PYTHON_LIB_PATH}) endif() # Find Pythons libs find_package(PythonLibs ${PYTHON_VERSION_STRING} EXACT REQUIRED) set_package_properties(PythonLibs PROPERTIES TYPE REQUIRED DESCRIPTION "Include paths and libraries for Python") set(CMAKE_LIBRARY_PATH ${CMAKE_LIBRARY_PATH_SAVE}) endif() # If Python is found, check for NumPy, SWIG and ply if (PYTHONINTERP_FOUND AND PYTHONLIBS_FOUND) # Check for NumPy find_package(NumPy REQUIRED) set_package_properties(NumPy PROPERTIES TYPE REQUIRED DESCRIPTION "Array processing for numbers, strings, records, and objects in Python." URL "http://www.numpy.org" PURPOSE "Needed for the Python interface to DOLFIN") # Check for ply include(FindPythonModule) find_python_module(ply REQUIRED) set_package_properties(ply PROPERTIES TYPE REQUIRED DESCRIPTION "Python Lex & Yacc" URL "http://www.dabeaz.com/ply/" PURPOSE "Needed for the Python interface to DOLFIN") if (NOT PY_PLY_FOUND) message(FATAL_ERROR "Required Python module 'ply' (http://www.dabeaz.com/ply/) could not be found. Install ply or set DOLFIN_ENABLE_PYTHON to false.") endif() # Check for SWIG if (DEFINED UFC_SWIG_EXECUTABLE) set(SWIG_EXECUTABLE ${UFC_SWIG_EXECUTABLE}) endif() find_package(SWIG REQUIRED) set_package_properties(SWIG PROPERTIES TYPE REQUIRED DESCRIPTION "Tool to wrap C/C++ libraries in high-level languages" URL "http://swig.org" PURPOSE "Needed for the Python interface to DOLFIN") set(REQUIRED_SWIG_VERSION "3.0.5") if ("${SWIG_VERSION}" VERSION_LESS "${REQUIRED_SWIG_VERSION}") message(FATAL_ERROR " DOLFIN requires SWIG version ${REQUIRED_SWIG_VERSION} or greater. You have version ${SWIG_VERSION}. Set DOLFIN_ENABLE_PYTHON to false or install correct SWIG version.") endif() include(UseSWIG) set(PYTHON_FOUND TRUE) endif() endif() # Check for required package UFC find_package(UFC MODULE 2017.2) set_package_properties(UFC PROPERTIES TYPE REQUIRED DESCRIPTION "Unified language for form-compilers (part of FFC)" URL "https://bitbucket.org/fenics-project/ffc") # Check for PETSc, SLEPc and petsc4py, slepc4py set(PETSC_FOUND FALSE) set(SLEPC_FOUND FALSE) if (DOLFIN_ENABLE_PETSC) find_package(PETSc 3.7) set_package_properties(PETSc PROPERTIES TYPE OPTIONAL DESCRIPTION "Portable, Extensible Toolkit for Scientific Computation" URL "https://www.mcs.anl.gov/petsc/" PURPOSE "Enables the PETSc linear algebra backend") if (PETSC_FOUND AND PYTHON_FOUND AND DOLFIN_ENABLE_PETSC4PY) find_package(PETSc4py) set_package_properties(PETSc4py PROPERTIES TYPE OPTIONAL DESCRIPTION "Python bindings for PETSc" URL "https://bitbucket.org/petsc/petsc4py/") if (PETSC4PY_FOUND) if (NOT ("${PETSC4PY_VERSION_MAJOR}" EQUAL "${PETSC_VERSION_MAJOR}" AND "${PETSC4PY_VERSION_MINOR}" EQUAL "${PETSC_VERSION_MINOR}") AND NOT DOLFIN_IGNORE_PETSC4PY_VERSION) message(WARNING "PETSc version ${PETSC_VERSION} and petsc4py version ${PETSC4PY_VERSION} do not match. Disabling PETSc4py.") set(PETSC4PY_FOUND FALSE) endif() endif() endif() set(SLEPC_FOUND FALSE) if (PETSC_FOUND AND DOLFIN_ENABLE_SLEPC) find_package(SLEPc 3.7) set_package_properties(SLEPc PROPERTIES TYPE OPTIONAL DESCRIPTION "Scalable Library for Eigenvalue Problem Computations" URL "http://slepc.upv.es/") if (SLEPC_FOUND AND PYTHON_FOUND AND DOLFIN_ENABLE_SLEPC4PY) find_package(SLEPc4py) set_package_properties(SLEPc4py PROPERTIES TYPE OPTIONAL DESCRIPTION "Python bindings for SLEPc" URL "https://bitbucket.org/slepc/slepc4py/") if (SLEPC4PY_FOUND) if (NOT ("${SLEPC4PY_VERSION_MAJOR}" EQUAL "${SLEPC_VERSION_MAJOR}" AND "${SLEPC4PY_VERSION_MINOR}" EQUAL "${SLEPC_VERSION_MINOR}")) message(WARNING "SLEPc version ${SLEPC_VERSION} and slepc4py version ${SLEPC4PY_VERSION} do not match. Disabling slepc4py support") set(SLEPC4PY_FOUND FALSE) endif() endif() endif() endif() endif() # Check for ParMETIS and SCOTCH if (DOLFIN_ENABLE_MPI AND MPI_C_FOUND) if (DOLFIN_ENABLE_PARMETIS) find_package(ParMETIS 4.0.2) set_package_properties(ParMETIS PROPERTIES TYPE OPTIONAL DESCRIPTION "Parallel Graph Partitioning and Fill-reducing Matrix Ordering" URL "http://glaros.dtc.umn.edu/gkhome/metis/parmetis/overview" PURPOSE "Enables parallel graph partitioning") endif() if (DOLFIN_ENABLE_SCOTCH) find_package(SCOTCH) set_package_properties(SCOTCH PROPERTIES TYPE OPTIONAL DESCRIPTION "Programs and libraries for graph, mesh and hypergraph partitioning" URL "https://www.labri.fr/perso/pelegrin/scotch" PURPOSE "Enables parallel graph partitioning") endif() endif() # Check for UMFPACK if (DOLFIN_ENABLE_UMFPACK) find_package(AMD QUIET) find_package(BLAS QUIET) set_package_properties(BLAS PROPERTIES TYPE OPTIONAL DESCRIPTION "Basic Linear Algebra Subprograms" URL "http://netlib.org/blas/") find_package(UMFPACK) set_package_properties(UMFPACK PROPERTIES TYPE OPTIONAL DESCRIPTION "Sparse LU factorization library" URL "http://faculty.cse.tamu.edu/davis/suitesparse.html") endif() # Check for CHOLMOD if (DOLFIN_ENABLE_CHOLMOD) find_package(CHOLMOD) set_package_properties(CHOLMOD PROPERTIES TYPE OPTIONAL DESCRIPTION "Sparse Cholesky factorization library for sparse matrices" URL "http://faculty.cse.tamu.edu/davis/suitesparse.html") endif() # Check for HDF5 if (DOLFIN_ENABLE_HDF5) if (NOT DEFINED ENV{HDF5_ROOT}) set(ENV{HDF5_ROOT} "$ENV{HDF5_DIR}") endif() set(HDF5_PREFER_PARALLEL FALSE) if (DOLFIN_ENABLE_MPI) set(HDF5_PREFER_PARALLEL TRUE) endif() find_package(HDF5 COMPONENTS C) set_package_properties(HDF5 PROPERTIES TYPE OPTIONAL DESCRIPTION "Hierarchical Data Format 5 (HDF5)" URL "https://www.hdfgroup.org/HDF5") endif() # Check for Trilinos and the requires Trilinos packages if (DOLFIN_ENABLE_TRILINOS) message(STATUS "Checking for Trilinos") find_package(Trilinos QUIET PATHS ${TRILINOS_DIR} ${Trilinos_DIR} $ENV{TRILINOS_DIR}) set(DOLFIN_TRILINOS_PACKAGES "Tpetra;Zoltan;MueLu;Amesos2;Ifpack2;Belos") if ("${Trilinos_VERSION}" VERSION_LESS "12.4.0") set(Trilinos_FOUND FALSE) message(STATUS "Unable to find Trilinos (>= 12.4.0)") endif() set_package_properties(Trilinos PROPERTIES TYPE OPTIONAL DESCRIPTION "Object-oriented framework for large-scale problems" URL "https://trilinos.org") # Check for required packages set(DOLFIN_TRILINOS_PACKAGES_FOUND false) if (Trilinos_FOUND) message(STATUS " Trilinos version ${Trilinos_VERSION} found. Checking for components") # Check that necessary packages are enabled set(DOLFIN_TRILINOS_PACKAGES_FOUND true) foreach (required_package ${DOLFIN_TRILINOS_PACKAGES}) # Search for required package in list of available packages list(FIND Trilinos_PACKAGE_LIST ${required_package} required_trilinos_package_index) if(required_trilinos_package_index EQUAL -1) set(required_trilinos_package_found false) else() set(required_trilinos_package_found true) endif() # Print whether or not package is found if (required_trilinos_package_found) message(STATUS " ${required_package} found") else() message(STATUS " Trilinos found, but required package ${required_package} not found. Trilinos will be disabled.") set(DOLFIN_TRILINOS_PACKAGES_FOUND false) break() endif() endforeach() # Add package libraries if all packages have been found if (DOLFIN_TRILINOS_PACKAGES_FOUND) message(STATUS " All necessary Trilinos components found. Trilinos will be enabled.") set(DOLFIN_TRILINOS_DEFINITIONS) # Loop over each package foreach (package ${DOLFIN_TRILINOS_PACKAGES}) # Loop over libs and get full path foreach (lib ${${package}_LIBRARIES}) find_library(TRILINOS_LIB_${lib} ${lib} PATHS ${${package}_LIBRARY_DIRS} NO_DEFAULT_PATH) # Also search the default paths find_library(TRILINOS_LIB_${lib} ${lib}) if (TRILINOS_LIB_${lib}) list(APPEND DOLFIN_TRILINOS_LIBRARIES ${TRILINOS_LIB_${lib}}) endif() endforeach() endforeach() # Remove duplicates list(REVERSE DOLFIN_TRILINOS_LIBRARIES) list(REMOVE_DUPLICATES DOLFIN_TRILINOS_LIBRARIES) list(REVERSE DOLFIN_TRILINOS_LIBRARIES) endif() else() message(STATUS "Trilinos could not be found") endif() endif() # Check for zlib if (DOLFIN_ENABLE_ZLIB) find_package(ZLIB) set_package_properties(ZLIB PROPERTIES TYPE OPTIONAL DESCRIPTION "Compression library" URL "http://www.zlib.net") endif() # Check for Sphinx if (DOLFIN_ENABLE_DOCS AND PYTHON_FOUND) find_package(Sphinx 1.1.0) set_package_properties(Sphinx PROPERTIES TYPE OPTIONAL DESCRIPTION "Python documentation generator" URL "http://www.sphinx-doc.org" PURPOSE "Needed to build documentation") endif() # Check for geometry debugging if (DOLFIN_ENABLE_GEOMETRY_DEBUGGING) message(STATUS "Enabling geometry debugging") find_package(CGAL REQUIRED) find_package(GMP REQUIRED) find_package(MPFR REQUIRED) if (NOT CGAL_FOUND) message(FATAL_ERROR "Unable to find package CGAL required for DOLFIN_ENABLE_GEOMETRY_DEBUGGING") endif() if (NOT GMP_FOUND) message(FATAL_ERROR "Unable to find package GMP required for DOLFIN_ENABLE_GEOMETRY_DEBUGGING") endif() if (NOT MPFR_FOUND) message(FATAL_ERROR "Unable to find package MPFR required for DOLFIN_ENABLE_GEOMETRY_DEBUGGING") endif() endif() #------------------------------------------------------------------------------ # Print summary of found and not found optional packages feature_summary(WHAT ALL) #------------------------------------------------------------------------------ # Get installation paths for Python modules (pure and platform-dependent) if (PYTHONINTERP_FOUND) if (NOT DEFINED DOLFIN_INSTALL_PYTHON_MODULE_DIR) # Get path for platform-dependent Python modules (since we install # a binary libary) # Python command string to discover module install location if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT) set(PYTHON_LIB_DISCOVER_STR "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(plat_specific=True))") else() set(PYTHON_LIB_DISCOVER_STR "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(plat_specific=True, prefix='${CMAKE_INSTALL_PREFIX}'))") endif() # Probe Python interpreter execute_process(COMMAND ${PYTHON_EXECUTABLE} -c "${PYTHON_LIB_DISCOVER_STR}" OUTPUT_VARIABLE DOLFIN_INSTALL_PYTHON_MODULE_DIR ) set(DOLFIN_INSTALL_PYTHON_MODULE_DIR ${DOLFIN_INSTALL_PYTHON_MODULE_DIR} CACHE PATH "Python extension module installation directory.") endif() if (NOT DEFINED DOLFIN_INSTALL_PYTHON_PURE_MODULE_DIR) # Get path for pure Python modules # Python command string to discover module install location if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT) set(PYTHON_LIB_DISCOVER_STR "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(plat_specific=False))") else() set(PYTHON_LIB_DISCOVER_STR "import sys, distutils.sysconfig; sys.stdout.write(distutils.sysconfig.get_python_lib(plat_specific=False, prefix='${CMAKE_INSTALL_PREFIX}'))") endif() # Probe Pyton interpreter execute_process(COMMAND ${PYTHON_EXECUTABLE} -c "${PYTHON_LIB_DISCOVER_STR}" OUTPUT_VARIABLE DOLFIN_INSTALL_PYTHON_PURE_MODULE_DIR ) set(DOLFIN_INSTALL_PYTHON_PURE_MODULE_DIR ${DOLFIN_INSTALL_PYTHON_PURE_MODULE_DIR} CACHE PATH "Python module installation directory.") endif() endif() #------------------------------------------------------------------------------ # Installation of DOLFIN and FEniCS Python modules if (DOLFIN_ENABLE_PYTHON AND PYTHON_FOUND) install(DIRECTORY ${CMAKE_SOURCE_DIR}/site-packages/dolfin DESTINATION ${DOLFIN_INSTALL_PYTHON_MODULE_DIR} USE_SOURCE_PERMISSIONS COMPONENT RuntimeLibraries PATTERN "*.in" EXCLUDE ) configure_file(${CMAKE_SOURCE_DIR}/site-packages/dolfin/common/globalparameters.py.in ${CMAKE_BINARY_DIR}/globalparameters.py @ONLY) install(FILES ${CMAKE_BINARY_DIR}/globalparameters.py DESTINATION ${DOLFIN_INSTALL_PYTHON_MODULE_DIR}/dolfin/common/ COMPONENT RuntimeLibraries ) install(DIRECTORY ${CMAKE_SOURCE_DIR}/site-packages/fenics DESTINATION ${DOLFIN_INSTALL_PYTHON_MODULE_DIR} USE_SOURCE_PERMISSIONS COMPONENT RuntimeLibraries PATTERN "*.in" EXCLUDE ) endif() #------------------------------------------------------------------------------ # Installation of dolfin_utils if (DOLFIN_INSTALL_PYTHON_MODULE_DIR) install(DIRECTORY ${CMAKE_SOURCE_DIR}/site-packages/dolfin_utils DESTINATION ${DOLFIN_INSTALL_PYTHON_PURE_MODULE_DIR} USE_SOURCE_PERMISSIONS) # Add target "install_dolfin_utils" for installing dolfin_utils # without building and install the rest of DOLFIN add_custom_target(install_dolfin_utils COMMAND ${CMAKE_COMMAND} -E copy_directory "${CMAKE_SOURCE_DIR}/site-packages/dolfin_utils" "${DOLFIN_INSTALL_PYTHON_MODULE_DIR}/dolfin_utils" COMMENT "Installing dolfin_utils in ${DOLFIN_INSTALL_PYTHON_MODULE_DIR}/dolfin_utils") endif() #------------------------------------------------------------------------------ # Installation of docstrings #install(DIRECTORY ${CMAKE_SOURCE_DIR}/site-packages/dolfin/docstrings # DESTINATION ${DOLFIN_INSTALL_PYTHON_MODULE_DIR}/dolfin # USE_SOURCE_PERMISSIONS) #------------------------------------------------------------------------------ # Installation of DOLFIN library # Append the library version information to the library target properties if (DOLFIN_WITH_LIBRARY_VERSION) string(REPLACE "+" "" DOLFIN_LIBRARY_VERSION ${DOLFIN_VERSION}) # This setting of SOVERSION assumes that any API change # will increment either the minor or major version number. set(DOLFIN_LIBRARY_PROPERTIES ${DOLFIN_LIBRARY_PROPERTIES} VERSION ${DOLFIN_LIBRARY_VERSION} SOVERSION ${DOLFIN_VERSION_MAJOR}.${DOLFIN_VERSION_MINOR} ) endif() # Set DOLFIN install sub-directories set(DOLFIN_BIN_DIR "bin" CACHE PATH "Binary installation directory.") set(DOLFIN_LIB_DIR "lib" CACHE PATH "Library installation directory.") set(DOLFIN_INCLUDE_DIR "include" CACHE PATH "C/C++ header installation directory.") set(DOLFIN_PKGCONFIG_DIR "lib/pkgconfig" CACHE PATH "pkg-config file installation directory.") set(DOLFIN_SHARE_DIR "share/dolfin" CACHE PATH "Shared data installation directory.") set(DOLFIN_MAN_DIR "share/man" CACHE PATH "Manual page installation directory.") set(DOLFIN_DOC_DIR "${DOLFIN_SHARE_DIR}/doc" CACHE PATH "DOLFIN Documentation directory.") set(DOLFIN_ETC_DIR "etc" CACHE PATH "Configuration file directory.") # Add source directory add_subdirectory(dolfin) #------------------------------------------------------------------------------ # Installation of DOLFIN utilities set(DOLFIN_UTILITIES ${DOLFIN_SOURCE_DIR}/scripts/dolfin-convert/dolfin-convert ${DOLFIN_SOURCE_DIR}/scripts/dolfin-order/dolfin-order ${DOLFIN_SOURCE_DIR}/scripts/dolfin-plot/dolfin-plot) install(FILES ${DOLFIN_UTILITIES} DESTINATION ${DOLFIN_BIN_DIR} PERMISSIONS OWNER_WRITE OWNER_READ GROUP_READ WORLD_READ OWNER_EXECUTE GROUP_EXECUTE WORLD_EXECUTE COMPONENT RuntimeExecutables) #------------------------------------------------------------------------------ # Installation of DOLFIN manual pages install(DIRECTORY ${DOLFIN_SOURCE_DIR}/doc-old/man/ DESTINATION ${DOLFIN_MAN_DIR} USE_SOURCE_PERMISSIONS COMPONENT RuntimeExecutables) #------------------------------------------------------------------------------ # Generate and install helper file dolfin.conf # FIXME: Can CMake provide the library path name variable? if (APPLE) set(OS_LIBRARY_PATH_NAME "DYLD_LIBRARY_PATH") else() set(OS_LIBRARY_PATH_NAME "LD_LIBRARY_PATH") endif() # FIXME: not cross-platform compatible # Create and install dolfin.conf file configure_file(${DOLFIN_CMAKE_DIR}/templates/dolfin.conf.in ${CMAKE_BINARY_DIR}/dolfin.conf @ONLY) install(FILES ${CMAKE_BINARY_DIR}/dolfin.conf DESTINATION ${DOLFIN_SHARE_DIR} COMPONENT Development) #------------------------------------------------------------------------------ # Generate and install helper scripts dolfin-version, fenics-version # FIXME: not cross-platform compatible # Create and install dolfin-version configure_file(${DOLFIN_CMAKE_DIR}/templates/dolfin-version.in ${CMAKE_BINARY_DIR}/dolfin-version @ONLY) install(FILES ${CMAKE_BINARY_DIR}/dolfin-version DESTINATION ${DOLFIN_BIN_DIR} PERMISSIONS OWNER_WRITE OWNER_READ GROUP_READ WORLD_READ OWNER_EXECUTE GROUP_EXECUTE WORLD_EXECUTE COMPONENT RuntimeExecutables) # Create and install fenics-version configure_file(${DOLFIN_CMAKE_DIR}/templates/fenics-version.in ${CMAKE_BINARY_DIR}/fenics-version @ONLY) install(FILES ${CMAKE_BINARY_DIR}/fenics-version DESTINATION ${DOLFIN_BIN_DIR} PERMISSIONS OWNER_WRITE OWNER_READ GROUP_READ WORLD_READ OWNER_EXECUTE GROUP_EXECUTE WORLD_EXECUTE COMPONENT RuntimeExecutables) #------------------------------------------------------------------------------ # Generate and install utility script dolfin-get-demos configure_file(${DOLFIN_CMAKE_DIR}/templates/dolfin-get-demos.in ${CMAKE_BINARY_DIR}/dolfin-get-demos @ONLY) install(FILES ${CMAKE_BINARY_DIR}/dolfin-get-demos DESTINATION ${DOLFIN_BIN_DIR} PERMISSIONS OWNER_WRITE OWNER_READ GROUP_READ WORLD_READ OWNER_EXECUTE GROUP_EXECUTE WORLD_EXECUTE COMPONENT RuntimeExecutables) #------------------------------------------------------------------------------ # Generate demo files from rst if (PYTHONINTERP_FOUND) message(STATUS "") message(STATUS "Generating demo source files from reStructuredText") message(STATUS "--------------------------------------------------") file(GLOB_RECURSE demo_rst_files "demo/*.py.rst" "demo/*.cpp.rst" "demo/*.ufl.rst") foreach(rst_file ${demo_rst_files}) execute_process(COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ./utils/pylit/pylit.py ${rst_file} WORKING_DIRECTORY ${DOLFIN_SOURCE_DIR}) #add_custom_command(TARGET demos_source PRE_BUILD # COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ./utils/pylit/pylit.py ${rst_file} # WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}) endforeach() endif() #------------------------------------------------------------------------------ # Generate form files for tests, bench, demos and DOLFIN if not exists # FIXME: Generate files in Build directory instead, at least for # bench, demo and tests set(COPY_DEMO_TEST_DEMO_DATA FALSE) if (NOT EXISTS ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/Poisson.h) message(STATUS "") message(STATUS "Generating form files in demo, test and bench directories. May take some time...") message(STATUS "----------------------------------------------------------------------------------------") execute_process( COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ${DOLFIN_SOURCE_DIR}/cmake/scripts/generate-form-files.py WORKING_DIRECTORY ${DOLFIN_SOURCE_DIR} RESULT_VARIABLE FORM_GENERATION_RESULT OUTPUT_VARIABLE FORM_GENERATION_OUTPUT ERROR_VARIABLE FORM_GENERATION_OUTPUT ) if (FORM_GENERATION_RESULT) # Cleanup so download is triggered next time we run cmake if (EXISTS ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/Poisson.h) file(REMOVE ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/Poisson.h) endif() message(FATAL_ERROR "Generation of form files failed: \n${FORM_GENERATION_OUTPUT}") endif() set(COPY_DEMO_TEST_DEMO_DATA TRUE) endif() #------------------------------------------------------------------------------ # Generate CMakeLists.txt files for bench and demos if not existing # FIXME: Generate files in Build directory instead? # NOTE: We need to call this script after generate-formfiles if (NOT EXISTS ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/CMakeLists.txt) message(STATUS "") message(STATUS "Generating CMakeLists.txt files in demo, test and bench directories") message(STATUS "-------------------------------------------------------------------") execute_process( COMMAND ${PYTHON_EXECUTABLE} ${DOLFIN_SOURCE_DIR}/cmake/scripts/generate-cmakefiles.py WORKING_DIRECTORY ${DOLFIN_SOURCE_DIR} RESULT_VARIABLE CMAKE_GENERATION_RESULT OUTPUT_VARIABLE CMAKE_GENERATION_OUTPUT ERROR_VARIABLE CMAKE_GENERATION_OUTPUT ) if (CMAKE_GENERATION_RESULT) # Cleanup so FFC rebuild is triggered next time we run cmake if (EXISTS ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/CMakeLists.txt) file(REMOVE ${DOLFIN_SOURCE_DIR}/demo/documented/poisson/cpp/CMakeLists.txt) endif() message(FATAL_ERROR "Generation of CMakeLists.txt files failed: \n${CMAKE_GENERATION_OUTPUT}") endif() set(COPY_DEMO_TEST_DEMO_DATA TRUE) endif() #------------------------------------------------------------------------------ # Copy data in demo/bench/test direcories to the build directories # FIXME: We should probably just generate them directly in the build # directory... if (COPY_DEMO_TEST_DEMO_DATA) message(STATUS "") message(STATUS "Copying demo and test data to build directory.") message(STATUS "----------------------------------------------") execute_process( COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ${DOLFIN_SOURCE_DIR}/cmake/scripts/copy-test-demo-data.py ${CMAKE_CURRENT_BINARY_DIR} WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR} RESULT_VARIABLE COPY_DEMO_DATA_RESULT OUTPUT_VARIABLE COPY_DEMO_DATA_OUTPUT ERROR_VARIABLE COPY_DEMO_DATA_OUTPUT) if (COPY_DEMO_DATA_RESULT) message(FATAL_ERROR "Copy demo data failed: \n${COPY_DEMO_DATA_OUTPUT}") endif() endif() #------------------------------------------------------------------------------ # Add demos and install demo source files and mesh files # Add demo but do not add to default target set(CMAKE_MODULE_PATH "${DOLFIN_CMAKE_DIR}/modules") add_subdirectory(demo EXCLUDE_FROM_ALL) # Set make program if ("${CMAKE_GENERATOR}" STREQUAL "Unix Makefiles") set(MAKE_PROGRAM "$(MAKE)") else() set(MAKE_PROGRAM "${CMAKE_MAKE_PROGRAM}") endif() # Add taget to build .py demo files from Python rst input files, and # create a target to build source files from .cpp.rst and .py.rst # files (using pylit) if (PYTHONINTERP_FOUND) file(GLOB_RECURSE demo_rst_files "demo/*.py.rst" "demo/*.cpp.rst" "demo/*.ufl.rst") add_custom_target(demos_source) foreach(rst_file ${rst_files}) add_custom_command(TARGET demos_source PRE_BUILD COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ./utils/pylit/pylit.py ${rst_file} WORKING_DIRECTORY ${CMAKE_SOURCE_DIR}) endforeach() endif() # Add target "demo" for building the demos add_custom_target(demo COMMAND ${MAKE_PROGRAM} DEPENDS copy_data_test_demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/demo") # Install the demo source files install(DIRECTORY demo DESTINATION ${DOLFIN_SHARE_DIR} FILES_MATCHING PATTERN "CMakeLists.txt" PATTERN "*.cpp" PATTERN "*.ufl" PATTERN "*.h" PATTERN "*.py" PATTERN "*.xml*" PATTERN "*.off" PATTERN "CMakeFiles" EXCLUDE) # Install meshes (data directory) install(DIRECTORY data DESTINATION ${DOLFIN_SHARE_DIR}) #------------------------------------------------------------------------------ # Generate documentation if (DOLFIN_ENABLE_DOCS) if (NOT SPHINX_FOUND) message(STATUS "Disabling generation of documentation because Sphinx is missing.") else() add_subdirectory(doc-old) endif() endif() #------------------------------------------------------------------------------ # Add tests and benchmarks if (DOLFIN_ENABLE_BENCHMARKS) # Add bench but do not add to default target add_subdirectory(bench EXCLUDE_FROM_ALL) # Add target "bench" for building benchmarks add_custom_target(bench COMMAND ${MAKE_PROGRAM} WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/bench") # Copy files needed to run benchmarks in build directory file(COPY bench DESTINATION ${CMAKE_CURRENT_BINARY_DIR} FILES_MATCHING PATTERN "*" PATTERN "CMakeFiles" EXCLUDE) # Add target "run_bench" for running benchmarks add_custom_target(run_bench COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "bench.py" DEPENDS bench WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/bench") endif() if (DOLFIN_ENABLE_TESTING) # Add target "unittests_cpp", but do not add to default target add_subdirectory(test/unit/cpp EXCLUDE_FROM_ALL) # Add target "run_unittests_cpp" for running only C++ unit tests add_custom_target(run_unittests_cpp COMMAND ${CMAKE_CURRENT_BINARY_DIR}/test/unit/cpp/unittests_cpp DEPENDS copy_data_test_demo unittests_cpp) # FIXME: remove this buildbot updated to call unittests_cpp # Add alias for unittests_cpp add_custom_target(tests DEPENDS unittests_cpp) # Add target "copy_data_test_demo" add_custom_target(copy_data_test_demo COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" ${DOLFIN_SOURCE_DIR}/cmake/scripts/copy-test-demo-data.py ${CMAKE_CURRENT_BINARY_DIR} WORKING_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}) # Add target "run_memorytests" for running memory tests add_custom_target(run_memorytests COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/memory") # Add target "run_regressiontests" for running regression tests add_custom_target(run_regressiontests COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/regression") # Add target "run_regressiontests_cpp" for running C++ regression tests add_custom_target(run_regressiontests_cpp COMMAND ${CMAKE_COMMAND} -E env DISABLE_PYTHON_TESTING=1 DISABLE_PARALLEL_TESTING=1 ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/regression") # Add target "run_regressiontests_cpp_mpi" for running C++ regression tests with mpi add_custom_target(run_regressiontests_cpp_mpi COMMAND ${CMAKE_COMMAND} -E env DISABLE_PYTHON_TESTING=1 DISABLE_SERIAL_TESTING=1 ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/regression") # Add target "run_regressiontests_py" for running Python regression tests add_custom_target(run_regressiontests_py COMMAND ${CMAKE_COMMAND} -E env DISABLE_CPP_TESTING=1 DISABLE_PARALLEL_TESTING=1 ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/regression") # Add target "run_regressiontests_py_mpi" for running Python regression tests with mpi add_custom_target(run_regressiontests_py_mpi COMMAND ${CMAKE_COMMAND} -E env DISABLE_CPP_TESTING=1 DISABLE_SERIAL_TESTING=1 ${PYTHON_EXECUTABLE} "-B" "-u" "test.py" DEPENDS copy_data_test_demo demo demos_source WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/regression") # Add target "run_unittests" for running unit tests add_custom_target(run_unittests DEPENDS run_unittests_cpp run_unittests_py run_unittests_py_mpi) # Add target "run_unittests_py" for running Python unit tests add_custom_target(run_unittests_py COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "-m" "pytest" DEPENDS copy_data_test_demo WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/unit/python") # Add target "run_unittests_py_mpi" for running Python unit tests with mpi add_custom_target(run_unittests_py_mpi COMMAND ${MPIEXEC} "-np" "3" "./mpipipe.sh" ${PYTHON_EXECUTABLE} "-B" "-u" "-m" "pytest" DEPENDS copy_data_test_demo WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/unit/python") # Add target "run_quicktest" for running only Python unit tests not # marked as 'slow' add_custom_target(run_quicktest COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "-m" "pytest" "-k" "'not" "slow'" DEPENDS copy_data_test_demo WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/unit/python") # Add target "run_styletest" for running documentation tests add_custom_target(run_styletest COMMAND ${PYTHON_EXECUTABLE} "-B" "-u" "test_coding_style.py" DEPENDS copy_data_test_demo WORKING_DIRECTORY "${CMAKE_CURRENT_BINARY_DIR}/test/codingstyle") # Create test_coding_style.py in build dir (depends on source path) configure_file(${CMAKE_SOURCE_DIR}/test/codingstyle/test_coding_style.py.in ${CMAKE_BINARY_DIR}/test/codingstyle/test_coding_style.py @ONLY) # Add target "runtests" for running all tests add_custom_target(runtests DEPENDS run_regressiontests run_unittests run_styletest) endif() #------------------------------------------------------------------------------ # Add "make uninstall" target configure_file( "${DOLFIN_CMAKE_DIR}/templates/cmake_uninstall.cmake.in" "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake" IMMEDIATE @ONLY) add_custom_target(uninstall "${CMAKE_COMMAND}" -P "${CMAKE_CURRENT_BINARY_DIR}/cmake_uninstall.cmake") #------------------------------------------------------------------------------ # Print post-install message add_subdirectory(cmake/post-install) #------------------------------------------------------------------------------ dolfin-2017.2.0.post0/COPYING.LESSER0000644000231000000010000001672713216542752015447 0ustar chrisdaemon GNU LESSER GENERAL PUBLIC LICENSE Version 3, 29 June 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the GNU General Public License, supplemented by the additional permissions listed below. 0. Additional Definitions. As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL" refers to version 3 of the GNU General Public License. "The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as defined below. An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface provided by the Library. A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular version of the Library with which the Combined Work was made is also called the "Linked Version". The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version. The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the Application, including any data and utility programs needed for reproducing the Combined Work from the Application, but excluding the System Libraries of the Combined Work. 1. Exception to Section 3 of the GNU GPL. You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the GNU GPL. 2. Conveying Modified Versions. If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you may convey a copy of the modified version: a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful, or b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy. 3. Object Code Incorporating Material from Library Header Files. The object code form of an Application may incorporate material from a header file that is part of the Library. You may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or fewer lines in length), you do both of the following: a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the object code with a copy of the GNU GPL and this license document. 4. Combined Works. You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging such modifications, if you also do each of the following: a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library and its use are covered by this License. b) Accompany the Combined Work with a copy of the GNU GPL and this license document. c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license document. d) Do one of the following: 0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source. 1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version. e) Provide Installation Information, but only if you would otherwise be required to provide such information under section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a modified version of the Combined Work produced by recombining or relinking the Application with a modified version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding Source.) 5. Combined Libraries. You may place library facilities that are a work based on the Library side by side in a single library together with other library facilities that are not Applications and are not covered by this License, and convey such a combined library under terms of your choice, if you do both of the following: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities, conveyed under the terms of this License. b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 6. Revised Versions of the GNU Lesser General Public License. The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library as you received it specifies that a certain numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that published version or of any later version published by the Free Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free Software Foundation. If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser General Public License shall apply, that proxy's public statement of acceptance of any version is permanent authorization for you to choose that version for the Library. dolfin-2017.2.0.post0/scripts/0000755000231000000010000000000013216543010015056 5ustar chrisdaemondolfin-2017.2.0.post0/scripts/dolfin-order/0000755000231000000010000000000013216543010017442 5ustar chrisdaemondolfin-2017.2.0.post0/scripts/dolfin-order/dolfin-order0000755000231000000010000000326013216543010021755 0ustar chrisdaemon#!/usr/bin/env python # # Copyright (C) 2008 Anders Logg # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Script for ordering a DOLFIN mesh according to the UFC ordering scheme import sys, os, shutil from dolfin import Mesh, File def main(args): "Main function" # Check that we got at least if not len(args) > 0: usage() sys.exit(2) # Convert each mesh for filename in args: print "Ordering %s" % filename # Read and order mesh mesh = Mesh(filename) mesh.order() # Make backup copy shutil.move(filename, filename + ".bak") # Write file and gzip if necessary gzip = False if len(filename) >= 3 and filename[-3:] == ".gz": filename = filename[:-3] gzip = True file = File(filename) file << mesh if gzip: os.system("gzip " + filename) def usage(): "Print usage instructions" print "Usage: dolfin-order mesh0.xml[.gz] [mesh1.xml[.gz] mesh2.xml[.gz] ...]" if __name__ == "__main__": main(sys.argv[1:]) dolfin-2017.2.0.post0/scripts/dolfin-convert/0000755000231000000010000000000013216543010020007 5ustar chrisdaemondolfin-2017.2.0.post0/scripts/dolfin-convert/test_Triangle.edge0000644000231000000010000000173213216543010023444 0ustar chrisdaemon58 1 1 29 2 0 2 2 1 1 3 1 29 1 4 29 23 0 5 23 2 0 6 25 24 1 7 24 23 1 8 23 25 0 9 23 22 1 10 22 2 0 11 29 25 1 12 22 3 0 13 3 2 1 14 3 21 0 15 21 4 0 16 4 3 1 17 22 21 1 18 21 20 1 19 20 4 0 20 5 4 1 21 4 26 0 22 26 5 0 23 19 26 0 24 4 19 0 25 19 18 1 26 18 26 0 27 20 19 1 28 26 28 1 29 28 5 0 30 12 14 0 31 14 13 1 32 13 12 1 33 12 11 1 34 11 14 0 35 11 10 1 36 10 9 1 37 9 11 0 38 8 14 0 39 14 9 0 40 9 8 1 41 8 15 0 42 15 14 1 43 6 27 0 44 27 7 0 45 7 6 1 46 18 27 0 47 27 26 1 48 28 6 0 49 6 5 1 50 18 7 0 51 28 27 1 52 15 7 0 53 7 16 0 54 16 15 1 55 8 7 1 56 17 7 0 57 18 17 1 58 17 16 1 # Generated by triangle -e A.poly dolfin-2017.2.0.post0/scripts/dolfin-convert/test_exodus.exo0000644000231000000010000001601413216543010023074 0ustar chrisdaemonCDF len_string!len_lineQfour time_stepnum_dim num_nodesknum_elem num_el_blk num_qa_recnum_el_in_blk1num_nod_per_el1num_att_in_blk1  api_version@Qversion@@floating_point_word_size file_sizetitleQcubit(/Users/gideonsimpson/code/fenics/darcy2d_quarter/square.e): 01/15/2008: 12 time_whole qa_records coor_namesD0elem_mapt eb_statusDeb_prop1 nameIDHattrib1 Lconnect1  elem_typeTRI3p coord\CUBIT10.201/15/200812:20:01xy  !"#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????   !"#$%&'()*+, -./+01234567"89:;<5=>*,?21@ABC+0;D<(ACEBABFCG!*?HIJFKC2LM++M,4NOPJ%Q7R)'EA  D< S!% .TU$VWXYZWD[[ S3XR')\Z]- $OQ^8:8_9`0=^>L+/aT&UPObQ2V3Rc'cE'd6#6"#(")"7)('A K"Z\(\Z"(ZYW"e4? WT$& f] ] 3VgVgN g#YWMh,N*H9/:iL/IOJj=UO$Q-k.%.& %1bf Ff *NV2N  PjJjJOIb2@\KF KZYY,h?he?G!S!!G<4e54e;5dd#<G5G5+**`>0/0:``113i/9_i9`I >^:0>:3bI@NgXWX3X>`>I@1b@ObQ%$QK\C\(C4NH?4H.k&ka& g3g?Px?SE?H'},?|S7??Q"A}?l?p?>X\o?n\0?w`G?ۜpoB>?=? C?SO?H#d?ͶI[>?jyFN?˅VP?uQo?$23 ?۴=?$[P?(l?F?lYl?ԫ;?=?nL{?ڂ,{?4No ??Ӝ?ˏ?e̜|?o}H?%n^??G1N? )X6??̖j7????jqw3?c# ?ό. ?HG? c???ᱷ?ZxX?˵5Be"???m? X?,B6?s?V??3?? l:?[ǫ?????g5[????DX;?{ZN?nuOy?ӇB??-uK??sW, ?L4?ҋL?{?0&V?!t5? G??V1.?)?G.+J[?,GHW9?pS"5? [lBU?',?ז?lS=?S???KT? D? ?N%Y'?P<?Bu~??\#P?>/ ?Jq}D?`lO?Wf?eU5$?]'??k#?T?Ԉd5?C/ɺ?ϊ‚m??>aN?x V?ax?⁖-H?qR7W? x`?ńY[?L:M????3?y$O'?ĆU??ӫQ?P7o74?OᏳA?헬B??ȸ???;l/?Ώ 9?dd/?&u?d{B?ѿu?;;??m[?????ک?䆳#j?5. # # Modified by Garth N. Wells (gmsh function) # Modified by Alexander H. Jarosch (gmsh fix) # Modified by Angelo Simone (Gmsh and Medit fix) # Modified by Andy R. Terrel (gmsh fix and triangle function) # Modified by Magnus Vikstrom (metis and scotch function) # Modified by Bartosz Sawicki (diffpack function) # Modified by Gideon Simpson (Exodus II function) # Modified by Arve Knudsen (move logic into module meshconvert) # Modified by Kent-Andre Mardal (Star-CD function) # # Script for converting between various data formats from __future__ import print_function import getopt import sys import os from dolfin_utils.commands import getoutput import re import warnings import os.path from dolfin_utils.meshconvert import meshconvert def main(argv): "Main function" # Get command-line arguments try: opts, args = getopt.getopt(argv, "hi:o:", ["help", "input=", "output="]) except getopt.GetoptError: usage() sys.exit(2) # Get options iformat = None oformat = None for opt, arg in opts: if opt in ("-h", "--help"): usage() sys.exit() elif opt in ("-i", "--input"): iformat = arg elif opt in ("-o", "--output"): oformat = arg # Check that we got two filenames if not len(args) == 2: usage() sys.exit(2) # Get filenames ifilename = args[0] ofilename = args[1] # Can only convert to XML if oformat and oformat != "xml": error("Unable to convert to format %s." % (oformat,)) # Convert to XML meshconvert.convert2xml(ifilename, ofilename, iformat=iformat) # Order mesh #os.system("dolfin-order %s" % ofilename) def usage(): "Display usage" print("""\ Usage: dolfin-convert [OPTIONS] ... input.x output.y Options: -h display this help text and exit -i format specify input format -o format specify output format Alternatively, the following long options may be used: --help same as -h --input same as -i --output same as -o Supported formats: xml - DOLFIN XML mesh format (current) xml-old - DOLFIN XML mesh format (DOLFIN 0.6.2 and earlier) mesh - Medit, generated by tetgen with option -g Triangle - Triangle file format (input prefix of .ele and .node files) gmsh - Gmsh, version 2.0 file format metis - Metis graph file format scotch - Scotch graph file format diffpack - Diffpack tetrahedral grid format abaqus - Abaqus tetrahedral grid format ExodusII - Sandia Format (requires ncdump utility from NetCDF) Star-CD - Star-CD tetrahedral grid format If --input or --output are not specified, the format will be deduced from the suffix: .xml - xml .mesh - mesh .gmsh - gmsh .msh - gmsh .gra - metis .grf - scotch .grid - diffpack .inp - abaqus .e - Exodus II .exo - Exodus II .ncdf - ncdump'ed Exodus II .vrt and .cell - starcd """) if __name__ == "__main__": main(sys.argv[1:]) dolfin-2017.2.0.post0/scripts/dolfin-plot/0000755000231000000010000000000013216543010017305 5ustar chrisdaemondolfin-2017.2.0.post0/scripts/dolfin-plot/plot_book_elements.sh0000755000231000000010000000664213216543010023540 0ustar chrisdaemon#!/bin/sh # # This script plots a bunch of elements. Images produced by this # script are used in the FEniCS book chapter "Common and unusual # finite elements" by Kirby/Logg/Rognes/Terrel. # # Anders Logg, 2010-12-08 ROTATE=0 dolfin-plot Argyris triangle 5 rotate=$ROTATE dolfin-plot Arnold-Winther triangle rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini triangle 1 rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini triangle 2 rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini triangle 3 rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini tetrahedron 1 rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini tetrahedron 2 rotate=$ROTATE dolfin-plot Brezzi-Douglas-Marini tetrahedron 3 rotate=$ROTATE dolfin-plot Crouzeix-Raviart triangle 1 rotate=$ROTATE dolfin-plot Crouzeix-Raviart tetrahedron 1 rotate=$ROTATE dolfin-plot DG triangle 0 rotate=$ROTATE dolfin-plot DG triangle 1 rotate=$ROTATE dolfin-plot DG triangle 2 rotate=$ROTATE dolfin-plot DG triangle 3 rotate=$ROTATE dolfin-plot DG tetrahedron 0 rotate=$ROTATE dolfin-plot DG tetrahedron 1 rotate=$ROTATE dolfin-plot DG tetrahedron 2 rotate=$ROTATE dolfin-plot DG tetrahedron 3 rotate=$ROTATE dolfin-plot Hermite triangle rotate=$ROTATE dolfin-plot Hermite tetrahedron rotate=$ROTATE dolfin-plot Lagrange triangle 1 rotate=$ROTATE dolfin-plot Lagrange triangle 2 rotate=$ROTATE dolfin-plot Lagrange triangle 3 rotate=$ROTATE dolfin-plot Lagrange triangle 4 rotate=$ROTATE dolfin-plot Lagrange triangle 5 rotate=$ROTATE dolfin-plot Lagrange triangle 6 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 1 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 2 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 3 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 4 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 5 rotate=$ROTATE dolfin-plot Lagrange tetrahedron 6 rotate=$ROTATE dolfin-plot Mardal-Tai-Winther triangle rotate=$ROTATE dolfin-plot Morley triangle rotate=$ROTATE dolfin-plot N1curl triangle 1 rotate=$ROTATE dolfin-plot N1curl triangle 2 rotate=$ROTATE dolfin-plot N1curl triangle 3 rotate=$ROTATE dolfin-plot N1curl tetrahedron 1 rotate=$ROTATE dolfin-plot N1curl tetrahedron 2 rotate=$ROTATE dolfin-plot N1curl tetrahedron 3 rotate=$ROTATE dolfin-plot N2curl triangle 1 rotate=$ROTATE dolfin-plot N2curl triangle 2 rotate=$ROTATE dolfin-plot N2curl triangle 3 rotate=$ROTATE dolfin-plot N2curl tetrahedron 1 rotate=$ROTATE dolfin-plot Raviart-Thomas triangle 1 rotate=$ROTATE dolfin-plot Raviart-Thomas triangle 2 rotate=$ROTATE dolfin-plot Raviart-Thomas triangle 3 rotate=$ROTATE dolfin-plot Raviart-Thomas tetrahedron 1 rotate=$ROTATE dolfin-plot Raviart-Thomas tetrahedron 2 rotate=$ROTATE dolfin-plot Raviart-Thomas tetrahedron 3 rotate=$ROTATE dolfin-2017.2.0.post0/scripts/dolfin-plot/plot_elements.sh0000755000231000000010000000142713216543010022522 0ustar chrisdaemon#!/bin/sh # # This script plots a bunch of rotating elements. # # Anders Logg, 2010-12-08 echo "Plotting elements..." dolfin-plot Argyris triangle 5 rotate=1 & dolfin-plot Arnold-Winther triangle rotate=1 & dolfin-plot Brezzi-Douglas-Marini tetrahedron 3 rotate=1 & dolfin-plot Crouzeix-Raviart triangle 1 rotate=1 & dolfin-plot Hermite triangle rotate=1 & dolfin-plot Hermite tetrahedron rotate=1 & dolfin-plot Lagrange tetrahedron 5 rotate=1 & dolfin-plot Mardal-Tai-Winther triangle rotate=1 & dolfin-plot Morley triangle rotate=1 & dolfin-plot N1curl tetrahedron 5 rotate=1 & dolfin-plot Raviart-Thomas tetrahedron 1 rotate=1 & dolfin-2017.2.0.post0/scripts/dolfin-plot/dolfin-plot0000755000231000000010000000716213216543010021470 0ustar chrisdaemon#!/usr/bin/env python "Script for simple command-line plotting." # Copyright (C) 2010 Anders Logg # # This file is part of DOLFIN. # # DOLFIN is free software: you can redistribute it and/or modify # it under the terms of the GNU Lesser General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # DOLFIN is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU Lesser General Public License for more details. # # You should have received a copy of the GNU Lesser General Public License # along with DOLFIN. If not, see . # # Modified by Marie E. Rognes, 2010 # # First added: 2010-12-08 # Last changed: 2010-12-22 import sys # Try importing DOLFIN try: from dolfin import * except: print """\ Unable to import DOLFIN. The DOLFIN Python module is required to run this script. Check that DOLFIN is in your PYTHONPATH.""" sys.exit(1) def usage(): "Print usage of this script." # Build list of supported elements element_list = "" for e in supported_elements_for_plotting: if e in supported_elements: element_list += (" %s\n" % e) else: element_list += (" %s (*)\n" % e) print """\ Usage: 1. dolfin-plot where is a mesh stored in DOLFIN XML format with suffix .xml or .xml.gz. 2. dolfin-plot [degree] [rotate=1/0] where is the name of a finite element family, is the domain type ('triangle' or 'tetrahedron'), and is an optional degree for elements that support variable degree. Examples: dolfin-plot mesh.xml dolfin-plot Lagrange triangle 3 dolfin-plot BDM tetrahedron 5 dolfin-plot Hermite triangle List of supported element families: %s A (*) indicates that the element is not supported by DOLFIN/FFC, but the element may still be plotted. """ % element_list # Check command-line arguments if len(sys.argv) < 2: usage() sys.exit(1) # Check for help text if "-h" in sys.argv[1:] or "--help" in sys.argv[1:]: usage() sys.exit(0) # Extract arguments and options args = [arg for arg in sys.argv[1:] if not "=" in arg] options = dict([arg.split("=") for arg in sys.argv[1:] if "=" in arg]) # Check for plotting of mesh if len(args) == 1: # Read mesh print "Reading mesh from file '%s'." % args[0] try: mesh = Mesh(args[0]) except: print "Unable to read mesh from file." sys.exit(1) # Plot mesh print "Plotting mesh." plot(mesh, title="Mesh", interactive=True) sys.exit(0) # Check for plotting of element if len(args) in (2, 3, 4): # Get family and domain family, domain = args[:2] # Get degree if len(args) == 2: degree = None else: degree = int(args[2]) # Create element print "Creating finite element." if len(args) == 3 or len(args) == 2: element = FiniteElement(family, domain, degree) elif len(args) == 4: assert family == "P Lambda" or family == "P- Lambda" form_degree = int(args[3]) element = FiniteElement(family, domain, degree, form_degree) else: pass # Rotate by default in 3D rotate = 1 if str(domain) == "tetrahedron" else 0 # Check for rotate option if "rotate" in options: rotate = int(options["rotate"]) # Plot element print "Plotting finite element." plot(element, rotate=rotate) sys.exit(0) # Catch all usage() sys.exit(1) dolfin-2017.2.0.post0/doc/0000755000231000000010000000000013216543004014137 5ustar chrisdaemondolfin-2017.2.0.post0/doc/parse_doxygen.py0000644000231000000010000006622113216543004017367 0ustar chrisdaemon#!/usr/bin/env python # # Read doxygen xml files into an object tree that can be # written to either Sphinx ReStructuredText format or SWIG doctrings # # Written by Tormod Landet, 2017 # from __future__ import print_function import sys, os try: import lxml.etree as ET except ImportError: import xml.etree.ElementTree as ET MOCK_REPLACEMENTS = {'operator[]': '__getitem__', 'operator()': '__call__', 'in': '_in', 'self': '_self', 'string': 'str', 'false': 'False', 'true': 'True', 'null': 'None'} class Namespace(object): """ A C++ namespace """ def __init__(self, name): self.name = name # We use '' for the global namespace self.parent_namespace = None self.subspaces = {} self.members = {} def add(self, item): """ Add a member to this namespace """ if isinstance(item, Namespace): self.subspaces[item.name] = item item.parent_namespace = self else: self.members[item.name] = item item.namespace = self def lookup(self, name): """ Find a named member. This is currently only used to find classes in namespaces. We use this to find the superclasses of each class since we only know their names and we need the list their superclasses as well to show only direct superclasses when we generate documentation for each class """ item = self.members.get(name, None) if item: return item nn = name.split('::')[0] if nn == self.name: return None if nn in self.subspaces: return self.subspaces[nn].lookup(item) if self.parent_namespace is not None: return self.parent_namespace.lookup(name) class Parameter(object): """ A parameter to a C++ function """ def __init__(self, name, param_type, value=None, description=''): self.name = name self.type = param_type self.value = None self.description = description @staticmethod def from_param(param_elem): """ Read a element and get name and type """ tp = get_single_element(param_elem, 'type') param_type = description_to_string(tp, skipelems=('ref',)) n1 = get_single_element(param_elem, 'declname', True) n2 = get_single_element(param_elem, 'defname', True) if n1 is None and n2 is not None: name = n2.text elif n1 is None: name = '' else: name = n1.text defval = None dv = get_single_element(param_elem, 'defval', True) if dv is not None: defval = dv.text description = '' doc1 = get_single_element(param_elem, 'briefdescription', True) if doc1 is not None: description = description_to_string(doc1) return Parameter(name, param_type, defval, description) class NamespaceMember(object): """ A class, function, enum, struct, or typedef """ def __init__(self, name, kind): # Used for every member self.name = name self.kind = kind # enum, struct, function or class self.namespace = None self.docstring = [] self.short_name = None self.protection = None self.hpp_file_name = None self.xml_file_name = None # Used for functions and variables self.type = None self.type_description = '' # Used for functions self.parameters = [] # Used for classes self.superclasses = [] self.members = {} # Used for enums self.enum_values = [] def add(self, item): """ Add a member to a class """ assert self.kind == 'class' self.members[item.name] = item item.namespace = self @staticmethod def from_memberdef(mdef, name, kind, xml_file_name, namespace_obj): """ Read a memberdef element from a nameclass file (members of the namespace that are not classes) or from a compoundef (members of the class) """ # Make sure we have the required "namespace::" prefix required_prefix = '%s::' % namespace_obj.name if kind != 'friend' and not name.startswith(required_prefix): name = required_prefix + name short_name = name[len(required_prefix):] if kind == 'function': argsstring = get_single_element(mdef, 'argsstring').text name += argsstring # Define attributes common to all namespace members item = NamespaceMember(name, kind) item.short_name = short_name item.protection = mdef.attrib['prot'] item.xml_file_name = xml_file_name item.hpp_file_name = get_single_element(mdef, 'location').attrib['file'] item._add_doc(mdef) # Get parameters (for functions) for param in mdef.findall('param'): item.parameters.append(Parameter.from_param(param)) # Get type (return type for functions) mtype = get_single_element(mdef, 'type', True) if mtype is not None: item.type = description_to_string(mtype, skipelems=('ref',)) # Get parameter descriptions dd = get_single_element(mdef, 'detaileddescription') for pi in findall_recursive(dd, 'parameteritem'): pnl = get_single_element(pi, 'parameternamelist') pd = get_single_element(pi, 'parameterdescription') param_desc = description_to_string(pd) pns = pnl.findall('parametername') for pn in pns: pname = pn.text pdesc = param_desc if 'direction' in pn.attrib: pdesc += ' [direction=%s]' % pn.attrib['direction'] maching_params = [p for p in item.parameters if p.name == pname] assert len(maching_params) == 1 maching_params[0].description += ' ' + pdesc # Get return type description for ss in findall_recursive(dd, 'simplesect'): memory = {'skip_simplesect': False} if ss.get('kind', '') == 'return': item.type_description = description_to_string(ss, memory=memory) # Get enum values for ev in mdef.findall('enumvalue'): ename = get_single_element(ev, 'name').text evalue = '' init = get_single_element(ev, 'initializer', True) if init is not None: evalue = init.text item.enum_values.append((ename, evalue)) return item @staticmethod def from_compounddef(cdef, name, kind, xml_file_name): """ Read a compounddef element from a class definition file """ item = NamespaceMember(name, kind) item.hpp_file_name = get_single_element(cdef, 'location').attrib['file'] item.xml_file_name = xml_file_name item.short_name = name.split('::')[-1] item._add_doc(cdef) # Get superclasses with public inheritance igs = cdef.findall('collaborationgraph') if len(igs) == 1: for node in igs[0]: public = False for cn in node.findall('childnode'): if cn.attrib.get('relation', '') == 'public-inheritance': public = True if public: label = get_single_element(node, 'label').text if label != item.name: item.superclasses.append(label) else: assert len(igs) == 0 # Read members for s in cdef.findall('sectiondef'): members = s.findall('memberdef') for m in members: mname = get_single_element(m, 'name').text mkind = m.attrib['kind'] mitem = NamespaceMember.from_memberdef(m, mname, mkind, xml_file_name, item) item.add(mitem) return item def _add_doc(self, elem): """ Add docstring for the given element """ bd = get_single_element(elem, 'briefdescription') dd = get_single_element(elem, 'detaileddescription') description_to_rst(bd, self.docstring) if self.docstring and self.docstring[-1].strip(): self.docstring.append('') description_to_rst(dd, self.docstring) def get_superclasses(self): to_remove = set() for sn in self.superclasses: obj = self.namespace.lookup(sn) if obj: for sn2 in obj.superclasses: if sn2 in self.superclasses: to_remove.add(sn2) return [sn for sn in self.superclasses if sn not in to_remove] def _to_rst_string(self, indent, for_swig=False, for_mock=False): """ Create a list of lines on Sphinx (ReStructuredText) format The header is included for Sphinx, but not for SWIG docstrings """ ret = [] if for_mock: for_swig = True if not for_swig: ret.append('') simple_kinds = {'typedef': 'type', 'enum': 'enum'} kinds_with_types = {'function': 'function', 'variable': 'var'} if self.kind in ('class', 'struct'): superclasses = self.get_superclasses() supers = '' if superclasses: supers = ' : public ' + ', '.join(superclasses) ret.append(indent + '.. cpp:class:: %s%s' % (self.name, supers)) elif self.kind in kinds_with_types: rst_kind = kinds_with_types[self.kind] ret.append(indent + '.. cpp:%s:: %s %s' % (rst_kind, self.type, self.name)) elif self.kind in simple_kinds: rst_kind = simple_kinds[self.kind] ret.append(indent + '.. cpp:%s:: %s' % (rst_kind, self.name)) else: raise NotImplementedError('Kind %s not implemented' % self.kind) # Docstring and parameters must be further indented indent += ' ' ret.append(indent) # All: add docstring doclines = self.docstring if doclines and not doclines[0].strip(): doclines = doclines[1:] ret.extend(indent + line for line in doclines) # Classes: separate friends from other members friends, members = [], [] for _, member in sorted(self.members.items()): if member.kind == 'friend': friends.append(member) else: members.append(member) # Classes: add friends if friends: if ret and ret[-1].strip(): ret.append(indent) ret.append(indent + 'Friends: %s.' % ', '.join( ':cpp:any:`%s`' % friend.name for friend in friends)) # Functions: add space before parameters and return types if self.parameters or (self.type and for_swig) or self.type_description: if ret and ret[-1].strip(): ret.append(indent) # Functions: add parameter definitions for param in self.parameters: if param.name: ptype = param.type.replace(':', '\\:') pname = param.name.replace(':', '\\:') if for_swig: ret.append(indent + ':param %s %s: %s' % (ptype, pname, param.description)) else: # Parameter type is redundant info in the Sphinx produced html ret.append(indent + ':param %s: %s' % (pname, param.description)) # Functions: add return type (redundant info in the Sphinx produced html) if self.type and for_swig: ret.append(indent + ':rtype: %s' % self.type) if self.type_description: ret.append(indent + ':returns: %s' % self.type_description) # Enums: add enumerators for ename, evalue in self.enum_values: if ret and ret[-1].strip(): ret.append(indent) ret.append(indent + '.. cpp:enumerator:: %s::%s %s' % (self.name, ename, evalue)) ret.append(indent) # Remove doubled up blank lines if ret: ret2 = [ret[0]] for line in ret[1:]: if line.strip() or ret2[-1].strip(): ret2.append(line) ret = ret2 # All: SWIG items are not nested, so we end this one here if for_swig and not for_mock: escaped = [line.replace('\\', '\\\\').replace('"', '\\"') for line in ret] sname = self.name if sname.endswith('=0'): sname = sname[:-2] elif sname.endswith('=delete'): sname = sname[:-7] ret = ['%%feature("docstring") %s "' % sname, '\n'.join(escaped).rstrip() + '\n";\n'] indent = '' # Classes: add members of a class if not for_mock: for member in members: ret.append(member._to_rst_string(indent, for_swig)) return '\n'.join(ret) def to_swig(self): """ Output SWIG docstring definitions """ swigstr = self._to_rst_string(indent='', for_swig=True) # Get rid of repeated newlines for _ in range(4): swigstr = swigstr.replace('\n\n\n', '\n\n') # Remove any '=default' keywords swigstr = swigstr.replace("=default", "") return swigstr def to_rst(self, indent=''): """ Output Sphinx :cpp:*:: definitions """ return self._to_rst_string(indent) def to_mock(self, modulename, indent='', _classname=None): """ Output mock Python code """ # Get swig docstring and get rid of any repeated newlines new_indent = indent + ' ' swigdoc = self._to_rst_string(indent=new_indent, for_mock=True) for _ in range(4): swigdoc = swigdoc.replace('\n\n\n', '\n\n') ret = [] if self.kind == 'class': superclasses = [sc.split('::')[-1].split('<')[0] for sc in self.get_superclasses()] superclasses = [MOCK_REPLACEMENTS.get(sc, sc) for sc in superclasses] superclasses = [sc for sc in superclasses if sc != self.short_name] ret.append(indent + 'class %s:\n' % (self.short_name)) ret.append(new_indent + '"""\n') ret.append(swigdoc.rstrip()) ret.append('\n') for sc in superclasses: ret.append(new_indent + 'Superclass: :py:obj:`%s`\n' % sc) ret.append(new_indent + '"""\n') for member in self.members.values(): ret.append('\n') ret.append(member.to_mock(modulename, indent=new_indent, _classname=self.short_name)) elif self.kind == 'function': fname = MOCK_REPLACEMENTS.get(self.short_name, self.short_name) if fname.startswith('operator'): return '' params = [MOCK_REPLACEMENTS.get(p.name, p.name) for p in self.parameters] if _classname: params = ['self'] + params if _classname == fname: fname = '__init__' elif '~' + _classname == fname: fname = '__del__' if self.parameters and self.parameters[-1].type == '...': params[-1] = '*args' args = ', '.join(params) ret.append(indent + 'def %s(%s):\n' % (fname, args)) ret.append(new_indent + '"""\n') ret.append(swigdoc.rstrip()) ret.append('\n') ret.append(new_indent + '"""\n') ret.append(new_indent + 'print(WARNING)\n') elif self.kind == 'friend': pass elif self.kind == 'enum': ret.append(indent + '# Enumeration %s\n' % self.short_name) for ename, evalue in self.enum_values: ret.append(indent + '#: Enum value %s::%s %s\n' % (self.name, ename, evalue)) if not evalue.strip(): evalue = '= None' evalue = MOCK_REPLACEMENTS.get(evalue[1:].strip(), evalue[1:].strip()) ret.append(indent + '%s.%s_%s = %s\n' % (modulename, self.short_name, ename, evalue)) elif self.kind == 'variable': pass #for line in swigdoc.split('\n'): # ret.append(indent + '#: %s\n' % line) #ret.append(indent + 'module.%s = None\n' % self.short_name) elif self.kind == 'typedef': pass else: print('WARNING: kind %s not supported by to_mock()' % self.kind) if not _classname and self.kind in ('class', 'function'): ret.append('%s.%s = %s\n' % (modulename, self.short_name, self.short_name)) return ''.join(ret) def __str__(self): return self.to_rst() def description_to_string(element, indent='', skipelems=(), memory=None): lines = [] description_to_rst(element, lines, indent, skipelems, memory) return ' '.join(lines).strip() NOT_IMPLEMENTED_ELEMENTS = set() def description_to_rst(element, lines, indent='', skipelems=(), memory=None): """ Create a valid ReStructuredText block for Sphinx from the given description element. Handles etc markup inside and also is called for every sub-tag of the description tag (like , etc) """ if lines == []: lines.append('') if memory is None: memory = dict() tag = element.tag children = list(element) postfix = '' postfix_lines = [] # Handle known tags that show up in description type element trees # Tag contents are handled beneath, if the if-branch does not return # Unknown tags are just output unchanged and a WARNING is shown (in main) if tag in ('briefdescription', 'detaileddescription', 'parameterdescription', 'type', 'highlight'): pass elif element in memory: # This element is being re-read, only treat the contained elements pass elif tag == 'para': if lines[-1].strip(): lines.append(indent) postfix_lines.append(indent) elif tag == 'codeline': memory = dict(memory); memory[element] = 1 skipelems = set(skipelems); skipelems.add('ref') line = description_to_string(element, indent, skipelems, memory) if line.startswith('*'): line = line[1:] lines.append(indent + line) return elif tag == 'ndash': lines[-1] += '--' elif tag == 'mdash': lines[-1] += '---' elif tag == 'sp': lines[-1] += ' ' elif tag == 'ref': if 'ref' in skipelems: lines[-1] += element.text else: lines[-1] += ':cpp:any:`%s` ' % element.text return elif tag == 'ulink': lines[-1] += '`%s <%s>`_ ' % (element.text, element.get('url')) return elif tag == 'emphasis': if children and children[0].tag != 'ref': lines[-1] += '**' postfix += '**' elif tag == 'computeroutput': if element.text is None and not list(element): return elif lines[-1].endswith(':math:'): lines[-1] += '`' postfix += '` ' else: lines[-1] += '``' postfix += '`` ' elif tag in ('verbatim', 'programlisting'): if element.text is None and not list(element): return if lines[-1].strip(): lines.append(indent) lines.append(indent + '::') postfix_lines.append(indent) indent += ' ' lines.append(indent) lines.append(indent) elif tag == 'table': lines.extend(xml_table_to_rst(element, indent)) lines.append(indent) return # Do not process children elif tag == 'formula': contents = element.text.strip() if contents.startswith(r'\['): if lines[-1].strip(): lines.append(indent) lines.append(indent + '.. math::') lines.append(indent + ' ') lines.append(indent + ' ' + contents[2:-2]) lines.append(indent) lines.append(indent) else: lines[-1] += ' :math:`' + contents[1:-1] + '` ' return elif tag == 'itemizedlist': memory = dict(memory) memory['list_item_prefix'] = '* ' if lines[-1].strip(): lines.append(indent) postfix_lines.append(indent) elif tag == 'orderedlist': memory = dict(memory) memory['list_item_prefix'] = '#. ' if lines[-1].strip(): lines.append(indent) postfix_lines.append(indent) elif tag == 'listitem': memory = dict(memory); memory[element] = 1 item = description_to_string(element, indent + ' ', skipelems, memory) lines.append(indent + memory['list_item_prefix'] + item) return elif tag == 'parameterlist': # We parse these separately in the parameter reading process return elif tag == 'simplesect' and element.get('kind', '') == 'return': # We parse these separately in the return-type reading process if memory.get('skip_simplesect', True): return else: NOT_IMPLEMENTED_ELEMENTS.add(tag) lines.append(ET.tostring(element)) def add_text(text): if text is not None and text.strip(): tl = text.split('\n') lines[-1] += tl[0] lines.extend([indent + line for line in tl[1:]]) if text.endswith('\n'): lines.append(indent) add_text(element.text) for child in children: description_to_rst(child, lines, indent, skipelems, memory) add_text(child.tail) if postfix: lines[-1] += postfix if postfix_lines: lines.extend(postfix_lines) def xml_table_to_rst(table, indent): """ Read an XML table element and produce a ReStructuredText table ===== ==== Col A B ===== ==== hi 1.00 there 3.14 ===== ==== """ rows = int(table.get('rows')) cols = int(table.get('cols')) data = [[''] * cols for row in range(rows)] widths = [0] * cols for i, row in enumerate(table): for j, entry in enumerate(row): memory = {entry: 1} text = description_to_string(entry, indent='', skipelems=('para',), memory=memory) widths[j] = max(widths[j], len(text)) data[i][j] = text ret = [indent] * (rows + 3) for w in widths: ret[0] += '=' * w + ' ' ret[2] += '=' * w + ' ' ret[-1] += '=' * w + ' ' for i, row in enumerate(data): I = 1 if i == 0 else i + 2 for j, text in enumerate(row): ret[I] += text + ' ' * (widths[j] + 2 - len(text)) return ret def get_single_element(parent, name, allow_none=False): """ Helper to get one unique child element """ elems = parent.findall(name) N = len(elems) if N != 1: if allow_none and N == 0: return None raise ValueError('Expected one element %r below %r, got %r' % (name, parent, len(elems))) return elems[0] def findall_recursive(parent, name): for item in parent.findall(name): yield item for child in parent: for item in findall_recursive(child, name): yield item def read_doxygen_xml_files(xml_directory, namespace_names, verbose=True): """ Read doxygen XML files from the given directory. Restrict the returned namespaces to the ones listed in the namespaces input iterable Remember: we are built for speed, not ultimate flexibility, hence the restrictions to avoid parsing more than we are interested in actually outputing in the end """ if verbose: print('Parsing doxygen XML files in %s' % xml_directory) root_namespace = Namespace('') for nn in namespace_names: root_namespace.add(Namespace(nn)) # Loop through xml files of compounds and get class definitions xml_files = os.listdir(xml_directory) for xml_file_name in xml_files: if not xml_file_name.startswith('class'): continue path = os.path.join(xml_directory, xml_file_name) try: root = ET.parse(path).getroot() except Exception as e: print('ERROR parsing doxygen xml document', path) print(e) continue compounds = root.findall('compounddef') for c in compounds: kind = c.attrib['kind'] names = c.findall('compoundname') assert len(names) == 1 name = names[0].text nn = name.split('::')[0] namespace = root_namespace.subspaces.get(nn, None) if not namespace: continue item = NamespaceMember.from_compounddef(c, name, kind, xml_file_name) namespace.add(item) if verbose: print(end='.') if verbose: print('DONE\nParsing namespace files:') # Loop through other elements in the namespaces for namespace in root_namespace.subspaces.values(): file_name = 'namespace%s.xml' % namespace.name path = os.path.join(xml_directory, file_name) try: root = ET.parse(path).getroot() except Exception as e: print('ERROR parsing doxygen xml document', path) print(e) continue compound = get_single_element(root, 'compounddef') sections = compound.findall('sectiondef') for s in sections: members = s.findall('memberdef') for m in members: name = get_single_element(m, 'name').text kind = m.attrib['kind'] item = NamespaceMember.from_memberdef(m, name, kind, xml_file_name, namespace) namespace.add(item) if verbose: print(' - ', namespace.name) if verbose: print('Done parsing files') return root_namespace.subspaces if __name__ == '__main__': if len(sys.argv) != 3: print('Call me like "script path_to/xml/dir namespace"') print('An exampe:\n\tpython parse_doxygen.py doxygen/xml dolfin') print('ERROR: I need two arguments!') exit(1) xml_directory = sys.argv[1] namespace = sys.argv[2] # Parse the XML files namespaces = read_doxygen_xml_files(xml_directory, [namespace]) # Get sorted list of members members = list(namespaces[namespace].members.values()) members.sort(key=lambda m: m.name) # Make Sphinx documentation with open('api.rst', 'wt') as out: for member in members: out.write(member.to_rst()) out.write('\n') out.write('\n') # Make SWIG interface file with open('docstrings.i', 'wt') as out: out.write('// SWIG docstrings generated by doxygen and parse_doxygen.py\n\n') for member in members: out.write(member.to_swig()) out.write('\n') out.write('\n') for tag in NOT_IMPLEMENTED_ELEMENTS: print('WARNING: doxygen XML tag %s is not supported by the parser' % tag) dolfin-2017.2.0.post0/doc/Doxyfile0000644000231000000010000031621013216543004015650 0ustar chrisdaemon# Doxyfile 1.8.11 # This file describes the settings to be used by the documentation system # doxygen (www.doxygen.org) for a project. # # All text after a double hash (##) is considered a comment and is placed in # front of the TAG it is preceding. # # All text after a single hash (#) is considered a comment and will be ignored. # The format is: # TAG = value [value, ...] # For lists, items can also be appended using: # TAG += value [value, ...] # Values that contain spaces should be placed between quotes (\" \"). #--------------------------------------------------------------------------- # Project related configuration options #--------------------------------------------------------------------------- # This tag specifies the encoding used for all characters in the config file # that follow. The default is UTF-8 which is also the encoding used for all text # before the first occurrence of this tag. Doxygen uses libiconv (or the iconv # built into libc) for the transcoding. See http://www.gnu.org/software/libiconv # for the list of possible encodings. # The default value is: UTF-8. DOXYFILE_ENCODING = UTF-8 # The PROJECT_NAME tag is a single word (or a sequence of words surrounded by # double-quotes, unless you are using Doxywizard) that should identify the # project for which the documentation is generated. This name is used in the # title of most generated pages and in a few other places. # The default value is: My Project. PROJECT_NAME = "dolfin" # The PROJECT_NUMBER tag can be used to enter a project or revision number. This # could be handy for archiving the generated documentation or if some version # control system is used. PROJECT_NUMBER = "2016.2.0" # Using the PROJECT_BRIEF tag one can provide an optional one line description # for a project that appears at the top of each page and should give viewer a # quick idea about the purpose of the project. Keep the description short. PROJECT_BRIEF = "C++ FEniCS library" # With the PROJECT_LOGO tag one can specify a logo or an icon that is included # in the documentation. The maximum height of the logo should not exceed 55 # pixels and the maximum width should not exceed 200 pixels. Doxygen will copy # the logo to the output directory. PROJECT_LOGO = # The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path # into which the generated documentation will be written. If a relative path is # entered, it will be relative to the location where doxygen was started. If # left blank the current directory will be used. OUTPUT_DIRECTORY = doxygen # If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub- # directories (in 2 levels) under the output directory of each output format and # will distribute the generated files over these directories. Enabling this # option can be useful when feeding doxygen a huge amount of source files, where # putting all generated files in the same directory would otherwise causes # performance problems for the file system. # The default value is: NO. CREATE_SUBDIRS = NO # If the ALLOW_UNICODE_NAMES tag is set to YES, doxygen will allow non-ASCII # characters to appear in the names of generated files. If set to NO, non-ASCII # characters will be escaped, for example _xE3_x81_x84 will be used for Unicode # U+3044. # The default value is: NO. ALLOW_UNICODE_NAMES = NO # The OUTPUT_LANGUAGE tag is used to specify the language in which all # documentation generated by doxygen is written. Doxygen will use this # information to generate all constant output in the proper language. # Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese, # Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States), # Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian, # Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages), # Korean, Korean-en (Korean with English messages), Latvian, Lithuanian, # Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian, # Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish, # Ukrainian and Vietnamese. # The default value is: English. OUTPUT_LANGUAGE = English # If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member # descriptions after the members that are listed in the file and class # documentation (similar to Javadoc). Set to NO to disable this. # The default value is: YES. BRIEF_MEMBER_DESC = YES # If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief # description of a member or function before the detailed description # # Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the # brief descriptions will be completely suppressed. # The default value is: YES. REPEAT_BRIEF = YES # This tag implements a quasi-intelligent brief description abbreviator that is # used to form the text in various listings. Each string in this list, if found # as the leading text of the brief description, will be stripped from the text # and the result, after processing the whole list, is used as the annotated # text. Otherwise, the brief description is used as-is. If left blank, the # following values are used ($name is automatically replaced with the name of # the entity):The $name class, The $name widget, The $name file, is, provides, # specifies, contains, represents, a, an and the. ABBREVIATE_BRIEF = # If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then # doxygen will generate a detailed section even if there is only a brief # description. # The default value is: NO. ALWAYS_DETAILED_SEC = NO # If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all # inherited members of a class in the documentation of that class as if those # members were ordinary class members. Constructors, destructors and assignment # operators of the base classes will not be shown. # The default value is: NO. INLINE_INHERITED_MEMB = NO # If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path # before files name in the file list and in the header files. If set to NO the # shortest path that makes the file name unique will be used # The default value is: YES. FULL_PATH_NAMES = YES # The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path. # Stripping is only done if one of the specified strings matches the left-hand # part of the path. The tag can be used to show relative paths in the file list. # If left blank the directory from which doxygen is run is used as the path to # strip. # # Note that you can specify absolute paths here, but also relative paths, which # will be relative from the directory where doxygen is started. # This tag requires that the tag FULL_PATH_NAMES is set to YES. STRIP_FROM_PATH = # The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the # path mentioned in the documentation of a class, which tells the reader which # header file to include in order to use a class. If left blank only the name of # the header file containing the class definition is used. Otherwise one should # specify the list of include paths that are normally passed to the compiler # using the -I flag. STRIP_FROM_INC_PATH = # If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but # less readable) file names. This can be useful is your file systems doesn't # support long names like on DOS, Mac, or CD-ROM. # The default value is: NO. SHORT_NAMES = NO # If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the # first line (until the first dot) of a Javadoc-style comment as the brief # description. If set to NO, the Javadoc-style will behave just like regular Qt- # style comments (thus requiring an explicit @brief command for a brief # description.) # The default value is: NO. JAVADOC_AUTOBRIEF = NO # If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first # line (until the first dot) of a Qt-style comment as the brief description. If # set to NO, the Qt-style will behave just like regular Qt-style comments (thus # requiring an explicit \brief command for a brief description.) # The default value is: NO. QT_AUTOBRIEF = NO # The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a # multi-line C++ special comment block (i.e. a block of //! or /// comments) as # a brief description. This used to be the default behavior. The new default is # to treat a multi-line C++ comment block as a detailed description. Set this # tag to YES if you prefer the old behavior instead. # # Note that setting this tag to YES also means that rational rose comments are # not recognized any more. # The default value is: NO. MULTILINE_CPP_IS_BRIEF = NO # If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the # documentation from any documented member that it re-implements. # The default value is: YES. INHERIT_DOCS = YES # If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new # page for each member. If set to NO, the documentation of a member will be part # of the file/class/namespace that contains it. # The default value is: NO. SEPARATE_MEMBER_PAGES = NO # The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen # uses this value to replace tabs by spaces in code fragments. # Minimum value: 1, maximum value: 16, default value: 4. TAB_SIZE = 4 # This tag can be used to specify a number of aliases that act as commands in # the documentation. An alias has the form: # name=value # For example adding # "sideeffect=@par Side Effects:\n" # will allow you to put the command \sideeffect (or @sideeffect) in the # documentation, which will result in a user-defined paragraph with heading # "Side Effects:". You can put \n's in the value part of an alias to insert # newlines. ALIASES = # This tag can be used to specify a number of word-keyword mappings (TCL only). # A mapping has the form "name=value". For example adding "class=itcl::class" # will allow you to use the command class in the itcl::class meaning. TCL_SUBST = # Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources # only. Doxygen will then generate output that is more tailored for C. For # instance, some of the names that are used will be different. The list of all # members will be omitted, etc. # The default value is: NO. OPTIMIZE_OUTPUT_FOR_C = NO # Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or # Python sources only. Doxygen will then generate output that is more tailored # for that language. For instance, namespaces will be presented as packages, # qualified scopes will look different, etc. # The default value is: NO. OPTIMIZE_OUTPUT_JAVA = NO # Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran # sources. Doxygen will then generate output that is tailored for Fortran. # The default value is: NO. OPTIMIZE_FOR_FORTRAN = NO # Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL # sources. Doxygen will then generate output that is tailored for VHDL. # The default value is: NO. OPTIMIZE_OUTPUT_VHDL = NO # Doxygen selects the parser to use depending on the extension of the files it # parses. With this tag you can assign which parser to use for a given # extension. Doxygen has a built-in mapping, but you can override or extend it # using this tag. The format is ext=language, where ext is a file extension, and # language is one of the parsers supported by doxygen: IDL, Java, Javascript, # C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran: # FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran: # Fortran. In the later case the parser tries to guess whether the code is fixed # or free formatted code, this is the default for Fortran type files), VHDL. For # instance to make doxygen treat .inc files as Fortran files (default is PHP), # and .f files as C (default is Fortran), use: inc=Fortran f=C. # # Note: For files without extension you can use no_extension as a placeholder. # # Note that for custom extensions you also need to set FILE_PATTERNS otherwise # the files are not read by doxygen. EXTENSION_MAPPING = # If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments # according to the Markdown format, which allows for more readable # documentation. See http://daringfireball.net/projects/markdown/ for details. # The output of markdown processing is further processed by doxygen, so you can # mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in # case of backward compatibilities issues. # The default value is: YES. MARKDOWN_SUPPORT = YES # When enabled doxygen tries to link words that correspond to documented # classes, or namespaces to their corresponding documentation. Such a link can # be prevented in individual cases by putting a % sign in front of the word or # globally by setting AUTOLINK_SUPPORT to NO. # The default value is: YES. AUTOLINK_SUPPORT = YES # If you use STL classes (i.e. std::string, std::vector, etc.) but do not want # to include (a tag file for) the STL sources as input, then you should set this # tag to YES in order to let doxygen match functions declarations and # definitions whose arguments contain STL classes (e.g. func(std::string); # versus func(std::string) {}). This also make the inheritance and collaboration # diagrams that involve STL classes more complete and accurate. # The default value is: NO. BUILTIN_STL_SUPPORT = YES # If you use Microsoft's C++/CLI language, you should set this option to YES to # enable parsing support. # The default value is: NO. CPP_CLI_SUPPORT = NO # Set the SIP_SUPPORT tag to YES if your project consists of sip (see: # http://www.riverbankcomputing.co.uk/software/sip/intro) sources only. Doxygen # will parse them like normal C++ but will assume all classes use public instead # of private inheritance when no explicit protection keyword is present. # The default value is: NO. SIP_SUPPORT = NO # For Microsoft's IDL there are propget and propput attributes to indicate # getter and setter methods for a property. Setting this option to YES will make # doxygen to replace the get and set methods by a property in the documentation. # This will only work if the methods are indeed getting or setting a simple # type. If this is not the case, or you want to show the methods anyway, you # should set this option to NO. # The default value is: YES. IDL_PROPERTY_SUPPORT = YES # If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC # tag is set to YES then doxygen will reuse the documentation of the first # member in the group (if any) for the other members of the group. By default # all members of a group must be documented explicitly. # The default value is: NO. DISTRIBUTE_GROUP_DOC = NO # If one adds a struct or class to a group and this option is enabled, then also # any nested class or struct is added to the same group. By default this option # is disabled and one has to add nested compounds explicitly via \ingroup. # The default value is: NO. GROUP_NESTED_COMPOUNDS = NO # Set the SUBGROUPING tag to YES to allow class member groups of the same type # (for instance a group of public functions) to be put as a subgroup of that # type (e.g. under the Public Functions section). Set it to NO to prevent # subgrouping. Alternatively, this can be done per class using the # \nosubgrouping command. # The default value is: YES. SUBGROUPING = YES # When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions # are shown inside the group in which they are included (e.g. using \ingroup) # instead of on a separate page (for HTML and Man pages) or section (for LaTeX # and RTF). # # Note that this feature does not work in combination with # SEPARATE_MEMBER_PAGES. # The default value is: NO. INLINE_GROUPED_CLASSES = NO # When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions # with only public data fields or simple typedef fields will be shown inline in # the documentation of the scope in which they are defined (i.e. file, # namespace, or group documentation), provided this scope is documented. If set # to NO, structs, classes, and unions are shown on a separate page (for HTML and # Man pages) or section (for LaTeX and RTF). # The default value is: NO. INLINE_SIMPLE_STRUCTS = NO # When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or # enum is documented as struct, union, or enum with the name of the typedef. So # typedef struct TypeS {} TypeT, will appear in the documentation as a struct # with name TypeT. When disabled the typedef will appear as a member of a file, # namespace, or class. And the struct will be named TypeS. This can typically be # useful for C code in case the coding convention dictates that all compound # types are typedef'ed and only the typedef is referenced, never the tag name. # The default value is: NO. TYPEDEF_HIDES_STRUCT = NO # The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This # cache is used to resolve symbols given their name and scope. Since this can be # an expensive process and often the same symbol appears multiple times in the # code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small # doxygen will become slower. If the cache is too large, memory is wasted. The # cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range # is 0..9, the default is 0, corresponding to a cache size of 2^16=65536 # symbols. At the end of a run doxygen will report the cache usage and suggest # the optimal cache size from a speed point of view. # Minimum value: 0, maximum value: 9, default value: 0. LOOKUP_CACHE_SIZE = 0 #--------------------------------------------------------------------------- # Build related configuration options #--------------------------------------------------------------------------- # If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in # documentation are documented, even if no documentation was available. Private # class members and static file members will be hidden unless the # EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES. # Note: This will also disable the warnings about undocumented members that are # normally produced when WARNINGS is set to YES. # The default value is: NO. EXTRACT_ALL = NO # If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will # be included in the documentation. # The default value is: NO. EXTRACT_PRIVATE = NO # If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal # scope will be included in the documentation. # The default value is: NO. EXTRACT_PACKAGE = NO # If the EXTRACT_STATIC tag is set to YES, all static members of a file will be # included in the documentation. # The default value is: NO. EXTRACT_STATIC = NO # If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined # locally in source files will be included in the documentation. If set to NO, # only classes defined in header files are included. Does not have any effect # for Java sources. # The default value is: YES. EXTRACT_LOCAL_CLASSES = YES # This flag is only useful for Objective-C code. If set to YES, local methods, # which are defined in the implementation section but not in the interface are # included in the documentation. If set to NO, only methods in the interface are # included. # The default value is: NO. EXTRACT_LOCAL_METHODS = NO # If this flag is set to YES, the members of anonymous namespaces will be # extracted and appear in the documentation as a namespace called # 'anonymous_namespace{file}', where file will be replaced with the base name of # the file that contains the anonymous namespace. By default anonymous namespace # are hidden. # The default value is: NO. EXTRACT_ANON_NSPACES = NO # If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all # undocumented members inside documented classes or files. If set to NO these # members will be included in the various overviews, but no documentation # section is generated. This option has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_MEMBERS = NO # If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all # undocumented classes that are normally visible in the class hierarchy. If set # to NO, these classes will be included in the various overviews. This option # has no effect if EXTRACT_ALL is enabled. # The default value is: NO. HIDE_UNDOC_CLASSES = NO # If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend # (class|struct|union) declarations. If set to NO, these declarations will be # included in the documentation. # The default value is: NO. HIDE_FRIEND_COMPOUNDS = NO # If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any # documentation blocks found inside the body of a function. If set to NO, these # blocks will be appended to the function's detailed documentation block. # The default value is: NO. HIDE_IN_BODY_DOCS = NO # The INTERNAL_DOCS tag determines if documentation that is typed after a # \internal command is included. If the tag is set to NO then the documentation # will be excluded. Set it to YES to include the internal documentation. # The default value is: NO. INTERNAL_DOCS = NO # If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file # names in lower-case letters. If set to YES, upper-case letters are also # allowed. This is useful if you have classes or files whose names only differ # in case and if your file system supports case sensitive file names. Windows # and Mac users are advised to set this option to NO. # The default value is: system dependent. CASE_SENSE_NAMES = NO # If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with # their full class and namespace scopes in the documentation. If set to YES, the # scope will be hidden. # The default value is: NO. HIDE_SCOPE_NAMES = NO # If the HIDE_COMPOUND_REFERENCE tag is set to NO (default) then doxygen will # append additional text to a page's title, such as Class Reference. If set to # YES the compound reference will be hidden. # The default value is: NO. HIDE_COMPOUND_REFERENCE= NO # If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of # the files that are included by a file in the documentation of that file. # The default value is: YES. SHOW_INCLUDE_FILES = YES # If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each # grouped member an include statement to the documentation, telling the reader # which file to include in order to use the member. # The default value is: NO. SHOW_GROUPED_MEMB_INC = NO # If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include # files with double quotes in the documentation rather than with sharp brackets. # The default value is: NO. FORCE_LOCAL_INCLUDES = NO # If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the # documentation for inline members. # The default value is: YES. INLINE_INFO = YES # If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the # (detailed) documentation of file and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. # The default value is: YES. SORT_MEMBER_DOCS = YES # If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief # descriptions of file, namespace and class members alphabetically by member # name. If set to NO, the members will appear in declaration order. Note that # this will also influence the order of the classes in the class list. # The default value is: NO. SORT_BRIEF_DOCS = NO # If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the # (brief and detailed) documentation of class members so that constructors and # destructors are listed first. If set to NO the constructors will appear in the # respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS. # Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief # member documentation. # Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting # detailed member documentation. # The default value is: NO. SORT_MEMBERS_CTORS_1ST = NO # If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy # of group names into alphabetical order. If set to NO the group names will # appear in their defined order. # The default value is: NO. SORT_GROUP_NAMES = NO # If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by # fully-qualified names, including namespaces. If set to NO, the class list will # be sorted only by class name, not including the namespace part. # Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES. # Note: This option applies only to the class list, not to the alphabetical # list. # The default value is: NO. SORT_BY_SCOPE_NAME = NO # If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper # type resolution of all parameters of a function it will reject a match between # the prototype and the implementation of a member function even if there is # only one candidate or it is obvious which candidate to choose by doing a # simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still # accept a match between prototype and implementation in such cases. # The default value is: NO. STRICT_PROTO_MATCHING = NO # The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo # list. This list is created by putting \todo commands in the documentation. # The default value is: YES. GENERATE_TODOLIST = YES # The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test # list. This list is created by putting \test commands in the documentation. # The default value is: YES. GENERATE_TESTLIST = YES # The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug # list. This list is created by putting \bug commands in the documentation. # The default value is: YES. GENERATE_BUGLIST = YES # The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO) # the deprecated list. This list is created by putting \deprecated commands in # the documentation. # The default value is: YES. GENERATE_DEPRECATEDLIST= YES # The ENABLED_SECTIONS tag can be used to enable conditional documentation # sections, marked by \if ... \endif and \cond # ... \endcond blocks. ENABLED_SECTIONS = # The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the # initial value of a variable or macro / define can have for it to appear in the # documentation. If the initializer consists of more lines than specified here # it will be hidden. Use a value of 0 to hide initializers completely. The # appearance of the value of individual variables and macros / defines can be # controlled using \showinitializer or \hideinitializer command in the # documentation regardless of this setting. # Minimum value: 0, maximum value: 10000, default value: 30. MAX_INITIALIZER_LINES = 30 # Set the SHOW_USED_FILES tag to NO to disable the list of files generated at # the bottom of the documentation of classes and structs. If set to YES, the # list will mention the files that were used to generate the documentation. # The default value is: YES. SHOW_USED_FILES = YES # Set the SHOW_FILES tag to NO to disable the generation of the Files page. This # will remove the Files entry from the Quick Index and from the Folder Tree View # (if specified). # The default value is: YES. SHOW_FILES = YES # Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces # page. This will remove the Namespaces entry from the Quick Index and from the # Folder Tree View (if specified). # The default value is: YES. SHOW_NAMESPACES = YES # The FILE_VERSION_FILTER tag can be used to specify a program or script that # doxygen should invoke to get the current version for each file (typically from # the version control system). Doxygen will invoke the program by executing (via # popen()) the command command input-file, where command is the value of the # FILE_VERSION_FILTER tag, and input-file is the name of an input file provided # by doxygen. Whatever the program writes to standard output is used as the file # version. For an example see the documentation. FILE_VERSION_FILTER = # The LAYOUT_FILE tag can be used to specify a layout file which will be parsed # by doxygen. The layout file controls the global structure of the generated # output files in an output format independent way. To create the layout file # that represents doxygen's defaults, run doxygen with the -l option. You can # optionally specify a file name after the option, if omitted DoxygenLayout.xml # will be used as the name of the layout file. # # Note that if you run doxygen from a directory containing a file called # DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE # tag is left empty. LAYOUT_FILE = DoxygenLayout.xml # The CITE_BIB_FILES tag can be used to specify one or more bib files containing # the reference definitions. This must be a list of .bib files. The .bib # extension is automatically appended if omitted. This requires the bibtex tool # to be installed. See also http://en.wikipedia.org/wiki/BibTeX for more info. # For LaTeX the style of the bibliography can be controlled using # LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the # search path. See also \cite for info how to create references. CITE_BIB_FILES = #--------------------------------------------------------------------------- # Configuration options related to warning and progress messages #--------------------------------------------------------------------------- # The QUIET tag can be used to turn on/off the messages that are generated to # standard output by doxygen. If QUIET is set to YES this implies that the # messages are off. # The default value is: NO. QUIET = YES # The WARNINGS tag can be used to turn on/off the warning messages that are # generated to standard error (stderr) by doxygen. If WARNINGS is set to YES # this implies that the warnings are on. # # Tip: Turn warnings on while writing the documentation. # The default value is: YES. WARNINGS = YES # If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate # warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag # will automatically be disabled. # The default value is: YES. WARN_IF_UNDOCUMENTED = YES # If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for # potential errors in the documentation, such as not documenting some parameters # in a documented function, or documenting parameters that don't exist or using # markup commands wrongly. # The default value is: YES. WARN_IF_DOC_ERROR = YES # This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that # are documented, but have no documentation for their parameters or return # value. If set to NO, doxygen will only warn about wrong or incomplete # parameter documentation, but not about the absence of documentation. # The default value is: NO. WARN_NO_PARAMDOC = NO # If the WARN_AS_ERROR tag is set to YES then doxygen will immediately stop when # a warning is encountered. # The default value is: NO. WARN_AS_ERROR = NO # The WARN_FORMAT tag determines the format of the warning messages that doxygen # can produce. The string should contain the $file, $line, and $text tags, which # will be replaced by the file and line number from which the warning originated # and the warning text. Optionally the format may contain $version, which will # be replaced by the version of the file (if it could be obtained via # FILE_VERSION_FILTER) # The default value is: $file:$line: $text. WARN_FORMAT = "$file:$line: $text" # The WARN_LOGFILE tag can be used to specify a file to which warning and error # messages should be written. If left blank the output is written to standard # error (stderr). WARN_LOGFILE = #--------------------------------------------------------------------------- # Configuration options related to the input files #--------------------------------------------------------------------------- # The INPUT tag is used to specify the files and/or directories that contain # documented source files. You may enter file names like myfile.cpp or # directories like /usr/src/myproject. Separate the files or directories with # spaces. See also FILE_PATTERNS and EXTENSION_MAPPING # Note: If this tag is empty the current directory is searched. # FEniCS note: # The env variable FFC_PATH_FOR_DOXYGEN is defined in the Sphinx conf.py, but # it is not needed for generating SWIG doctrings, only when making Sphinx docs # so in a normal dolfin build it is undefined (which is fine) INPUT = ../dolfin $(FFC_PATH_FOR_DOXYGEN) # This tag can be used to specify the character encoding of the source files # that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses # libiconv (or the iconv built into libc) for the transcoding. See the libiconv # documentation (see: http://www.gnu.org/software/libiconv) for the list of # possible encodings. # The default value is: UTF-8. INPUT_ENCODING = UTF-8 # If the value of the INPUT tag contains directories, you can use the # FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and # *.h) to filter out the source-files in the directories. # # Note that for custom extensions or not directly supported extensions you also # need to set EXTENSION_MAPPING for the extension otherwise the files are not # read by doxygen. # # If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp, # *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h, # *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc, # *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f, *.for, *.tcl, # *.vhd, *.vhdl, *.ucf, *.qsf, *.as and *.js. FILE_PATTERNS = *.cpp *.h # The RECURSIVE tag can be used to specify whether or not subdirectories should # be searched for input files as well. # The default value is: NO. RECURSIVE = YES # The EXCLUDE tag can be used to specify files and/or directories that should be # excluded from the INPUT source files. This way you can easily exclude a # subdirectory from a directory tree whose root is specified with the INPUT tag. # # Note that relative paths are relative to the directory from which doxygen is # run. EXCLUDE = # The EXCLUDE_SYMLINKS tag can be used to select whether or not files or # directories that are symbolic links (a Unix file system feature) are excluded # from the input. # The default value is: NO. EXCLUDE_SYMLINKS = NO # If the value of the INPUT tag contains directories, you can use the # EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude # certain files from those directories. # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories for example use the pattern */test/* EXCLUDE_PATTERNS = pugi* base64* Poisson*.h GenericFile* VTKPlot* VTKWindow* # The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names # (namespaces, classes, functions, etc.) that should be excluded from the # output. The symbol name can be a fully qualified name, a word, or if the # wildcard * is used, a substring. Examples: ANamespace, AClass, # AClass::ANamespace, ANamespace::*Test # # Note that the wildcards are matched against the file with absolute path, so to # exclude all test directories use the pattern */test/* EXCLUDE_SYMBOLS = _* default* lt_coordinate* NotImplemented* # The EXAMPLE_PATH tag can be used to specify one or more files or directories # that contain example code fragments that are included (see the \include # command). EXAMPLE_PATH = # If the value of the EXAMPLE_PATH tag contains directories, you can use the # EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and # *.h) to filter out the source-files in the directories. If left blank all # files are included. EXAMPLE_PATTERNS = # If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be # searched for input files to be used with the \include or \dontinclude commands # irrespective of the value of the RECURSIVE tag. # The default value is: NO. EXAMPLE_RECURSIVE = NO # The IMAGE_PATH tag can be used to specify one or more files or directories # that contain images that are to be included in the documentation (see the # \image command). IMAGE_PATH = # The INPUT_FILTER tag can be used to specify a program that doxygen should # invoke to filter for each input file. Doxygen will invoke the filter program # by executing (via popen()) the command: # # # # where is the value of the INPUT_FILTER tag, and is the # name of an input file. Doxygen will then use the output that the filter # program writes to standard output. If FILTER_PATTERNS is specified, this tag # will be ignored. # # Note that the filter must not add or remove lines; it is applied before the # code is scanned, but not when the output code is generated. If lines are added # or removed, the anchors will not be placed correctly. # # Note that for custom extensions or not directly supported extensions you also # need to set EXTENSION_MAPPING for the extension otherwise the files are not # properly processed by doxygen. INPUT_FILTER = # The FILTER_PATTERNS tag can be used to specify filters on a per file pattern # basis. Doxygen will compare the file name with each pattern and apply the # filter if there is a match. The filters are a list of the form: pattern=filter # (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how # filters are used. If the FILTER_PATTERNS tag is empty or if none of the # patterns match the file name, INPUT_FILTER is applied. # # Note that for custom extensions or not directly supported extensions you also # need to set EXTENSION_MAPPING for the extension otherwise the files are not # properly processed by doxygen. FILTER_PATTERNS = # If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using # INPUT_FILTER) will also be used to filter the input files that are used for # producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES). # The default value is: NO. FILTER_SOURCE_FILES = NO # The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file # pattern. A pattern will override the setting for FILTER_PATTERN (if any) and # it is also possible to disable source filtering for a specific pattern using # *.ext= (so without naming a filter). # This tag requires that the tag FILTER_SOURCE_FILES is set to YES. FILTER_SOURCE_PATTERNS = # If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that # is part of the input, its contents will be placed on the main page # (index.html). This can be useful if you have a project on for instance GitHub # and want to reuse the introduction page also for the doxygen output. USE_MDFILE_AS_MAINPAGE = #--------------------------------------------------------------------------- # Configuration options related to source browsing #--------------------------------------------------------------------------- # If the SOURCE_BROWSER tag is set to YES then a list of source files will be # generated. Documented entities will be cross-referenced with these sources. # # Note: To get rid of all source code in the generated output, make sure that # also VERBATIM_HEADERS is set to NO. # The default value is: NO. SOURCE_BROWSER = NO # Setting the INLINE_SOURCES tag to YES will include the body of functions, # classes and enums directly into the documentation. # The default value is: NO. INLINE_SOURCES = NO # Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any # special comment blocks from generated source code fragments. Normal C, C++ and # Fortran comments will always remain visible. # The default value is: YES. STRIP_CODE_COMMENTS = YES # If the REFERENCED_BY_RELATION tag is set to YES then for each documented # function all documented functions referencing it will be listed. # The default value is: NO. REFERENCED_BY_RELATION = NO # If the REFERENCES_RELATION tag is set to YES then for each documented function # all documented entities called/used by that function will be listed. # The default value is: NO. REFERENCES_RELATION = NO # If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set # to YES then the hyperlinks from functions in REFERENCES_RELATION and # REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will # link to the documentation. # The default value is: YES. REFERENCES_LINK_SOURCE = YES # If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the # source code will show a tooltip with additional information such as prototype, # brief description and links to the definition and documentation. Since this # will make the HTML file larger and loading of large files a bit slower, you # can opt to disable this feature. # The default value is: YES. # This tag requires that the tag SOURCE_BROWSER is set to YES. SOURCE_TOOLTIPS = YES # If the USE_HTAGS tag is set to YES then the references to source code will # point to the HTML generated by the htags(1) tool instead of doxygen built-in # source browser. The htags tool is part of GNU's global source tagging system # (see http://www.gnu.org/software/global/global.html). You will need version # 4.8.6 or higher. # # To use it do the following: # - Install the latest version of global # - Enable SOURCE_BROWSER and USE_HTAGS in the config file # - Make sure the INPUT points to the root of the source tree # - Run doxygen as normal # # Doxygen will invoke htags (and that will in turn invoke gtags), so these # tools must be available from the command line (i.e. in the search path). # # The result: instead of the source browser generated by doxygen, the links to # source code will now point to the output of htags. # The default value is: NO. # This tag requires that the tag SOURCE_BROWSER is set to YES. USE_HTAGS = NO # If the VERBATIM_HEADERS tag is set the YES then doxygen will generate a # verbatim copy of the header file for each class for which an include is # specified. Set to NO to disable this. # See also: Section \class. # The default value is: YES. VERBATIM_HEADERS = YES #--------------------------------------------------------------------------- # Configuration options related to the alphabetical class index #--------------------------------------------------------------------------- # If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all # compounds will be generated. Enable this if the project contains a lot of # classes, structs, unions or interfaces. # The default value is: YES. ALPHABETICAL_INDEX = YES # The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in # which the alphabetical index list will be split. # Minimum value: 1, maximum value: 20, default value: 5. # This tag requires that the tag ALPHABETICAL_INDEX is set to YES. COLS_IN_ALPHA_INDEX = 5 # In case all classes in a project start with a common prefix, all classes will # be put under the same header in the alphabetical index. The IGNORE_PREFIX tag # can be used to specify a prefix (or a list of prefixes) that should be ignored # while generating the index headers. # This tag requires that the tag ALPHABETICAL_INDEX is set to YES. IGNORE_PREFIX = #--------------------------------------------------------------------------- # Configuration options related to the HTML output #--------------------------------------------------------------------------- # If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output # The default value is: YES. GENERATE_HTML = NO # The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a # relative path is entered the value of OUTPUT_DIRECTORY will be put in front of # it. # The default directory is: html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_OUTPUT = html # The HTML_FILE_EXTENSION tag can be used to specify the file extension for each # generated HTML page (for example: .htm, .php, .asp). # The default value is: .html. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FILE_EXTENSION = .html # The HTML_HEADER tag can be used to specify a user-defined HTML header file for # each generated HTML page. If the tag is left blank doxygen will generate a # standard header. # # To get valid HTML the header file that includes any scripts and style sheets # that doxygen needs, which is dependent on the configuration options used (e.g. # the setting GENERATE_TREEVIEW). It is highly recommended to start with a # default header using # doxygen -w html new_header.html new_footer.html new_stylesheet.css # YourConfigFile # and then modify the file new_header.html. See also section "Doxygen usage" # for information on how to generate the default header that doxygen normally # uses. # Note: The header is subject to change so you typically have to regenerate the # default header when upgrading to a newer version of doxygen. For a description # of the possible markers and block names see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_HEADER = # The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each # generated HTML page. If the tag is left blank doxygen will generate a standard # footer. See HTML_HEADER for more information on how to generate a default # footer and what special commands can be used inside the footer. See also # section "Doxygen usage" for information on how to generate the default footer # that doxygen normally uses. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_FOOTER = # The HTML_STYLESHEET tag can be used to specify a user-defined cascading style # sheet that is used by each HTML page. It can be used to fine-tune the look of # the HTML output. If left blank doxygen will generate a default style sheet. # See also section "Doxygen usage" for information on how to generate the style # sheet that doxygen normally uses. # Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as # it is more robust and this tag (HTML_STYLESHEET) will in the future become # obsolete. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_STYLESHEET = # The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined # cascading style sheets that are included after the standard style sheets # created by doxygen. Using this option one can overrule certain style aspects. # This is preferred over using HTML_STYLESHEET since it does not replace the # standard style sheet and is therefore more robust against future updates. # Doxygen will copy the style sheet files to the output directory. # Note: The order of the extra style sheet files is of importance (e.g. the last # style sheet in the list overrules the setting of the previous ones in the # list). For an example see the documentation. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_STYLESHEET = # The HTML_EXTRA_FILES tag can be used to specify one or more extra images or # other source files which should be copied to the HTML output directory. Note # that these files will be copied to the base HTML output directory. Use the # $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these # files. In the HTML_STYLESHEET file, use the file name only. Also note that the # files will be copied as-is; there are no commands or markers available. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_EXTRA_FILES = # The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen # will adjust the colors in the style sheet and background images according to # this color. Hue is specified as an angle on a colorwheel, see # http://en.wikipedia.org/wiki/Hue for more information. For instance the value # 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300 # purple, and 360 is red again. # Minimum value: 0, maximum value: 359, default value: 220. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_HUE = 220 # The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors # in the HTML output. For a value of 0 the output will use grayscales only. A # value of 255 will produce the most vivid colors. # Minimum value: 0, maximum value: 255, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_SAT = 100 # The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the # luminance component of the colors in the HTML output. Values below 100 # gradually make the output lighter, whereas values above 100 make the output # darker. The value divided by 100 is the actual gamma applied, so 80 represents # a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not # change the gamma. # Minimum value: 40, maximum value: 240, default value: 80. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_COLORSTYLE_GAMMA = 80 # If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML # page will contain the date and time when the page was generated. Setting this # to YES can help to show when doxygen was last run and thus if the # documentation is up to date. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_TIMESTAMP = NO # If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML # documentation will contain sections that can be hidden and shown after the # page has loaded. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_DYNAMIC_SECTIONS = NO # With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries # shown in the various tree structured indices initially; the user can expand # and collapse entries dynamically later on. Doxygen will expand the tree to # such a level that at most the specified number of entries are visible (unless # a fully collapsed tree already exceeds this amount). So setting the number of # entries 1 will produce a full collapsed tree by default. 0 is a special value # representing an infinite number of entries and will result in a full expanded # tree by default. # Minimum value: 0, maximum value: 9999, default value: 100. # This tag requires that the tag GENERATE_HTML is set to YES. HTML_INDEX_NUM_ENTRIES = 100 # If the GENERATE_DOCSET tag is set to YES, additional index files will be # generated that can be used as input for Apple's Xcode 3 integrated development # environment (see: http://developer.apple.com/tools/xcode/), introduced with # OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a # Makefile in the HTML output directory. Running make will produce the docset in # that directory and running make install will install the docset in # ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at # startup. See http://developer.apple.com/tools/creatingdocsetswithdoxygen.html # for more information. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_DOCSET = NO # This tag determines the name of the docset feed. A documentation feed provides # an umbrella under which multiple documentation sets from a single provider # (such as a company or product suite) can be grouped. # The default value is: Doxygen generated docs. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_FEEDNAME = "Doxygen generated docs" # This tag specifies a string that should uniquely identify the documentation # set bundle. This should be a reverse domain-name style string, e.g. # com.mycompany.MyDocSet. Doxygen will append .docset to the name. # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_BUNDLE_ID = org.doxygen.Project # The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify # the documentation publisher. This should be a reverse domain-name style # string, e.g. com.mycompany.MyDocSet.documentation. # The default value is: org.doxygen.Publisher. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_ID = org.doxygen.Publisher # The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher. # The default value is: Publisher. # This tag requires that the tag GENERATE_DOCSET is set to YES. DOCSET_PUBLISHER_NAME = Publisher # If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three # additional HTML index files: index.hhp, index.hhc, and index.hhk. The # index.hhp is a project file that can be read by Microsoft's HTML Help Workshop # (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on # Windows. # # The HTML Help Workshop contains a compiler that can convert all HTML output # generated by doxygen into a single compiled HTML file (.chm). Compiled HTML # files are now used as the Windows 98 help format, and will replace the old # Windows help format (.hlp) on all Windows platforms in the future. Compressed # HTML files also contain an index, a table of contents, and you can search for # words in the documentation. The HTML workshop also contains a viewer for # compressed HTML files. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_HTMLHELP = NO # The CHM_FILE tag can be used to specify the file name of the resulting .chm # file. You can add a path in front of the file if the result should not be # written to the html output directory. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. CHM_FILE = # The HHC_LOCATION tag can be used to specify the location (absolute path # including file name) of the HTML help compiler (hhc.exe). If non-empty, # doxygen will try to run the HTML help compiler on the generated index.hhp. # The file has to be specified with full path. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. HHC_LOCATION = # The GENERATE_CHI flag controls if a separate .chi index file is generated # (YES) or that it should be included in the master .chm file (NO). # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. GENERATE_CHI = NO # The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc) # and project file content. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. CHM_INDEX_ENCODING = # The BINARY_TOC flag controls whether a binary table of contents is generated # (YES) or a normal table of contents (NO) in the .chm file. Furthermore it # enables the Previous and Next buttons. # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. BINARY_TOC = NO # The TOC_EXPAND flag can be set to YES to add extra items for group members to # the table of contents of the HTML help documentation and to the tree view. # The default value is: NO. # This tag requires that the tag GENERATE_HTMLHELP is set to YES. TOC_EXPAND = NO # If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and # QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that # can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help # (.qch) of the generated HTML documentation. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_QHP = NO # If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify # the file name of the resulting .qch file. The path specified is relative to # the HTML output folder. # This tag requires that the tag GENERATE_QHP is set to YES. QCH_FILE = # The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help # Project output. For more information please see Qt Help Project / Namespace # (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#namespace). # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_QHP is set to YES. QHP_NAMESPACE = org.doxygen.Project # The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt # Help Project output. For more information please see Qt Help Project / Virtual # Folders (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#virtual- # folders). # The default value is: doc. # This tag requires that the tag GENERATE_QHP is set to YES. QHP_VIRTUAL_FOLDER = doc # If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom # filter to add. For more information please see Qt Help Project / Custom # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- # filters). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_CUST_FILTER_NAME = # The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the # custom filter to add. For more information please see Qt Help Project / Custom # Filters (see: http://qt-project.org/doc/qt-4.8/qthelpproject.html#custom- # filters). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_CUST_FILTER_ATTRS = # The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this # project's filter section matches. Qt Help Project / Filter Attributes (see: # http://qt-project.org/doc/qt-4.8/qthelpproject.html#filter-attributes). # This tag requires that the tag GENERATE_QHP is set to YES. QHP_SECT_FILTER_ATTRS = # The QHG_LOCATION tag can be used to specify the location of Qt's # qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the # generated .qhp file. # This tag requires that the tag GENERATE_QHP is set to YES. QHG_LOCATION = # If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be # generated, together with the HTML files, they form an Eclipse help plugin. To # install this plugin and make it available under the help contents menu in # Eclipse, the contents of the directory containing the HTML and XML files needs # to be copied into the plugins directory of eclipse. The name of the directory # within the plugins directory should be the same as the ECLIPSE_DOC_ID value. # After copying Eclipse needs to be restarted before the help appears. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_ECLIPSEHELP = NO # A unique identifier for the Eclipse help plugin. When installing the plugin # the directory name containing the HTML and XML files should also have this # name. Each documentation set should have its own identifier. # The default value is: org.doxygen.Project. # This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES. ECLIPSE_DOC_ID = org.doxygen.Project # If you want full control over the layout of the generated HTML pages it might # be necessary to disable the index and replace it with your own. The # DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top # of each HTML page. A value of NO enables the index and the value YES disables # it. Since the tabs in the index contain the same information as the navigation # tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. DISABLE_INDEX = NO # The GENERATE_TREEVIEW tag is used to specify whether a tree-like index # structure should be generated to display hierarchical information. If the tag # value is set to YES, a side panel will be generated containing a tree-like # index structure (just like the one that is generated for HTML Help). For this # to work a browser that supports JavaScript, DHTML, CSS and frames is required # (i.e. any modern browser). Windows users are probably better off using the # HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can # further fine-tune the look of the index. As an example, the default style # sheet generated by doxygen has an example that shows how to put an image at # the root of the tree instead of the PROJECT_NAME. Since the tree basically has # the same information as the tab index, you could consider setting # DISABLE_INDEX to YES when enabling this option. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. GENERATE_TREEVIEW = NO # The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that # doxygen will group on one line in the generated HTML documentation. # # Note that a value of 0 will completely suppress the enum values from appearing # in the overview section. # Minimum value: 0, maximum value: 20, default value: 4. # This tag requires that the tag GENERATE_HTML is set to YES. ENUM_VALUES_PER_LINE = 4 # If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used # to set the initial width (in pixels) of the frame in which the tree is shown. # Minimum value: 0, maximum value: 1500, default value: 250. # This tag requires that the tag GENERATE_HTML is set to YES. TREEVIEW_WIDTH = 250 # If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to # external symbols imported via tag files in a separate window. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. EXT_LINKS_IN_WINDOW = NO # Use this tag to change the font size of LaTeX formulas included as images in # the HTML documentation. When you change the font size after a successful # doxygen run you need to manually remove any form_*.png images from the HTML # output directory to force them to be regenerated. # Minimum value: 8, maximum value: 50, default value: 10. # This tag requires that the tag GENERATE_HTML is set to YES. FORMULA_FONTSIZE = 10 # Use the FORMULA_TRANPARENT tag to determine whether or not the images # generated for formulas are transparent PNGs. Transparent PNGs are not # supported properly for IE 6.0, but are supported on all modern browsers. # # Note that when changing this option you need to delete any form_*.png files in # the HTML output directory before the changes have effect. # The default value is: YES. # This tag requires that the tag GENERATE_HTML is set to YES. FORMULA_TRANSPARENT = YES # Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see # http://www.mathjax.org) which uses client side Javascript for the rendering # instead of using pre-rendered bitmaps. Use this if you do not have LaTeX # installed or if you want to formulas look prettier in the HTML output. When # enabled you may also need to install MathJax separately and configure the path # to it using the MATHJAX_RELPATH option. # The default value is: NO. # This tag requires that the tag GENERATE_HTML is set to YES. USE_MATHJAX = NO # When MathJax is enabled you can set the default output format to be used for # the MathJax output. See the MathJax site (see: # http://docs.mathjax.org/en/latest/output.html) for more details. # Possible values are: HTML-CSS (which is slower, but has the best # compatibility), NativeMML (i.e. MathML) and SVG. # The default value is: HTML-CSS. # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_FORMAT = HTML-CSS # When MathJax is enabled you need to specify the location relative to the HTML # output directory using the MATHJAX_RELPATH option. The destination directory # should contain the MathJax.js script. For instance, if the mathjax directory # is located at the same level as the HTML output directory, then # MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax # Content Delivery Network so you can quickly see the result without installing # MathJax. However, it is strongly recommended to install a local copy of # MathJax from http://www.mathjax.org before deployment. # The default value is: http://cdn.mathjax.org/mathjax/latest. # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_RELPATH = http://cdn.mathjax.org/mathjax/latest # The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax # extension names that should be enabled during MathJax rendering. For example # MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_EXTENSIONS = # The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces # of code that will be used on startup of the MathJax code. See the MathJax site # (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an # example see the documentation. # This tag requires that the tag USE_MATHJAX is set to YES. MATHJAX_CODEFILE = # When the SEARCHENGINE tag is enabled doxygen will generate a search box for # the HTML output. The underlying search engine uses javascript and DHTML and # should work on any modern browser. Note that when using HTML help # (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET) # there is already a search function so this one should typically be disabled. # For large projects the javascript based search engine can be slow, then # enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to # search using the keyboard; to jump to the search box use + S # (what the is depends on the OS and browser, but it is typically # , /